In his book "Einstein's mistakes" H. C. Ohanian, the author, holds that Einstein delivered 7 proofs for $E=mc^2$ in his life that all were in some way incorrect. This despite the fact that correct proves had been published and mistakes in his proofs were sometimes pointed out to him.
The first proof, for example, contains a circular line of thought, in that it falsely assumes special relativity to be compatible with rigid bodies.
Reference: V. Icke, professor of theoretical astronomy at Leiden University, states in his (Dutch) book 'Niks relatief': "Einstein made an error in his calculation of 1905".
I found a reference on the internet discussing rigid body motion in special relativity. It quotes from Einstein's 1905 paper: "Let there be given a stationary rigid rod ...". The referenced paper shows that dealing with rigid bodies in special relativity is at least really complicated, so one could argue that a proof of $E=mc^2$ should not use a rigid body.
Do you think Ohanian's statement is true or was/is he biased in his opinion?
closed as not constructive by Manishearth May 23 '13 at 21:11
Terence Tao has written on Einstein's derivation here terrytao.wordpress.com/2007/12/28/einsteins-derivation-of-emc2 – j.c. Nov 24 '10 at 22:12
(alert: rhetorical question) do you think Ohanian was right? And more importantly, can you back up your opinion with a reasoned argument? If you have a specific objection to one of Einstein's derivations of $E = mc^2$, you can certainly ask about that here, but this is not the place to poll the community to see who agrees with such-and-such an opinion. Also, asking whether a certain person was biased or not is not a physics question; it strikes me as more of a history question. – David Z Nov 27 '10 at 4:14
I'm not sure of the merits of this question, but it does seem that the word "arrogant" is being thrown around a lot in the answers. I think we can all agree that all physicists, as a rule of thumb, are arrogant to some degree (or considered to be so by the general public) if only by virtue of the fact that our resumes contain references to such things as grand unification and theory of everything. So it would be nice if we could stick to debating this question on its merits alone. Just my contribution from the peanut gallery. – user346 Jan 13 '11 at 18:18
I'm downvoting the question. It tells us there's a book that contains a specific argument, and asks us for our opinions on that argument. But we don't have access to the book, and most of us don't speak Dutch, so we wouldn't be able to read the book if we did have access to it. The question can't be answered unless Gerard tells us what the mysterious argument is. In general, I have not been impressed with the material I've seen from the Ohanian book. Specifically, the discussion of length contraction and W.F.G. Swann is totally bogus. – Ben Crowell Aug 14 '11 at 21:38
There are some strange inconsistencies in the question. "The first proof" would have to refer to the 1905 paper titled "Does the inertia of a body depend upon its energy content?," but this date would contradict the 1906 date in the title of the Icke book. In discussion below, Gerard says, "His proof involves a photon that hits a rigid body." The 1905 paper doesn't involve photons at all, it discusses only emission of light, not absorption, and it never mentions a rigid body. – Ben Crowell Aug 15 '11 at 0:00
6 Answers
I will exaggerate a bit, but in physics, proof in the sense of mathematical proof is irrelevant. Even if all of Einstein's deductions of the formula were wrong, it still turns out that empirical evidence supports $E=mc^2$.
Now, without the exaggeration: mathematical deduction is important in physical theories because it shows us how conclusions and principles hang together, which can be important when elaborating further theories. Imagine, for the sake of argument, that the relativity principle turned out to be incorrect, yet $E=mc^2$ still held. Since we usually deduce the latter from the former, something interesting would be going on: it would mean that $E=mc^2$ is more fundamental than the principle of relativity. Special relativity itself arose from this kind of consideration, the realization that the invariance of the speed of light is more fundamental than the invariance of time, the latter being only approximately true at low speeds.
EDIT: I still wasn't able to read what Ohanian is saying in particular, but it is no secret that Einstein was not a great mathematician. For instance, if it was not for the help of his friend Marcel Grossmann, Einstein might never have been able to develop the theory of general relativity. From his intuition about the equivalence principle in 1905 to the actual GR in 1915, he had to toil for 10 years with non-Euclidean geometry. In the meantime, he nearly got overtaken by David Hilbert. (See Marek's comment.)
Corrections: Einstein was far from a lousy mathematician; he just wasn't a great one. Also, Hilbert was nowhere close. He was a (rather arrogant) mathematician, and a somewhat mediocre physicist. – Noldorin Nov 24 '10 at 19:12
@Raskolnikov: My oh my, what is this with all the anti-Einstein and anti-Hilbert sentiment? Almost nothing that has been said here is true. Einstein wasn't overtaken by Hilbert at all. Hilbert just wrote down the E-H action, but this was only in 1915, when he was already familiar with all of Einstein's work. Not only that, but he himself gave all the credit to Einstein. – Marek Nov 26 '10 at 0:35
@Noldorin: Seems to me that you're quite quick to call people arrogant without actually knowing anything about them. What Hilbert said is actually common knowledge, and he obviously meant only the fact that many physicists are not really able to handle the mathematics required for modern physics and that they don't care for proofs and formal correctness. Actually, it can also be said that "mathematics is too hard for mathematicians" because many of them don't have the physical intuition :-) In any case, take it easy and don't be so quick to judge others ;-) – Marek Nov 26 '10 at 0:44
@Noldorin: nobody's telling you what to do. I am just saying that you're a little uptight and it wouldn't hurt if you took things a little easier. It is just my advice and you're certainly free to ignore it; but you can count on me arguing with you again simply because I don't like your judgemental behavior. I don't think one has to be a moderator to point out obvious flaws in your argumentation. – Marek Nov 26 '10 at 17:43
And just a little note, @Noldorin: when I told Hilbert's statement to my theoretical physics friends some time ago, all of them laughed and agreed. I assume the same situation happened long ago with Hilbert's physicist friends. It's quite a pity you see arrogance here. But of course, you are free to hate whomever you want, me and Hilbert included :-) – Marek Nov 26 '10 at 17:46
I remember reading Einstein's original paper on this, and it seemed to be argued pretty clearly. I believe he considers a scenario involving the emission and absorption of photons, and uses the length/time dilation factors to get an expression for energy, which he then takes to the classical (Newtonian) limit and equates with $\frac{1}{2}mv^2$ to show the relation.
The proof in the original paper assumes the existence of rigid bodies, so that proof cannot suffice! – Gerard Nov 24 '10 at 22:57
It's easily extended to any sort of body though. – Noldorin Nov 25 '10 at 0:33
A circular line of thought is quite a serious error in a proof. It can be fixed, but still ... – Gerard Nov 25 '10 at 15:25
You're accusing Einstein of circular thought? Wow, you have nerve. – Noldorin Nov 25 '10 at 17:46
@Gerard: I've gone through the argument and haven't found any reference to a rigid body. Several other people here have also failed to find any reference to a rigid body. Nobody has posted a specific description of where in the paper any such assumption is made. If you detect such a hidden assumption, please tell us where you think it is. – Ben Crowell Aug 14 '11 at 21:31
In special relativity, it's only accelerating bodies which are not allowed to be rigid. Non-accelerating bodies don't have any forces on them, so there is no obstacle to their retaining the same shape. I haven't checked, but I believe Einstein's intuitive derivation of relativity didn't involve any accelerating bodies.
His proof involves a photon that hits a rigid body. – Gerard Nov 25 '10 at 18:18
... which then accelerates it an infinitesimal amount, thus making it infinitesimally non-rigid? If you stick in some $\epsilon$'s and $\delta$'s, this should even be a good enough proof for mathematicians. Of course, he didn't, so I guess it's not quite a rigorous proof, but by physics standards it definitely passes. – Peter Shor Nov 25 '10 at 19:46
See my edit for a reference from a professor in theoretical astronomy and strong advocate of A.E. Can you prove him wrong? – Gerard Nov 26 '10 at 8:15
@Gerard: you should restate the argument of Vincent Icke, and not arrogantly point @Peter to a book written in Dutch, expecting that @Peter learn Dutch (if he doesn't already speak it), read the book, find in it the place where you think the argument is stated, and refute it. He already gave you a quite explicit explanation of why the rigid body problem is not so difficult. I don't think your bad usage of the argument from authority has a place here, and I'm definitely sure it has no place against Peter. – Frédéric Grosshans Dec 3 '10 at 13:49
@Gerard: "His proof involves a photon that hits a rigid body." You (and Icke) seem to be referring to a different argument, not the 1905 one. See my comments on your question. – Ben Crowell Aug 15 '11 at 1:09
Einstein's proof did not rely on having a rigid body. It relied only on having a body with mass (obviously). To be more clear:
• The paper only says "body".
• It does not rely on any rigid-body property (such as its size).
• It does not rely on any relativistic speed or condition on the body.
The proof merely involves how the body's energy is measured by an observer at rest relative to the body and by one moving uniformly relative to it.
Let there be a stationary body in the system $(x, y, z)$, and let its energy, referred to the system $(x, y, z)$, be $E_0$. Let the energy of the body relative to the system $(\xi,\eta,\zeta)$, moving as above with the velocity $v$, be $H_0$.
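For reference, the heart of the published argument is a single bookkeeping step. In modern notation (the 1905 paper writes $V$ for $c$), with $L$ the total energy of the emitted light and $E_1$, $H_1$ the corresponding energies after emission, the paper finds

$$H_0 - E_0 - (H_1 - E_1) = L\left(\frac{1}{\sqrt{1-v^2/c^2}} - 1\right) \approx \frac{1}{2}\frac{L}{c^2}v^2,$$

so the body's kinetic energy in the moving frame drops exactly as if its mass had decreased by $L/c^2$.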
In fact the paper is really easy and clear... :-)
Nonsense, the paper is not really easy and clear; hence the controversy. – Physiks lover Nov 14 '12 at 0:25
Oh, you think it's really easy and clear? I suggest you have a proper read of it! – Larry Harson Jan 28 '13 at 16:20
The argument is summarized on the Wikipedia page "Mass–energy equivalence", and it goes like this: imagine a body at rest which then emits two equal photons, one to the right and one to the left. In the rest frame, the body stays at rest because the photons have equal and opposite momenta.
Now shift to a frame moving to the right. In this frame, the photon moving to the left is blueshifted and carries more momentum, while the photon moving to the right is redshifted and carries less. This means that the body has lost some momentum in the emission. Its velocity is the same before and after, because the velocity is unchanged in the rest frame, so how could the body lose momentum without changing its velocity? It must have lost mass. If you calculate the lost mass, it equals the energy of the photons divided by $c^2$.
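To make the bookkeeping explicit, here is the momentum balance in the moving frame (a short sketch; $E$ denotes the total energy of the two photons and $v$ the frame speed, with the standard relativistic Doppler factor applied to each photon):

$$\Delta p = \frac{E/2}{c}\sqrt{\frac{1+v/c}{1-v/c}} - \frac{E/2}{c}\sqrt{\frac{1-v/c}{1+v/c}} = \frac{E}{c^2}\,\frac{v}{\sqrt{1-v^2/c^2}}.$$

Since the body's velocity $v$ is unchanged, its momentum $\gamma m v$ can only drop by this amount if its mass drops by $\Delta m = E/c^2$.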
This argument is obviously correct and essentially rigorous (it requires a precise framework to state rigorously, but there are no imprecise assumptions). The bickering came about because it was Einstein who demonstrated this and not anyone else; everyone else thought that the mass-energy relation was $E=\frac{4}{3}mc^2$. Soon after, Poincaré realized what everyone's mistake was.
Planck also derived $E=mc^2$, and published after Einstein (though I would bet his work was independent). He simply refused to accept that Einstein's argument was correct, and said his own argument was the correct one. This was possibly because of his bitterness at being scooped on such an important result. From this attack come all the later error claims.
Ohanian's book in general gets everything wrong. Here is a complete list of Einstein's mistakes (I put an expanded version of this on Wikipedia years ago, but it slowly got reworded, watered down, and moved. That gradual process, of course, was the work of Satan):
Einstein's mistakes
• 1905: In the original German version of the special relativity paper, Einstein gives the transverse mass as $m/(1 - v^2/c^2)$, while the actual value is $m/\sqrt{1 - v^2/c^2}$ (Max Planck corrected this).
• 1905: In his PhD dissertation, the friction in dilute solutions has a miscalculated numerical prefactor, which makes the estimate of Avogadro's number off by a factor of 3. Einstein corrected the mistake in a later publication.
• 1905: An expository paper explaining how airplanes fly includes an incorrect example: a wing which Einstein claims will generate lift. The wing is flat on the bottom and flat on the top, with a small bump at the center, and is designed to generate lift by Bernoulli's principle. Simple action-reaction considerations, though, show that the wing will not generate lift, at least if it is long enough.
• 1913: Einstein started writing papers based on his belief that the hole argument made general covariance impossible in a theory of gravity. He realized he was wrong in 1915, and found general relativity.
• 1922: Einstein published a qualitative theory of superconductivity based on the vague idea of electrons quantum-mechanically shared in orbits. This paper predated modern quantum mechanics, and is well understood to be completely wrong. Einstein's paper is more of an old-quantum-mechanical version of the modern explanation of ordinary conductivity.
• 1937: Einstein believed that the focusing properties of geodesics in general relativity would lead to an instability which causes plane gravitational waves to collapse in on themselves. While this is true to a certain extent in some limits, because gravitational instabilities can lead to a concentration of energy density into black holes, for plane waves of the type Einstein and Rosen considered in their paper, the instabilities are under control. Einstein retracted this position a short time later, but his collaborator Nathan Rosen maintained until his death that gravitational waves are unstable.
• 1939: Einstein denied several times, the last time in print, that black holes could form. He published a paper arguing that a collapsing star would spin faster and faster, spinning at the speed of light with infinite energy well before the point where it is about to collapse into a black hole. This paper received no citations, and its conclusions are well understood to be wrong. Einstein's argument itself is inconclusive, since he only shows that stable spinning objects have to spin faster and faster to stay stable as they approach the point of collapse. But it is well understood today (and was understood well by some even then) that collapse cannot happen through stationary states the way Einstein imagined.
There are other mistakes that are not really mistakes, but philosophical positions:
• In the Bohr-Einstein debates and the papers following this, Einstein tries to poke holes in the uncertainty principle, ingeniously, but unsuccessfully.
• In the EPR paper, Einstein concludes that quantum mechanics must be replaced by local hidden variables. The measured violations of Bell’s inequality show that hidden variables, if they exist, must be nonlocal.
• Einstein considered the cosmological constant a mistake, but the cosmological constant is necessary within general relativity as it is currently understood, and it is widely believed to have a nonzero value today.
He had lapses in taste too, usually quickly corrected:
• Einstein briefly flirted with transverse and longitudinal mass concepts, before rejecting them.
• Einstein initially opposed Minkowski’s geometrical formulation of special relativity, changing his mind completely a few years later.
• Based on his cosmological model, Einstein rejected the expanding-universe solutions of Friedmann and Lemaître as unphysical, changing his mind when the universe was shown to be expanding a few years later.
• Finding it too formal, Einstein believed that Heisenberg's matrix mechanics was incorrect. He changed his mind when Schrödinger and others demonstrated that the formulation in terms of the Schrödinger equation, based on Einstein's wave-particle duality, was equivalent to Heisenberg's matrices.
• Einstein rejected work on black holes by Chandrasekhar, Oppenheimer, and others, believing, along with Eddington, that collapse past the horizon (then called the 'Schwarzschild singularity') would never happen. So great was his influence that this opinion was not rejected until the early 1960s, almost a decade after his death.
• Einstein believed that some sort of nonlinear instability could lead to a field theory whose solutions would collapse into pointlike objects which would behave like quantum particles. This is impossible by Bell’s inequality.
It is sometimes claimed that the general line of Einstein’s reasoning in the 1905 relativity paper is flawed, or the photon paper, or one or another of the most famous papers. Those claims are all ridiculous.
Just a small point about wings. Pilots learn to think in terms of the momentum of the downdraft created by a wing. Bernoulli is only the reason why the air over the top of the wing is sucked downward. – Mike Dunlavey Sep 13 '11 at 13:42
@Mike: yes, of course, this is why Einstein's example is not very good. It is remarkable that he wasn't thinking this way in 1905. – Ron Maimon Nov 28 '11 at 8:44
The answers do not address what Ohanian said. His paper is a free download. As far as I know, Ohanian has not been refuted.
His paper is so trivial to refute, it is hardly worth the bother: he is saying that Einstein is wrong to use the nonrelativistic expression for energy in his argument, but the velocity of the object that Einstein is considering is entirely due to shifting reference frame, and the velocity of the shift can be infinitesimally small. In addition, while Einstein might have asked "what is the kinetic energy of the body?" to determine the loss of mass, the system also loses mass when you ask "what is the linear momentum of the body?" (as explained in my answer). The only relativistic thing is the light. – Ron Maimon Aug 16 '11 at 4:19
Mike Lazaridis: The power of ideas
Saturday 9 June 2012 12:22PM
Mike Lazaridis is known as a visionary, an innovator, and an extraordinary engineer. In this address, at the 2012 Association for the Advancement of Science meeting in Vancouver, Mike Lazaridis traces his passion for ideas and knowledge about how things work. He founded Research in Motion, the company which developed the BlackBerry. Lazaridis describes the value of fundamental research and says courage and boldness are required to continue funding research in hard economic times. The rewards can rarely be foreseen. He cites quantum mechanics and relativity as theories which would change the world, but no one at the time knew how. Nor were the benefits of the laser, the semiconductor, the computer, the internet, medical imagery, satellites or the BlackBerry appreciated when these concepts and devices were first developed.
Robyn Williams: Mike Lazaridis brought us the BlackBerry, the phone that preceded all those i-things. It made him a lot of money and led to a large firm called RIM, Research in Motion, based in Waterloo, Ontario, that makes many communication devices. Mike Lazaridis also set up the Perimeter Institute where physics is done and quantum computers may be designed. Among its alumni is Stephen Hawking. Here, Mike Lazaridis talks at the American Association for the Advancement of Science meeting in Vancouver. He's introduced by Bill Press, president-elect of the AAAS.
William Press: It's my great pleasure to introduce this inspiring visionary who, as you will learn, has a proven track record of thinking, tinkering and building the future. Mike Lazaridis is the instigator of the smart phone revolution, having launched wireless email technologies including the BlackBerry, that reshaped global communications, spawned new industries, and connected us to each other like never before.
And while we know Mike is appreciated throughout the global wireless community as an innovator, did you know that he has excelled in many other areas? For example, he holds an Emmy award and an Academy award for technical achievements tied to a device called DigiSync that he developed. Mike traces his passion for thinking, tinkering and building the future to his boyhood home, where his love of science and fascination with electronics were nurtured in supportive family and school environments.
This passion flourished in the Waterloo region of Ontario, which is a very entrepreneurial community, it's home to over 800 technology companies, two universities, a college and a host of world-leading research centres. This is where, while in university, Mike founded Research in Motion, which revolutionised the world with the smart phone phenomenon. And through this success Mike found himself in a position to be able to give back to the community, his country and the world, and he has very much done so.
Over the last decade, through his vision, respect for science and personal philanthropy, Mike has scaled up our ability as a society for the same thinking, tinkering and building the future. I want to give you just three examples. Mike founded the Perimeter Institute for Theoretical Physics, an international hub of people who think about our world at the most fundamental level. Mike and Perimeter together then instigated the Institute for Quantum Computing, where experimenters build on the theory to tinker in areas of quantum technologies.
And most recently Mike helped give rise to the new Quantum Nano Centre which is aimed at prototyping new materials that can become the building blocks of the future. Mike is the recipient of many awards. He was named Canada's Nation Builder of the Year in 2002 by the readers of The Globe and Mail. And he was honoured in this country as an officer of the Order of Canada and a member of the Order of Ontario.
It's a great pleasure for me to welcome this game changing thinker to share a few minutes with us, provide some personal perspectives. Let's give a warm welcome to Mike Lazaridis.
Mike Lazaridis: Thank you very much, Bill, for that kind introduction. I'm very honoured to be among all of you here today and by video in the overflow rooms as part of the program that recognises a need to build a global knowledge society with the power of ideas.
Before we get started, I'd like to ask all of you a question. Just think about it. What's your most prized possession? Do you want to know what's mine? Let me give you a hint. It's something I take with me wherever I go. It's something that no one can take away from me, it can't be lost and it can't be stolen. It's something that appreciates every time I use it, and it's helped me build Canada's largest technology company and a global iconic brand employing over 15,000 talented and hard-working employees. Have you guessed it? It's my education!
Robyn Williams: You're listening to The Science Show on RN, Mike Lazaridis of RIM industries based in Waterloo, Canada, talking at the AAAS in Vancouver.
Mike Lazaridis: All right, I have a confession to make. Not everyone would understand, but I think those in this room would. When I was a kid I couldn't get enough of electronics and building things. I spent a lot of time in my family's basement and school workshops and libraries. I liked everything about those times, but what I was most fascinated with was the way mathematics and reason could so accurately explain nature, that equations could be used to build stuff that worked, even when I couldn't actually see the things that they're made of, like the electrons and atoms involved. I learned how electrical circuits worked from those books and teachers and got an intuitive feel for how those electrical circuits worked in that basement and school workshops and labs. That's how I got my first taste of an inventor's life, experimenting with electronics. There's nothing like seeing the power of ideas when they are applied in the real world.
In high school I got lucky. I was taking the various honours maths and science courses that I needed for university, but I didn't want to give up on the spirit of building things, so I also took shop. We had a phenomenal electronics and shop program at my high school. There were two electronic shops, two auto shops, two machine shops, an architectural shop, a mechanical drafting room, and a woodworking shop.
As it happened, our high school had just received a donation of state-of-the-art electronics equipment from a local industrialist. It had just arrived and was still in boxes. Of course I begged to open those boxes and get the equipment out. Mr Micsinszki, my electronic shop teacher, told me I could, on one condition: I had to read the manuals first and prove to him that I understood them.
That may not sound like a big deal, but for a 10th grader it was pretty challenging. The manual for the dual-channel oscilloscope, that was a very tricky one. I spent every hour I could in that shop while trying to keep up my grades in my other classes. It was hard work. I'll always be grateful that Mr Micsinszki made me do it.
By summer I had opened the box of every piece of equipment in that laboratory. It was one of the defining experiences of my life. The manuals and equipment showed me the connection between the abstract math and science concepts I was learning upstairs, and the devices I could touch and do cool things with downstairs. You see, this was just at the beginning of the separation between honours students and shop students. There was already an upstairs/downstairs mentality to fight against. Those of us with a foot in both programs tried to explain to our teachers in the maths and computer science classrooms what we were learning downstairs in the shops. I wound up giving lectures to the senior maths classes, showing them how the trigonometry we were learning upstairs could be applied to power generation, power control and power transformation.
Mr Micsinszki was also the president of the local amateur TV and radio club. He had us taking apart old televisions and radios to convert their tuners to the amateur band. I wasn't as keen on that. I was busy by then with my friend Doug Fregin learning about the fundamentals of computers, how to build logic gates, how to build memory arrays, how to build registers, how to wire them all into micro sequencers. Stuff, by the way, that came in really handy later on.
The other thing that Mr Micsinszki taught me was to be careful not to limit myself to only computers, not to focus too narrowly. What he said was that in the future, electronics, computers and wireless were all going to combine, and that's going to be the next big thing. Imagine that. Mr Micsinszki was a visionary, and his words would inspire Doug and me long into our careers.
In 1980 at the University of Waterloo I studied electrical engineering and computer science. We were exposed to cutting-edge research and courses in RF, microprocessors, networking, real-time operating systems, compilers, graphics, semiconductor design, and digital signal processing. As it turned out, all the tools I would need to lead the smart phone revolution years later.
While studying engineering, I had the good fortune to take a course on relativity and quantum mechanics in my second year. The professor, Lynn Watt, was an amazing guy. He had studied with Enrico Fermi. He told us, 'I have to teach you this course for your accreditation, and I will, but once a week in the evenings I'm going to hold evening seminars on the latest discoveries, although you don't have to come.' Those optional night classes were packed. We had intense discussions and arguments late into the night. It was the beginning of my own deep fascination with relativity and quantum mechanics.
By day we learned about how exploring seemingly useless questions, like why things change colour when they heat up, had led to quantum mechanics, and from there to transistors and semiconductors. We learned about the Schrödinger equation and how amazingly accurate it was as a calculational tool, becoming the basis of modern electronics.
In the night seminars we learned about the foundations of quantum mechanics, that quantum mechanics also implied that bizarre things could happen. Like in some situations, pairs of tiny quantum particles, like photons, could be entangled with each other, and once they were, no matter how far apart they were separated later, they would react in correlated and instantaneous ways with each other. For decades physicists had been arguing about this. Was it really true? Einstein thought it wasn't. Others were sure that it was.
Amazingly, during the time I was going to those night lectures, we found out that it was true. A physicist named Alain Aspect had just performed experiments with photons that showed entanglement was real. Einstein was wrong! It blew our minds. As engineers we all wondered and argued way into the night. How the heck is this stuff going to be useful? Where exactly was the value in it? We didn't know. No one knew. But to me it was upstairs/downstairs all over again. It was the honours courses and the shop, only this time it was the other way around. It was easy to think that something as esoteric as quantum entanglement couldn't possibly be useful to the real-world stuff we engineers wanted to build and do. But maybe, just maybe, reading about quantum entanglement was like reading a manual that had fallen through from the future. If we were bold, we just might have a part in the coming quantum information age.
Through a lot of inventing, building and hard work, I was able to launch a company that ultimately put me in a personal situation to give back to my community, to Canada, and to science and education globally. I wanted to do something that would make a better future. What could I do that would get to the heart of value creation, that could fuel opportunities for the kinds of revolutionary creativity, discovery and technological innovation that we've seen in the past?
Remember that glimpse of a manual from the future? I called up my old Professor Lynn Watt and got together some other very bright people and formed an institute devoted to tackling some of the toughest problems in foundational theoretical physics. With the support of the federal and provincial governments which provide matching funds, the goodwill of local and regional governments, and a growing list of private partners, we created Perimeter Institute for Theoretical Physics. It was anchored around two pillars of 20th-century physics; relativity and quantum mechanics.
Two years later we formed the Institute for Quantum Computing at the University of Waterloo as an experimental centre devoted to the emerging science of quantum information. And this fall we'll open the new Quantum Nano Centre. Perimeter and IQC are already considered leading institutions worldwide.
I've thought a lot about what has value, what has shaped the present, and what can and will shape the future. It's a paradox. Ours is a technological age. Technology has changed almost every aspect of the way we live, what we eat, what we do when we are sick, how we get around, what we do for fun, and especially how we communicate with each other. We're surrounded by devices which are so sleek and powerful that we're tempted to think that it's the little machines themselves that are valuable. But the devices are just ideas made into a form where we can hold them in our hands. It's the ideas themselves that got us this far, and it's new ideas that will get us even further.
So maybe a thought experiment will help me explain this. Say someone invented a way to do time travel and I sent my BlackBerry back through time. The BlackBerry is one manifestation of Faraday's discovery that electricity and magnetism were two aspects of the same thing. In the 1860s, James Clerk Maxwell formulated the equations that led to the prediction that electromagnetic waves could travel wirelessly through space at the speed of light.
Today the BlackBerry is a technological marvel with six antennas and wireless transceivers, a massively complex 28 nanometre system on a chip with billions of transistors, gigabytes of memory, lithium rechargeable battery, audio speakers and microphone, HD camera and video, high resolution colour display with touchscreen, and state-of-the-art miniature thumb qwerty keyboard, along with over 30 million lines of software code.
So suppose I sent my BlackBerry back to Maxwell's time. Maxwell was not just a great mathematical physicist, he was a great experimentalist. As a child he stole gelatine from the kitchen so he could make up his own clear cylinders and arches and then twist them to see the mechanical stresses. As a young scientist he built spinning tops with different colours on the edges so that he could measure how colours combined in the eyes, knowledge he used to take the first colour photograph.
He also built models of Saturn's rings that convinced his peers, over 100 years before the Voyager fly-by, that the rings were not solid disks or vapour but rather they were made of tiny particles. And all this before his masterwork, the astoundingly difficult mathematical leap of combining electricity and magnetism formally.
So Maxwell knew a thing or two about taking things apart and putting them back together. But I think it's fair to say he'd have no idea what to make of the BlackBerry. He could devote all his life and the entire resources of 19th century Scotland to such an artefact and make no progress in understanding it. After all, he'd have to invent all of quantum mechanics to understand a single transistor, and the BlackBerry contains billions of them. Not to mention the fact he'd probably have to invent special screwdrivers just to open it up.
What if I sent this high school textbook back in time? The fundamental discoveries of the past 100 years explained in this book would have dramatically changed the course of history had Maxwell read it. That's the power of education and ideas.
Let's think about value in another way. Here again we imagine going back in time to the world a little over 100 years ago. Everything was barrelling along. The last few generations had brought huge advances; the telegraph, steam power, transcontinental railways. Living standards had skyrocketed. Cities were growing by leaps and bounds, they were the engines of growth and prosperity. Yet there were huge challenges too; engineering challenges, diplomatic challenges, environmental challenges.
In 1898, delegates from all over the world gathered in New York for the world's first urban planning conference. One overwhelming problem dominated. It wasn't infrastructure, it wasn't poverty, it wasn't housing, or even economic development. Are you ready for this? It was horse manure. In cities all across the land, it was horses that moved the goods and the people who were building up the cities as fast as possible. Everyone was trying to figure out ways to get more horses, faster horses, more carriages, and ways to clean up the manure in the streets, because the conditions were just deplorable. Vacant lots were piled high with it. The Times predicted that by 1950 every street in London would be buried 9 feet deep in horse manure. There were flies, disease; it was an environmental and public health crisis.
So imagine this story. You're part of a granting council; you've been tasked with driving the economy, really building commerce, commercialising technology and doing important things for the country. And so of course back then, what are you thinking about? Well, you're thinking we need more horses, we need better ways to clean up the streets, we need to figure out ways to build better carriages.
Now this physicist comes into the room and sits down, and that same granting council asks, 'Dr Einstein, why are you here?' And he says, 'I'd like to have an office and a stipend.' 'For what?' So he explains, 'Well, I need a desk and a blackboard and maybe a shelf for my books and my papers, and I need a small stipend so I can go to a few scientific conferences around the world and have a few postdoctoral researchers.' They ask, 'Why?' And he says, 'Well, I have these ideas about light, it's very complicated but light can...' 'Whoa, whoa,' said the council members, 'excuse me Professor, but what has this got to do with horses?' Nothing. It had nothing to do with horses. What did Einstein's ideas lead to? Well, apart from a Nobel Prize, they led to nuclear energy, semiconductors, computers, lasers, GPS, medical imaging, DVD, and we can go on for hours.
So now let's fast forward to today. We are in crisis. We're running out of energy any way you slice it, and the energy sources that we have today are damaging our environment, perhaps irreparably. In my industry we're running up against the limits of Moore's Law, of how many transistors can be crammed on a single computer chip. Everything we built our telecommunications industry and information age upon is going to hit this limit as we approach the geometry of an atom.
At the same time we have this enormous need for value creation. We need to tackle these problems fearlessly, head-on of course. But we also need to remember that gentleman thinking about light and realise that while we are tackling these things head-on, we also need to support our scientists, our researchers and our students whose work may seem to have nothing to do with these problems, whose work indeed seems to have no practical application today. We not only need to fund them imaginatively, we need to have faith that what they're doing is going to be very important, vitally important 20, 30, 40 and 50 years from now, and that we haven't got a chance of understanding its relevance today.
As we develop science policy we need to look beyond the short-term context, beyond the research that looks immediately promising. If we're blinded by the urgency of our problems, we will go the wrong way, we'll be investing in horses, carriages and cleaning up the streets instead of fostering the research that can give rise to an idea or technology that is going to change the world.
We need to commit not only to good science but great science, breakthrough science. Ultimately that means supporting the best from all over the planet to go in directions that their curiosity leads them and to pursue their most ambitious ideas.
It's inspiring to go to the Perimeter Institute cafeteria and simply sit and listen, not to the science, which I only begin to understand, but simply to the languages. You'd be standing in line for coffee and you're likely to hear one language in front of you and a different one behind you. It's amazing and inspiring to me that science can join all these people together in pursuit of these really big ideas.
Science, I believe, is the first successful global democracy. Scientists govern themselves through a (mostly) peaceful system of peer-review and ultimately through scientific experimental verification. They are bound by their allegiance to reason and curiosity, committed to looking honestly at evidence. They share a common language: mathematics. They unite across differences of age and gender and race, across all the cultures and geographies of the world, to solve problems, some of the most difficult problems.
Many of you are educators. I salute you for that. Some of the most important people in my life were teachers. As we contemplate changes to public education we need to keep in mind that nothing really beats good, creative, hands-on teaching and learning. There is no substitute for the magic that occurs in schools between teachers and pupils. If there were, then our kids could just teach themselves directly from the internet. Why learn something when you can just look it up on Wikipedia? But of course it doesn't work that way.
Yeats said that education is not the filling of a pail, it is the lighting of a fire. Teachers, they are the spark. Education is a form of time travel. We are sending our kids forward in time, to a future that doesn't exist yet. They'll hold jobs we've never heard of, in industries yet to be invented. What can they take with them? I think it must be the very biggest of ideas, the very broadest of skills, a hard-won physical intuition for the way the world works, a handful of really big ideas and the space to tinker with them, great problem-solving skills, a willingness to take risks, to fail, over and over if necessary, until they succeed. A great education.
These intuitions and big ideas, the things that really stick with you, they're not developed by setting tests or cramming information, they're developed by giving kids big ideas, they're developed by bridging upstairs and downstairs. Why do I still know trigonometry? Because I wanted so badly to open the boxes in Mr Micsinszki's lab, because I saw for myself how beautiful and useful it was. That's what we should be helping our kids discover. We have to let kids fumble a little, we have to let them mess around. In the same way that no high school student knows Shakespeare like the ones putting on the play, no high school student knows Newton like the ones launching home-made rockets.
And what about the students when they get out of high school? What about universities? What about researchers? Some question the value of investments in pure research. They point out that in times like these we need economic development. If we're putting our money into anything, it should be on those activities that we can reliably predict will make us money in the short term. They're wrong, that's the role of industry.
Yes, we need research and development, but we also need the type of blue sky research that will lead to developments we can't even imagine. We need research that tackles the really big questions.
We should remember JJ Thomson who was criticised for his useless research. In 1934 in the teeth of the Great Depression, when people were calling for a halt to research to free up money for jobs, Thomson reminded people that his discovery of the electron had turned out to be pretty useful after all. He said, 'Any new discovery contains the germ of a new industry.'
In business we understand that risk and reward go hand in hand. We celebrate the risktakers, the ones who stake it all on something nobody else sees. We need to do the same in science. I have friends who are venture capitalists and they say that only one in 10 investments really pay off on average. So it is with scientific research. The truly revolutionary stuff, that's being done by trailblazers. What we have to remember is that trailblazers sometimes get lost. That's just what happens in unexplored territory. Research hits dead ends, promising avenues dry up, models collapse, people are just plain wrong.
Over 90% of the time on the venture capitalist model you'd be losing your bet. But a few percent of the time you'd be making breakthroughs, because that's the other thing trailblazers do, they discover things that are utterly new. We need a system for scientific research that allows researchers to get lost exploring, maybe even encourage them to get lost exploring, because you know what? It's worth it. It's the path to breakthroughs. That is the kind of science that will give us the next generation of truly fundamental breakthroughs, things on the order of Maxwell's unification of electricity and magnetism, or Einstein's notion of space-time, or quantum mechanics.
I'm talking about physics because physics is my passion, but of course we need breakthroughs in every major scientific area. And the impact of breakthroughs? History has taught us that it's impossible to say, even the discoverers can't say it. Brattain, Shockley and Bardeen came up with the transistor while trying to figure out how quantum mechanics worked in solids. They figured their new invention would probably be important to, say, the hearing aid industry. They had no idea what their discoveries would mean to the world.
At this conference we'll have the chance to hear about some of the biggest ideas in science. Some will dazzle us with their possibilities, and some I dearly hope will seem to have no application at all. And those are the ideas that I think we should be really excited about. If I were to guess what the impact of those ideas would be...well, history tells me I'd get it wrong.
Think back to the dawn of the 20th century, to the birth of quantum mechanics and relativity. No one had any idea how these new theories would change the world, no one could foresee it; the laser, the semiconductor, the computer, the internet, the medical images, the satellites in orbit, the BlackBerry. They were unimaginable.
So if you try to imagine what the impact of the current generation of breakthroughs will be, well, you're going to be wrong. Not only that, you're going to be wrong in a way that will make you look unbelievably conservative.
Ladies and gentlemen, if I could say just one thing to you today, it would be this: times are hard, and in such times it's tempting to cut back. But we can't afford to cut back. We have to have the courage and think big and be bold. You, we, in our own ways all have a role to play. Whether you contribute to science and education at a local, regional, national or international level, you have a role to play and a part of this great global enterprise. I know and respect that. And what's more, all of us in this room need to let our young people muck about in basements and high school shops and night seminars, we need to let them blow up model rockets and explore the strangest and apparently most useless of ideas.
And when they explore their ideas for so long that they grow up to be scientists, we need to continue to give them the freedom and support to push on into the strange, the apparently useless, the utterly new. We cannot be so blinded by the urgency of our problems that we take for granted how important, how powerful the combination of curiosity and reason really is. That is the tradition of science.
And don't ever forget what history has proven time and time again. Although history tells us that the impact of scientific breakthroughs can't be predicted, it also says that impact will come. We can be quite sure of that. Make a breakthrough, an impact will come. And the impact will improve everything. Thank you.
Robyn Williams: You've been listening to Mike Lazaridis at the American Association for the Advancement of Science meeting in Vancouver. As the man behind the BlackBerry he has fostered basic research in physics at the Perimeter Institute, which we featured in The Science Show in April 2010.
Mike Lazaridis
Vice-Chair, RIM Board of Directors
Founder and Board Chair, Perimeter Institute for Theoretical Physics
Waterloo Ontario Canada
Robyn Williams
David Fisher
Comments (12)
• Gregory Jarosch :
09 Jun 2012 6:00:17pm
Mike Lazaridis - The power of ideas, 12.22 pm Saturday 9 June 2012.
Bravo Mike. What an inspirational presentation with powerful delivery and insightful thought provoking pronouncements.
This is the stuff Australian society needs in buckets to rouse it from its intellectual and "I'm all right, Jack" malaise. This is the stuff that should have us all challenging our minds and bodies to come up with better, more meaningful ways of living.
Mike should make this presentation into a documentary movie.
• Marion Murphy :
10 Jun 2012 8:55:11pm
Lazaridis' call for public investment in 'blue sky' research supports the proposal by Andrew Charlton (Quarterly Essay, 2011) for governments to invest heavily to make a breakthrough in carbon emission reduction. Charlton notes the private sector does not have a history of investing in 'breakthrough' research due to the uncertainty of a commercial benefit. A price on carbon will not attract private sector investment in breakthrough research, and will weaken the economic base required for governments to fund research at the scale appropriate to the urgency of the climate change threat.
• Trevor :
10 Jun 2012 10:11:35pm
"He sites quantum mechanics and relativity as theories which would change the world ..."
But what sites and where?
• robbie :
11 Jun 2012 3:04:28pm
I agree Greg. What an inspiring presentation Mike gave. If only more money could be allocated to science education and educators!! As he said, quality teaching is what ignites the spark.
• Andrew Thomas :
12 Jun 2012 12:53:30pm
A very apt presentation for our times I think. Economic conservatism, particularly as applied to Research, ironically, holds real potential for economic suicide.
The private sector has little to no history of coming up with the new game-changing technologies that lead to true economic growth, because the risks are too high. Whether it be computers, the internet, the combustion engine, etc., all have their roots in Government, Universities or occasionally private citizens. Governments need to set aside funds for high end / high risk research through Universities and research institutes. Further, the application of Anglo-American business models to Universities must end, or at least be wound back. Otherwise, stagnation is the likely outcome, and stagnation and economic growth are mutually exclusive.
• Fabio :
12 Jun 2012 1:57:30pm
An inspirational speech, full of passion, insight and humour. Who could forget the anecdote about the world's first urban planning conference back in 1898, when the topic of most concern was: horse manure! A classic example of how disruptive technology (in this case, the motor car) can have totally unpredictable effects on society, destroying entire industries (eg: horse feed, manure collectors) while creating totally new ones (oil extraction, motor mechanics). We may laugh now and think it was all so obvious, but back then who was bold enough to predict that Kettering's design for an electric ignition system and Ford's ideas of mass production were the key technologies that would destroy huge established industries and create new ones?
His message is clear. We must always invest in basic scientific research, for no-one can predict where the next disruptive technological breakthrough will come from, or what effects the application of that technology will have. We must instil in our children a sense of wonder and curiosity about science, and we must reward those we have entrusted with our children's future: their teachers.
Yes, society does need accountants, lawyers, economists, shop-keepers and real-estate developers, but it must NOT be controlled by them. These professions do not look to the future; instead they actually rely on predictable outcomes, ie: "business as usual". They look to "manage risk" rather than identify the next disruptive technology. Yet these professions make up the majority of our politicians here in Australia, and I detect that many of Australia's politicians today, of all persuasions, actively deride or ignore science and remain oblivious to its importance.
We must NOT listen to these professions when they say to reduce spending on education and scientific research. Even as the first of Henry Ford's motor cars rolled off the assembly line, it would have been the accountants and lawyers of the day busy planning for the next upgrade of horse manure cleaning infrastructure.
• Jim Fitzgerald :
12 Jun 2012 1:57:49pm
Very much agree with all Gregory, Marion and Robbie say. Education is vital, as Mike so ably explains, but more importantly we must teach our children how to think, not what to think. And teach them how to teach their children the same way.
• Gerhard Weissmann :
15 Jun 2012 5:25:01pm
It is always the two separate parts of mathematics:
The physical mathematics, which takes the world as a whole and has its time-motion as a part of it, and
the economic half-truth calculations, which leave time out of their recognition.
The latter part is what Andrew Thomas says is the part of Governments and business. It is not true mathematics; it is incomplete and insufficient. It leads to the overpopulation under which the world suffers.
What I call the physical mathematics is true mathematics, which is used by professionals and takes account of time. It is irreversible.
• everitt :
28 Jun 2012 9:10:24pm
I have an idea, but find it extremely difficult to contact people who may be interested, like Mike Lazaridis, for example. I should emphasise that this is not a commercial venture soliciting support or the usual tawdry gimmick seeking reward: it is, in the first place, an experiment deserving investigation, and at once an invention requiring proper development.
The following is a letter which I have attempted to send to various destinations, containing a blog address at which this idea, for a 'continuous induction turbine', is fully and freely presented for any who are interested in pursuing its potential.
To whom it may concern...
I am writing in order to bring to your attention the following 'blog address', at which you will find plans, specifications and a lengthy text with pictures and diagrams on the subject of a novel form of Alternating Current Electricity and the mechanism of its Generation. The intention of this presentation generally is simply to invite anyone suitably inspired to construct this device, a modest and inexpensive undertaking in itself, and quite straightforward, but one which nevertheless requires a certain electrical engineering know-how, if not expertise (the device is essentially an electrical generator), and more especially a degree of dedication to the task, since despite my best efforts over a lengthy period to present the design and general hypothesis in its correct entirety, certain details may need ironing out, so to speak.
My hope in addressing it to you is simply that you may wish to investigate the idea further, perhaps in the first place by subjecting it, if such is indeed possible, to the rigours of computer simulation.
I should add that what is termed a 'theory of cohesive space', a rudimentary unifying theory I suppose, is specifically alluded to in the text, and while it is not critical to the argument prima facie, a successful outcome will likely lend weight to certain of its perspectives, more particularly those concerning the issues of electricity and magnetism pertinent to the operation of the experimental apparatus, a useful subsidiary product of the endeavour at the very least.
Also, while the primary and ostensible property of this apparatus, the Continuous Induction Turbine or Generator, is, it is argued, to enhance significantly the efficiency of conduction of Alternating Current, and by the inference of the argument the efficiency of electromagnetic induction itself (through the elaboration of a new wave-form of voltage and current), its eventual principal purpose may likely prove beyond that meagre basic objective, as you may read. And while it is by no means certain either that this turbine will function as suggested or even at all, what it purports to propose as possible and the sense of its method seem to me well worth investigation, particularly since this wou
• everitt :
02 Jul 2012 12:37:03am
.....particularly since this would, in the proper hands, be relatively cheap and easy to do: the rewards of its success are almost unimaginable.
Thank you most sincerely for any attention you are prepared to devote to this idea and the exercise of its realisation, and I hope to hear something from you soon... (Included below is the first section of the text, presenting this essential idea and the general argument of its basis)... james everitt.
So you see, perhaps, the difficulties faced... The blog-site address at which a potentially marvelous new idea may be found is:
• Marcus Morgan :
14 Jul 2012 4:13:29pm
I am not as optimistic as the speaker, but a pep talk on engineering such as this would seem productive. The issue is whether engineering holds answers for our future beyond adding further speed and intricacy to things that we are already doing.
It's possible engineering will create something that makes us do something other than continue our present lifestyle of satisfying basic needs and a desire for luxury and status, but that remains to be seen.
Our problems of overpopulation, 3 billion people currently in deprivation, racial and religious ignorance and intolerance, and so on, are human issues, not engineering ones.
As I say, an engineer might come up with something technologically out of the box that somehow changes our social ways, so we shall see. They might slow our decline by saving resources, but the current global trend is still to do otherwise.
5a40bbb8844cdca0 | Quantum theory is unsettling. Nobel laureate Richard Feynman admitted that it “appears peculiar and mysterious to everyone, both to the novice and to the experienced physicist.” Niels Bohr, one of its founders, told a young colleague, “If it does not boggle your mind, you understand nothing.” Physicists have been quarreling over its interpretation since the legendary arguments between Bohr and Einstein in the 1920s. So have philosophers, who agree that it has profound implications but cannot agree on what they are. Even the man on the street has heard strange rumors about the Heisenberg Uncertainty Principle, of reality changing when we try to observe it, and of paradoxes where cats are neither alive nor dead till someone looks at them.
Quantum strangeness, as it is sometimes called, has been a boon to New Age quackery. Books such as The Tao of Physics (1975) and The Dancing Wu Li Masters (1979) popularized the idea that quantum theory has something to do with eastern mysticism. These books seem almost sober today when we hear of “quantum telepathy,” “quantum ESP,” and, more recently, “quantum healing,” a fad spawned by Deepak Chopra’s 1990 book of that name. There is a flood of such quantum flapdoodle (as the physicist Murray Gell-Mann called it). What, if anything, does it all mean? Amid all the flapdoodle, what are the serious philosophical ideas? And what of the many authors who claim that quantum theory has implications favorable to religious belief? Are they on to something, or have they been taken in by fuzzy thinking and New Age nonsense?
It all began with a puzzle called wave-particle duality. This puzzle first appeared in the study of light. Light was understood by the end of the nineteenth century to consist of waves in the electromagnetic field that fills all of space. The idea of fields goes back to Michael Faraday, who thought of magnetic and electrical forces as being caused by invisible “lines of force” stretching between objects. He envisioned space as being permeated by such force fields. In 1864, James Clerk Maxwell wrote down the complete set of equations that govern electromagnetic fields and showed that waves propagate in them, just as sound waves propagate in air.
This understanding of light is correct, but it turned out there was more to the story. Strange things began to turn up. In 1900, Max Planck found that a certain theoretical conundrum could be resolved only by assuming that the energy in light waves comes in discrete, indivisible chunks, which he called quanta. In other words, light acts in some ways like it is made up of little particles. Planck’s idea seemed absurd, for a wave is something spread out and continuous, while a particle is something pointlike and discrete. How can something be both one and the other?
And yet, in 1905, Einstein found that Planck’s idea was needed to explain another puzzling behavior of light, called the photoelectric effect. These developments led Louis de Broglie to make an inspired guess: If waves (such as light) can act like particles, then perhaps particles (such as electrons) can act like waves. And, indeed, this proved to be the case. It took a generation of brilliant physicists (including Bohr, Heisenberg, Schrödinger, Born, Dirac, and Pauli) to develop a mathematically consistent and coherent theory that described and made some sense out of wave-particle duality. Their quantum theory has been spectacularly successful. It has been applied to a vast range of phenomena, and hundreds of thousands of its predictions about all sorts of physical systems have been confirmed with astonishing accuracy.
Great theoretical advances in physics typically result in profound unifications of our understanding of nature. Newton’s theories gave a unified account of celestial and terrestrial phenomena; Maxwell’s equations unified electricity, magnetism, and optics; and the theory of relativity unified space and time. Among the many beautiful things quantum theory has given us is a unification of particles and forces. Faraday saw that forces arise from fields, and Maxwell saw that fields give rise to waves. Thus, when quantum theory showed that waves are particles (and particles waves), a deep unity of nature came into view: The forces by which matter interacts and the particles of which it is composed are both manifestations of a single kind of thing: “quantum fields.”
The puzzle of how the same thing can be both a wave and a particle remains, however. Feynman called it “the only real mystery” in science. And he noted that, while we “can tell how it works,” we “cannot make the mystery go away by ‘explaining’ how it works.” Quantum theory has a precise mathematical formalism, one on which everyone agrees and that tells how to calculate right answers to the questions physicists ask. But what really is going on remains obscure, which is why quantum theory has engendered unending debates over the nature of physical reality for the past eighty years.
The problem is this: At first glance, wave-particle duality is not only mysterious but inconsistent in a blatant way. The inconsistency can be understood with a thought experiment. Imagine a burst of light from which a light wave ripples out through an ever-widening sphere in space. As the wave travels, it gets more attenuated, since the energy in it is getting spread over a wider and wider area. (That is why the farther you are from a light bulb, the fainter it appears.) Now, suppose a light-collecting device is set up, a box with a shutter (essentially, a camera). The farther away it is placed from the light burst, the less light it will collect. Suppose the light-collecting box is set up at a distance where it will collect exactly a thousandth of the light emitted in the burst. The inconsistency arises if the original burst contained, say, fifty particles of light. For then it appears that the light-collector must have collected 0.05 particles (a thousandth of fifty), which is impossible, since particles of light are indivisible. A wave, being continuous, can be infinitely attenuated or subdivided, whereas a particle cannot.
Quantum theory resolves this by saying that the light-collector, rather than collecting 0.05 particles, has a 0.05 probability of collecting one particle. More precisely, the average number of particles it will collect, if the same experiment is repeated many times, is 0.05. Wave-particle duality, which gave rise to quantum theory in the first place, forces us to accept that quantum physics is inherently probabilistic. Roughly speaking, in pre-quantum, classical physics, one calculated what actually happens, while in quantum physics one calculates the relative probabilities of various things happening.
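As a concrete illustration, here is a minimal Monte Carlo sketch (in Python) of the repeated experiment; the 0.05 probability is the figure from the thought experiment above, while the trial count is an arbitrary choice:

import random

def run_trials(p_collect=0.05, n_trials=100_000):
    # Each trial collects either one whole particle (probability p_collect)
    # or none; no trial ever collects a fraction of a particle.
    collected = sum(1 for _ in range(n_trials) if random.random() < p_collect)
    return collected / n_trials

# Only the long-run average approaches 0.05; every single outcome is 0 or 1.
print(run_trials())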
This hardly resolves the mystery. The probabilistic nature of quantum theory leads to many strange conclusions. A famous example comes from varying the experiment a little. Suppose an opaque wall with two windows is placed between the light-collector and the initial burst of light. Some of the light wave will crash into the wall, and some will pass through the windows, blending together and impinging on the light-collector. If the light-collector collects a particle of light, one might imagine that the particle had to have come through either one window or the other. The rules of the quantum probability calculus, however, compel the weird conclusion that in some unimaginable way the single particle came through both windows at once. Waves, being spread out, can go through two windows at once, and so the wave-particle duality ends up implying that individual particles can also.
Things get even stranger, and it is clear why some people pine for the good old days when waves were waves and particles were particles. One of those people was Albert Einstein. He detested the idea that a fundamental theory should yield only probabilities. “God does not play dice!” he insisted. In Einstein’s view, the need for probabilities simply showed that the theory was incomplete. History supported his claim, for in classical physics the use of probabilities always stemmed from incomplete information. For example, if one says that there is a 60 percent chance of a baseball hitting a glass window, it is only because one doesn’t know the ball’s direction and speed well enough. If one knew them better (and also knew the wind velocity and all other relevant variables), one could definitely say whether the ball would hit the window. For Einstein, the probabilities in quantum theory meant only that there were as-yet-unknown variables: hidden variables, as they are called. If these were known, then in principle everything could be predicted exactly, as in classical physics.
Many years have gone by, and there is still no hint from any experiment of hidden variables that would eliminate the need for probabilities. In fact, the famed Heisenberg Uncertainty Principle says that probabilities are ineradicable from physics. The thought experiment of the light burst and light-collector showed why: If one and the same entity is to behave as both a wave and a particle, then an understanding in terms of probabilities is absolutely required. (For, again, 0.05 of a particle makes no sense, whereas a 0.05 chance of a particle does.) The Uncertainty Principle, the bedrock of quantum theory, implies that even if one had all the information there is to be had about a physical system, its future behavior cannot be predicted exactly, only probabilistically.
This last statement, if true, is of tremendous philosophical and theological importance. It would spell the doom of determinism, which for so long had appeared to spell the doom of free will. Classical physics was strictly deterministic, so that (as Laplace famously said) if the state of the physical world were completely specified at one instant, its whole future development would be exactly and uniquely determined. Whether a man lifts his arm or nods his head now would (in a world governed by classical physical laws) be an inevitable consequence of the state of the world a billion years ago.
But the death of determinism is not the only deep conclusion that follows from the probabilistic nature of quantum theory. An even deeper conclusion that some have drawn is that materialism, as applied to the human mind, is wrong. Eugene Wigner, a Nobel laureate, argued in a famous essay that philosophical materialism is not “logically consistent with present quantum mechanics.” And Sir Rudolf Peierls, another leading physicist, maintained that “the premise that you can describe in terms of physics the whole function of a human being . . . including its knowledge, and its consciousness, is untenable.”
These are startling claims. Why should a mere theory of matter imply anything about the mind? The train of logic that leads to this conclusion is rather straightforward, if a bit subtle, and can be grasped without knowing any abstruse mathematics or physics.
It starts with the fact that for any physical system, however simple or complex, there is a master equation, called the Schrödinger equation, that describes its behavior. And the crucial point on which everything hinges is that the Schrödinger equation yields only probabilities. (Only in special cases are these exactly 0 or 100 percent.) But this immediately leads to a difficulty: There cannot always remain just probabilities; eventually there must be definite outcomes, for probabilities must be the probabilities of definite outcomes. To say, for example, there is a 60 percent chance that Jane will pass the French exam is meaningless unless at some point there is going to be a French exam on which Jane will receive a definite grade. Any mere probability must eventually stop being a mere probability and become a certainty or it has no meaning even as a probability. In quantum theory, the point at which this happens, the moment of truth, so to speak, is traditionally called the collapse of the wave function.
The big question is when this occurs. Consider the thought experiment again, where there was a 5 percent chance of the box collecting one particle and a 95 percent chance of it collecting none. When does the definite outcome occur in this case? One can imagine putting a mechanism in the box that registers when a particle of light has been collected by making, say, a red indicator light go on. The answer would then seem plain: The definite outcome happens when the red light goes on (or fails to do so). But this does not really produce a definite outcome, for a simple reason: Any mechanism one puts into the light-collecting box is just itself a physical system and is therefore described by a Schrödinger equation. And that equation yields only probabilities. In particular, it would say there is a 5 percent chance that the box collected a particle and that the red indicator light is on, and a 95 percent chance that it did not collect a particle and that the indicator light is off. No definite outcome has occurred. Both possibilities remain in play.
This is a deep dilemma. A probability must eventually get resolved into a definite outcome if it is to have any meaning at all, and yet the equations of quantum theory when applied to any physical system yield only probabilities and not definite outcomes.
Of course, it seems that when a person looks at the red light and comes to the knowledge that it is on or off, the probabilities do give way to a definite outcome, for the person knows the truth of the matter and can affirm it with certainty. And this leads to the remarkable conclusion of this long train of logic: As long as only physical structures and mechanisms are involved, however complex, their behavior is described by equations that yield only probabilities; yet once a mind is involved that can make a rational judgment of fact, and thus come to knowledge, there is certainty. Therefore, such a mind cannot be just a physical structure or mechanism completely describable by the equations of physics.
Has there been a sleight-of-hand? How did mind suddenly get into the picture? It goes back to probabilities. A probability is a measure of someone’s state of knowledge or lack of it. Since quantum theory is probabilistic, it makes essential reference to someone’s state of knowledge. That someone is traditionally called the observer. As Peierls explained, “The quantum mechanical description is in terms of knowledge, and knowledge requires somebody who knows.”
I have been explaining some of the implications (as Wigner, Peierls, and others saw them) of what is usually called the traditional, Copenhagen, or standard interpretation of quantum theory. The term “Copenhagen interpretation” is unfortunate, since it carries with it the baggage of Niels Bohr’s philosophical views, which were at best vague and at worst incoherent. One can accept the essential outlines of the traditional interpretation (first clearly delineated by the great mathematician John von Neumann) without endorsing every opinion of Bohr.
There are many people who do not take seriously the traditional interpretation of quantum theory, precisely because it gives too great an importance to the mind of the human observer. Many arguments have been advanced to show its absurdity, the most famous being the Schrödinger Cat Paradox. In this paradox one imagines that the mechanism in the light-collecting box kills a cat rather than merely making a red light go on. If, as the traditional view has it, there is not a definite outcome until the human observer knows the result, then it would seem that the cat remains in some kind of limbo, not alive or dead, but 95 percent alive and 5 percent dead, until the observer opens the box and looks at the cat, which is absurd. It would mean that our minds create reality or that reality is perhaps only in our minds. Many philosophers attack the traditional interpretation of quantum theory as denying objective reality. Others attack it because they don’t like the idea that minds have something special about them not describable by physics.
The traditional interpretation certainly leads to thorny philosophical questions, but many of the common arguments against it are based on a caricature. Most of its seeming absurdities evaporate if it is recognized that what is calculated in quantum theory’s wavefunction is not to be identified simply with what is happening, has happened, or will happen but rather with what someone is in a position to assert about what is happening, has happened, or will happen. Again, it is about someone’s (the observer’s) knowledge. Before the observer opens the box and looks at the cat, he is not in a position to assert definitely whether the cat is alive or dead; afterward, he is. But the traditional interpretation does not imply that the cat is in some weird limbo until the observer looks. On the contrary, when the observer checks the cat’s condition, his observation can include all the tests of forensic pathology that would allow him to pin down the time of the cat’s death and say, for instance, that it occurred thirty minutes before he opened the box. This is entirely consistent with the traditional interpretation of quantum theory. Another observer who checked the cat at a different time would have a different “moment of truth” (so the wavefunction that expresses his state of knowledge would collapse when he looked), but he would deduce the same time of death for the cat. There is nothing subjective here about the cat’s death or when it occurred.
The traditional interpretation implies that just knowing A, B, and C, and applying the laws of quantum theory, does not always answer (except probabilistically) whether D is true. Finding out definitely about D may require another observation. The supposedly absurd role of the observer is really just a concomitant of the failure of determinism.
The trend of opinion among physicists and philosophers who think about such things is away from the old Copenhagen interpretation, which held the field for four decades. There are, however, only a few coherent alternatives. An increasingly popular one is the many-worlds interpretation, based on Hugh Everett’s 1957 paper, which takes the equations of physics as the whole story. If the Schrödinger equation never gives definite and unique outcomes, but leaves all the possibilities in play, then we ought to accept this, rather than invoking mysterious observers with their minds’ moments of truth.
So, for example, if the equations assign the number 0.05 to the situation where a particle has been collected and the red light is on, and the number 0.95 to the situation where no particle has been collected and the red light is off, then we ought to say that both situations are parts of reality (though one part is in some sense larger than the other by the ratio 0.95 to 0.05). And if an observer looks at the red light, then, since he is just part of the physical system and subject to the same equations, there will be a part of reality (0.05 of it) in which he sees the red light on and another part of reality (0.95 of it) in which he sees the red light off. So physical reality splits up into many versions or branches, and each human observer splits up with it. In some branches a man will see that the light is on, in some he will see that the light is off, in others he will be dead, in yet others he will never have been born. According to the many-worlds interpretation, there are an infinite number of branches of reality in which objects (whether particles, cats, or people) have endlessly ramifying alternative histories, all equally real.
Not surprisingly, the many-worlds interpretation is just as controversial as the old Copenhagen interpretation. In the view of some thinkers, the Copenhagen and many-worlds interpretations both make the same fundamental mistake. The whole idea of wave-particle duality was a wrong turn, they say. Probabilities are needed in quantum theory because in no other way can one make sense of the same entity being both a wave and a particle. But there is an alternative, going back to de Broglie, which says they are not the same entity. Waves are waves and particles are particles. The wave guides, or “pilots,” the particles and tells them where to go. The particles surf the wave, so to speak. Consequently, there is no contradiction in saying both that a tiny fraction of the wave enters the light collector and that a whole number of particles enters, or in saying that the wave went through two windows at once and each particle went through just one.
De Broglie’s pilot-wave idea was developed much further by David Bohm in the 1950s, but it has only recently attracted a significant following. “Bohmian theory” is not just a different interpretation of quantum theory; it is a different theory. Nevertheless, Bohm and his followers have been able to show that many of the successful predictions of quantum theory can be reproduced in theirs. (It is questionable whether all of them can be.) Bohm’s theory can be seen as a realization of Einstein’s idea of hidden variables, and its advocates see it as a vindication of Einstein’s well-known rejection of standard quantum theory. As Einstein would have wanted, Bohmian theory is completely deterministic. Indeed, it is an extremely clever way of turning quantum theory back into a classical and essentially Newtonian theory.
The advocates of this idea believe that it solves all of the quantum riddles and is the only way to preserve philosophical sanity. However, most physicists, though impressed by its cleverness, regard it as highly artificial. In my view, the most serious objection to it is that it undoes one of the great theoretical triumphs in the history of physics: the unification of particles and forces. It gets rid of the mysteriousness of quantum theory by sacrificing much of its beauty.
What, then, are the philosophical and theological implications of quantum theory? The answer depends on which school of thought (Copenhagen, many worlds, or Bohmian) one accepts. Each has its strong points, but each also has features that many experts find implausible or even repugnant.
One can find religious scientists in every camp. Peter E. Hodgson, a well-known nuclear physicist who is Catholic, insists that Bohmian theory is the only metaphysically sound alternative. He is unfazed that it brings back Newtonian determinism and mechanism. Don Page, a well-known theoretical cosmologist who is an evangelical Christian, prefers the many-worlds interpretation. He isn’t bothered by the consequence that each of us has an infinite number of alter egos.
My own opinion is that the traditional Copenhagen interpretation of quantum theory still makes the most sense. In two respects it seems quite congenial to the worldview of the biblical religions: It abolishes physical determinism, and it gives a special ontological status to the mind of the human observer. By the same token, it seems quite uncongenial to eastern mysticism. As the physicist Heinz Pagels noted in his book The Cosmic Code: “Buddhism, with its emphasis on the view that the mind-world distinction is an illusion, is really closer to classical, Newtonian physics and not to quantum theory [as traditionally interpreted], for which the observer-observed distinction is crucial.”
If anything is clear, it is that quantum theory is as mysterious as ever. Whether the future will bring more-compelling interpretations of, or even modifications to, the mathematics of the theory itself, we cannot know. Still, as Eugene Wigner rightly observed, “It will remain remarkable, in whatever way our future concepts develop, that the very study of the external world led to the conclusion that the content of the consciousness is an ultimate reality.” This conclusion is not popular among those who would reduce the human mind to a mere epiphenomenon of matter. And yet matter itself seems to be telling us that its connection to mind is more subtle than is dreamt of in their philosophy.
Articles by Stephen M. Barr |
2aeddbfac58c54ca | Sunday, July 12, 2009
Aether based explanation of dark matter
A month ago I listed four explanations of dark matter, which from the AWT perspective are all valid in parallel:
1. a consequence of the limited speed of light spreading through expanding space-time
2. a surface-tension effect of the bell-curve-shaped gravity field
3. an application of mass–energy equivalence to the Einstein field equations
4. a result of the variable surface/volume ratio for energy spreading by the principle of least action
But we can use an even more illustrative explanation, linked to the dispersion of energy by a background field of CMB photons formed by gravitational waves (GWs), which manifests as a weak deceleration equal to the product of the Hubble constant and the speed of light. This dispersion is a direct manifestation of hidden dimensions at both large and small scales, because it also appears as a shielding effect of these photons at the Casimir force distance scale. We can say the Casimir force is a shielding effect of GWs, whereas the Pioneer anomaly is a subtle deceleration caused by dispersion by GWs. Both effects result in a violation of Newton's law at small scales, manifest as anomalous deceleration at large scales, and as such violate the equivalence principle of general relativity - it's as easy as that.
We can even find a direct analogy of this deceleration in our "pocket model" of the observable Universe: the water surface. From the local perspective of every observer whose size is evolutionarily adjusted to the wavelength of capillary waves (the human distance scale), such a surface is covered mostly by transverse waves, in which energy spreads at maximal speed from his intrinsic perspective, so he can interact with the largest space-time possible (from the extrinsic perspective, the speed of the transverse waves is minimal instead).
But the particle character of the water environment manifests through the dispersion of surface waves by tiny density fluctuations of the underwater, which gradually changes the transverse character of capillary waves into a longitudinal one (i.e. into gravity waves). This dispersion decreases the speed of the waves from the extrinsic perspective, which manifests as omnidirectional Universe expansion from the intrinsic perspective, or as a subtle deceleration which effectively freezes the spreading of surface waves, as if these waves were spreading through an environment of gradually increasing density. We can observe this effect easily in splash ripples formed by capillary waves. In the example below, such waves are formed by the bursting of bubbles at the water surface, which can be interpreted as the radiative decay of an unstable particle in vacuum into gamma photons. By this interpretation, dark matter effects like the Pioneer anomaly are closely related to the Universe expansion: for example, the anomalous deceleration of the Pioneer spacecraft (0.87 ± 0.13 nm/s2) is equal to the product of the Hubble constant and the speed of light (a = Hc), which agrees well (±10% error) with the value observed.
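For what it's worth, the claimed coincidence a = Hc is easy to check numerically; a rough sketch in Python (the Hubble constant below is an assumed round value of about 71 km/s/Mpc, not a figure taken from this post):

Mpc_m = 3.086e22                 # metres per megaparsec
c = 2.998e8                      # speed of light, m/s
H0 = 71.0e3 / Mpc_m              # assumed Hubble constant, converted to 1/s

a = H0 * c                       # ~6.9e-10 m/s^2, i.e. ~0.69 nm/s^2
print(f"H*c = {a * 1e9:.2f} nm/s^2; reported Pioneer anomaly: 0.87 +/- 0.13 nm/s^2")

With these inputs the product comes out near 0.7 nm/s^2, so the agreement is closer to 20% than 10%; the comparison is sensitive to which value of the Hubble constant one assumes.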
From this perspective every object is surrounded by a virtual massive field which originates from the massive field of virtual photons, i.e. the field of density fluctuations, which manifest in GWs formed by gravitons expanded by inflation and which form the vacuum foam - and in this context it's quite a natural and easily predictable effect following directly from AWT. It is just the immense density of the vacuum and the common disbelief in the Aether concept that prevented the dispersion by this background field from being linked to the dark matter observations and the Pioneer anomaly many years ago. There's still plenty of room "at the bottom" of basic human understanding. Note that in this context the further search for GWs is meaningless, because we have already observed them as the background noise of GW detectors, and their scope is limited by the scope of the Casimir force, in the same way as the scope of extra dimensions and of Lorentz symmetry violation at low scales.
As J.C. Cranwell (archive) pointed out, prof. Stephen Hawking has blundered with his own image... This picture comes from his book "A Briefer History of Time" at page 29, and it illustrates energy waves spreading in a particle environment. It's easy to see the waves getting further apart from each other as time increases, while Hawking still claims that Lorentz invariance is "difficult to reconcile" with Newton's theory. Of course it is, because it leads not only to Lorentz invariance, but to the dark matter and expanding universe observations. This example just illustrates how everyone sees what he wants to see: Hawking the physmatic sees waves of constant wavelength in a picture which illustrates exactly the opposite.
Albert Einstein: "You do not really understand something unless you can explain it to your grandmother."
Zephir said...
What I found on the web just now...
Gravitons as a spacetime fabric, string theory is just a failed aether theory, Lubos Motl and Peter Woit...
El Cid said...
Better prove that I'm wrong:
Well, I'm going to use logic: if AWT is correct, then you should be able to solve a very simple physics problem. I'm going to challenge you to solve the following problem using only the AWT postulates, as you say. The solution is a couple of numbers. Neither stories of strange things nor paintings are permitted. You have to show how you obtain the two numbers using the postulates of AWT, i.e., you must use deductive reasoning from the AWT postulates. If you can solve this problem, you win; otherwise you're a quack. Well, The Problem, one, two, three, go...
A stone thrown from the floor is given an initial velocity of 20.0 m/s straight upward. Determine the time at which the stone reaches its maximum height and the time at which the stone returns to the point from which it was thrown.
Zephir said...
/*..if AWT is correct, then you could solve a very simple physics problem...*/
For example, quantum mechanics doesn't recognize the gravitational constant, so your trivial task would be unsolvable using quantum mechanics.
Does that mean quantum mechanics is a crackpot theory if it cannot face such a trivial assignment? If not, why should AWT be?
Zephir said...
Despite that, AWT is still the only concept which can explain in an independent way why the gravity force is inversely proportional to the square of distance (compare the Duillier–Le Sage theory of gravity).
El Cid said...
Another chance,
you should forget QM and solve the problem using deductive reasoning from AWT principles; I only want two numbers and their units of measurement.
Zephir said...
Why should I forget QM? First try to prove that your assignment is solvable in this mainstream theory.
If it's not, you shouldn't blame AWT for incompetence.
El Cid said...
I'm going to solve the proposed problem using QM, with some valid approximations.
We use the following notation:
V(x) is the potential energy
|f] is the wave packet for the particle that defines the state of the particle.
X is the position operator.
P is the momentum operator.
V = V(X) is the potential energy operator.
[X] = [f|X|f] is the expectation value for the position operator X in the state |f]. [X] is the center of the wave packet at the instant t.
[P] = [f|P|f] is the expectation value for the momentum operator P in the state |f]
a=a means 'is approximately equal to'
Int(-Inf,Inf) is the improper integral over the real numbers.
v0 = 20 m/s is the initial velocity
x0 = 0 m is the initial position
g = 9.8 m/s^2 is the acceleration due to gravity at sea level (the only parameter that needs to be introduced).
El Cid said...
From the Ehrenfest's Theorem, we get:
d [X] /dt = 1/m [P]
d[P]/ dt = - [grad V] = - [dV/dx]
[P] = m d[X]/dt ;
m d^2[X]/dt^2 = - [dV/dx]
I'm going to show that [dV/dx] a=a (dV/dx)(x=[X]), indeed:
[dV/dx] =
Int(-Inf,Inf) f*(x)(dV/dx)f(x)dx a=a
(dV/dx)(x=[X]) Int(-Inf,Inf)f*(x)f(x)dx
This approximation is valid because the wave packet f(x) is much narrower than the distances over which (dV/dx) varies appreciably. The wave packet f(x) doesn't vanish in an interval centered on [X], and (dV/dx) doesn't vary appreciably in this interval.
m d^2[X]/dt^2 = - (dV/dx)(x=[X]) namely the Newton's second law.
The potential energy for the stone is V = mg[X] where [X] is the height of the stone.
(dV/dx)(x=[X]) = mg;
d^2[X]/dt^2 = -g;
d[X]/dt = -gt + vo;
[X] = -1/2 g t^2 + v0 t + x0
d[X]/dt = -9.8t + 20
[X] = -1/2 * 9.8 t^2 + 20 t
And now the two numbers:
1) The time at which the stone reaches its maximum is when d[X]/dt = 0
0 = -9.8t + 20; t1 = 2.04 s
2) The time at which the stone returns to the point from which it was thrown
0 = -1/2 * 9.8 t^2 + 20 t;
0 = t(20 - 1/2 * 9.8 t); t2 = 4.08 s.
Zephir, you're a true quack.
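(The two numbers themselves are easy to verify; a minimal sketch in Python, assuming only the classical kinematics derived above:)

v0, g = 20.0, 9.8    # initial speed (m/s) and gravitational acceleration (m/s^2)

t1 = v0 / g          # maximum height: d[X]/dt = -g*t + v0 = 0
t2 = 2 * v0 / g      # return to launch point: [X] = -g*t^2/2 + v0*t = 0, t > 0

print(f"t1 = {t1:.2f} s, t2 = {t2:.2f} s")   # 2.04 s and 4.08 s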
Zephir said...
/*...the potential energy for the stone is V = mg[X] ...*/
OK, and where do you get this equation from? Isn't it derived from Newton's theory? If so, why not use Newton's theory from the very beginning? Ehrenfest's theorem itself is derived under the assumption that the Hamiltonian has the same form as in classical physics, H = p^2/2m = (1/2m)·Sum(i=1..3) p_i^2...
In this way, your whole derivation is just a sort of circular reasoning: you're deriving an effect of classical physics by using theorems which were themselves derived using the classical-physics approximation (in fact it's just the reversed case of the classical derivation of Ehrenfest's theorem as given in various textbooks).
Zephir said...
/*...the wave packet f(x) doesn't vanish in an interval centered on [X] ...*/
This is just an assumption of yours, borrowed from classical physics again - but not from QM. By the Schrödinger equation such an object would disperse at an initial speed corresponding to the speed of light. By quantum mechanics such an object wouldn't reach its maximal height - instead it would create a stable Rydberg orbital at X/2 height, surrounding the whole Earth.
Zephir said...
When people are dating, they refuse to know what they are getting into..
This is what love is called...
El Cid said...
In QM, there is an observable (hermitian operator) called the hamiltonian H. In one dimension, the hamiltonian is defined as H = P^2/2m + V(X), where P and X are the momentum operator and the position operator, respectively. We can define V(X) = mg X if we want, no matter whether it's functionally equal to the gravitational potential energy in classical physics. But in QM, V(X) is an Hermitian operator, while in CM it is a function. The Schrödinger equation is H|vi] = Ei|vi], where Ei are the eigenvalues and |vi] are the eigenfunctions of the operator H. To obtain Ei and |vi], we must solve a differential equation of type
y'' + Ax y = 0.
The wave packet can be expressed as
|f] = Sum(i, ci|vi]). The wave packet represents the state of the particle, in this case the stone. I've treated the stone as point-like, i.e., an elementary particle. In QM, [X] is not the position of the particle (stone), but we can consider a ball centred at [X] where it's very likely to find the particle. I'd like you to realise that [X] moves according to Newton's second law. It can be shown that CM is a limiting case of QM.
By the way, in this particular case, we don't need to make the approximation:
... the wave packet f(x) is much narrower than the distances over which (dV/dx) varies appreciably ...
because the equality
[dV/dx] = (dV/dx)(x=[X])
is exact: both sides equal mg.
Sorry if I've insulted you, but I was very angry because you criticised me. I think you should agree that I've solved the problem using QM.
Zephir said...
/*..we can define V(X) = mg X if we want..*/
Sorry - I know it's quite natural for you to think in such a straightforward way and to mix various theories and theorems into a single one - but this equality has nothing to do with quantum mechanics, because quantum mechanics doesn't know what "g" is. Not to mention that the result is unphysical from the QM perspective, given the intrinsic spreading of every QM packet, as you mentioned above.
If you get angry so easily when somebody criticizes you, you should be more careful when you do the same to someone else. My description of reality cannot depend on whether we can derive a formal model of it - or not. For example, turbulence, galaxy formation, or density fluctuations inside a gas all exist, albeit we still have no formal description of such phenomena. The sequential logic of formal math is apparently less effective when parallel systems of many particles are involved.
We can still model these phenomena in computer simulations at the particle level with cellular automata models, which don't require introducing any physical model with measured constants into the description (lattice-Boltzmann models, for example).
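A toy example of what such constant-free, particle-level modelling looks like (a minimal Python sketch of unbiased random walkers on a 1-D lattice, offered only as an illustration, not as any specific lattice-Boltzmann code):

import random

def diffuse(n_walkers=10_000, n_steps=100):
    # Purely rule-based: no physical constants enter the model.
    positions = [0] * n_walkers
    for _ in range(n_steps):
        positions = [x + random.choice((-1, 1)) for x in positions]
    # Root-mean-square spread grows like sqrt(n_steps), as in diffusion.
    return (sum(x * x for x in positions) / n_walkers) ** 0.5

print(diffuse())   # ~10 lattice units after 100 steps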
El Cid said...
Well Zephir,
You win; I've been unable to solve the problem using QM. But I'm not a physicist; I'm the ignorant one. Still, the problem can be solved using QM, and any physicist could have solved this trivial problem. Now, why don't you solve it using AWT?
Zephir said...
I win because I have/use a more general insight into the situation. Whatever equation you derive, I can demonstrate rather easily that such a description has its own limits.
I've lost because my general approach doesn't enable me to model particular situations exactly. I can say we could use a Boltzmann gas simulation on a powerful computer, at least conceptually, blah, blah...
But I still cannot demonstrate any exact particular solution in real time without ad hoc simplifications, which in turn would violate fundamental AWT principles at the nonlocal scale.
As you can see, the whole of AWT is about dualities of reciprocal approaches. The intuitive approach diverges from the exact approach, and you should always decide which approach is more useful for you. Common people revise the results of formal thinkers in an intuitive way, while formal thinkers rectify their intuitive extrapolations with formal models.
James said...
Right, I'm agreeing with El Cid on this one, as he seems to have a much better grasp of physics than you do. He asked you to solve a simple problem and you couldn't; you spluttered and coughed, but there was no solid answer, leading me to deduce that you haven't the faintest notion what you're talking about. Please feel free to prove me wrong, with mathematics preferably.
Besides, wasn't the Aether disproved in the early 20th century?
Zephir said...
/* ...he seems to have a much better grasp of physics than you do...*/
This is irrelevant to what I'm writing here. If you can refute a single sentence from my whole blog, you're welcome to do so.
Concerning the El Cid textbook example: if I were convinced that free fall can be solved in quantum mechanics, I'd have proposed some solution already (1, 2, 3, 4).
But as far as I know, quantum physics involves neither the gravity force nor the gravity constant in its repository, so such an attempt is ridiculous at first sight from my perspective. You can only do it by combining equations from different theories, Newtonian dynamics in particular.
If you or El Cid didn't realize this, why should it be just my problem in understanding physics?
Zephir said...
/*...wasn't an Aether disproved in the early 18th century?...*/
Wasn't the Aether disproval disproved in the early 21st century by me?
James said...
No it wasn't: no experimentation, no maths, therefore no proof, nor even for that matter a viable theory.
Zephir said...
Fortunately contemporary physics already has a number of unexplained experiments, which can be used as logical evidence for many new theories, not just AWT.
It means no new experiments or formal math are necessary for AWT reasoning; predicate logic and existing observations are enough.
Zephir said...
AWT explains dark matter and the omnidirectional universe expansion by a model of ripple-wave dispersion at the water surface.
This dispersion decreases the speed of the waves from the extrinsic perspective, which manifests as omnidirectional Universe expansion from the intrinsic perspective, or as a subtle deceleration which effectively freezes the spreading of light waves; this can be interpreted as these waves spreading through a mass/energy density gradient of the vacuum, i.e. as dark matter. Such a model leads to testable predictions: for example, the deceleration of the Pioneer spacecraft is equal to the product of the Hubble constant and the speed of light, which agrees well (+15% error) with the value observed.
Zephir said...
This 28-page review gives an extensive overview of the current theory and understanding of the rapidly expanding universe via cosmic acceleration (available online for free within the first month of publication).
Zephir said...
Theory of field interactions by T.B. Bon, containing some arithmetic about the Doppler effect of "detuned light" spreading through an infinite Universe.
Zephir said...
Modified gravity as an alternative to dark matter
TeVeS is one of the best extrapolations of relativity, but it still cannot address well all aspects of dark matter where its particle character manifests. In AWT, dark matter is formed both by space-time deformation and by particles of matter trapped in it.
Zephir said...
Dark Matter gone missing in many places: a crisis of modern physics?
Zephir said...
The behavior of dark matter can be understood quite well through the parabiosis of scientists and protoscientists (so-called crackpots), which Web 2.0 technology in particular has enabled. The scientists tend to form a cohesive group, and they tend to repel crackpots from their center. The crackpots are usually individualists; they don't form coalitions, so they act in diaspora. They're attracted to scientists and scientific findings, though, and they tend to surround them. They're particularly sensitive to trends in accidental findings, and you can usually find the "dark strings" of crackpots there. In the AWT universe the dark matter plays the role of an incubator of new galaxies, while the existing clusters of normal matter will gradually dissolve into radiation and neutrinos, which serve as material for new dark matter clusters. You may often observe that elderly scientists become crackpots, or at least engage in suspicious research (like cold fusion).
You may think of the dark matter as formed of mutually gravitationally repulsive particles (of opposite gravitational charge to normal matter), so it tends to fill cosmic space in a uniformly thin manner (in this way it represents the "missing antimatter" of the Universe). The proximity of normal matter (which is gravitationally attractive by itself) leads to the concentration of dark matter at the perimeter of massive objects. When three or more massive objects lie along a single line, the dark matter tends to concentrate along this line too, because its mutual repulsion is shielded by the massive objects along the line, and it forms the dark matter fibers. Of course, such behavior of dark matter has nothing to do with MOND theory; it essentially violates it instead.
Zephir said...
List #2: extra dimensions, scalar field, quintessence, mirror matter, quantum gravitation, axions, dilatons, inflatons, heavy and dark photons, leptoquarks, dark atoms, fat strings and gravitons, magnetic monopoles and anapoles, sterile neutrinos, colorons, fractionally charged particles, chameleon particles, dark fluid and dark baryons, photinos, gluinos, gauginos, gravitinos and sparticles and WIMPs, SIMPs, MACHOs, RAMBOs, DAEMONs, Randall–Sundrum 5-D phenomena (dark gravitons, K-K gluons and micro black holes).
b311720e69dfaf53 |
Should creationists accept quantum mechanics?
The spectrum in a rainbow. Credit: Wikipedia
Published: 25 November 2011 (GMT+10)
Quantum mechanics is one of the brand new ideas to emerge in physics in the 20th century. But is it something creationists should believe? I argue “yes” for two reasons:
1. The evidence supports it: QM solved problems that baffled classical physics, and has passed numerous scientific tests.
2. Fighting against an operational-science idea would mean fighting a battle on two fronts, so there is nothing to be gained by diverting our energies into an area that does nothing to further the creation cause.
Although quantum mechanics is rather outside the scope of our ministry, since it concerns operational science rather than origins, we do receive questions about QM quite often. And we also sometimes receive requests to sponsor various critics of this field. This paper tries to summarize, with as little technical detail as possible, why QM was developed, the overwhelming evidence for it, as well as the lack of any viable alternative. Finally, the pragmatic issue: jumping on an anti-QM bandwagon would just make our job harder and provide not the least benefit to the creation cause.
Backdrop: Classical (Newtonian) physics
Sir Isaac Newton (1642/3–1727) was probably the greatest scientist of all time, discovering the spectrum of light as well as the laws of motion, gravity, and cooling; and also inventing the reflecting telescope and jointly inventing calculus. Yet he wrote more about the Bible than science, and was a creationist1 (and nothing discovered after Darwin would change that).2
Newton’s prowess in science was such that English poet Alexander Pope (1688–1744) wrote the famous epitaph:
Nature and nature’s laws lay hid in night;
God said “Let Newton be” and all was light.
Such was his influence that Albert Michelson (1852–1931), the first American to win the Nobel Prize in physics, asserted that the grand underlying principles of physical science had been firmly established.
All that remained, he thought, was more and more precise measurement. He quoted the creationist physicist William Thomson, 1st Baron Kelvin (1824–1907): “the future truths of physical science are to be looked for in the sixth place of decimals.”
Now such statements mainly produce mirth. Even Kelvin himself recognized two “dark clouds” hanging over classical physics, which known theories could not explain:
1. The experiment of Michelson and Morley (1838–1923), which showed effectively no difference in the measured speed of light regardless of direction; this was to be solved by Einstein’s theory of special relativity, which is outside the scope of this article. Suffice it to say, Einstein made it clear that he deduced many of his ideas from the electromagnetism equations of James Clerk Maxwell, a great creationist classical physicist.4 Furthermore, relativity hasn’t the slightest thing to do with moral relativism: it replaces absolute time and space with another absolute, the speed of light in a vacuum. To underscore this point, Einstein himself preferred the term ‘Invariance Theory’. Finally, creationist physicist Dr Russell Humphreys showed that relativity was an ally of creation, not a foe, and most creationist physicists since then have agreed.
2. Black body radiation, which as will be shown, was one of the mysteries to be solved by quantum mechanics.
Three clouds
Actually, there were three main problems that stumped Newtonian ‘classical’ physics, and quantum mechanics solved them. Despite what some claim, QM is totally unlike Darwinian evolution: QM was driven by unsolved problems and supported by the evidence, not by any hidden agenda against a Creator. Furthermore, most of the pioneers were reluctant to abandon classical physics.
Another point which seems to be forgotten by some QM critics: the Creation/Fall/Flood is a historical framework taught by the Bible; classical physics is at best just a model to explain how God upholds His creation, not a direct teaching of Scripture. So disagreements with classical physics are in no way like the contradictions of biblical history by uniformitarian geologists and evolutionary biologists.
We also should notice how many of the discoveries that led to QM were rewarded with a Nobel Prize for Physics. By contrast, one gripe of evolutionists is the lack of an award for evolutionary biology;5 Nobel Prizes are awarded only for practical, testable science.6
1. Blackbody radiation
A blackbody is an idealized perfect absorber of all radiation, and as a consequence, is also a perfect emitter. The best approximation to this is a material called super-black, with tiny cavities, actually modeled on the wing rims of certain butterflies.7
Max Planck (1858–1947)
Classical physics predicted that the black body would be a ‘vibrator’ with certain modes, which had different frequencies. And it also predicted that every mode would have the same energy, proportional to temperature (called the Equipartition Theorem). The problem is that there would be more modes at short wavelengths, thus high frequencies, so these modes would have most of the energy. Classical physics led to the Rayleigh–Jeans Law,8 which stated that the energy emitted at a given frequency was proportional to the square of that frequency (equivalently, to the inverse fourth power of the wavelength).
This worked well for low frequencies, but predicted that the radiation would be more and more intense at higher frequencies, i.e. the ultraviolet region of the spectrum and beyond. In fact, it would tend towards infinite energy—clearly this is impossible, hence the term ‘ultraviolet catastrophe’.
Max Planck (1858–1947) solved this problem. Instead of the classical idea, that any mode of oscillation could have any energy, he proposed that they could have only discrete amounts: packets of energy proportional to the frequency. That is, E = hν, where E is energy, ν (Greek letter nu) is frequency, and h is now called Planck’s constant.9 This meant that a mode could not be activated unless it had this minimum amount of energy. The new Planck’s Law matched the observations extremely well at both high and low frequencies. He won the 1918 physics Nobel (awarded in 1919) “in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta.”10
Actually, Planck himself was not thinking that he had solved a catastrophe, just that his idea fitted the data well. Rather, he rightly realized that the equipartition theorem was not applicable.11 Interestingly enough, he was sympathetic to Christianity and critical of atheism.12
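The contrast between the two laws is easy to see numerically. Below is a minimal Python sketch comparing the Rayleigh–Jeans and Planck spectral radiance formulas (standard textbook forms; the temperature and frequencies are illustrative choices, not values from this article):

import math

h = 6.626e-34    # Planck's constant, J*s
k = 1.381e-23    # Boltzmann's constant, J/K
c = 2.998e8      # speed of light, m/s

def rayleigh_jeans(nu, T):
    return 2 * nu**2 * k * T / c**2          # grows without bound with frequency

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 5000.0                                   # illustrative temperature, K
for nu in (1e13, 1e14, 1e15):                # infrared up to ultraviolet
    print(f"nu = {nu:.0e} Hz: RJ = {rayleigh_jeans(nu, T):.2e}, Planck = {planck(nu, T):.2e}")

At low frequencies the two formulas agree; at high frequencies the Rayleigh–Jeans value keeps climbing toward the ‘catastrophe’ while the Planck value falls away exponentially.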
2. Photo-electric effect
We all know about solar cells now, but over a century ago, the photo-electric effect behind them was a mystery. It was discovered that light could knock electrons out of a material, but the electron energy had nothing to do with intensity of the light, but rather with the frequency. Furthermore, light below a certain threshold frequency had no effect. Very curiously: bright red light (low-frequency) would not work, while faint ultraviolet light (high-frequency) would, even though the energy of the red light was far greater in such cases.
Einstein solved this by proposing that light itself was quantized: came in packets of energy:
According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of energy quanta that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole.13
Only if the energy packet were greater than the binding energy of the electron would it be emitted. The resulting electron energy would be the difference of the light packet energy and binding energy. So while Planck proposed quantized oscillators, Einstein proposed that electromagnetic radiation was quantized.
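In modern notation, the ejected electron’s maximum kinetic energy is K = hν - φ, where φ is the binding (work function) energy. A minimal Python sketch (the work function value below is an illustrative one, roughly that of sodium):

h = 6.626e-34          # Planck's constant, J*s
eV = 1.602e-19         # joules per electronvolt
phi = 2.3 * eV         # illustrative work function (roughly that of sodium)

def electron_energy_eV(nu):
    # Max kinetic energy in eV, or None below the threshold frequency.
    K = h * nu - phi
    return K / eV if K > 0 else None

print(electron_energy_eV(4.0e14))   # bright red light: below threshold -> None
print(electron_energy_eV(1.0e15))   # faint ultraviolet: ~1.8 eV electrons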
It was explicitly for this discovery, not relativity, that Einstein was awarded the 1921 Nobel Prize for Physics:
… for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect.14
Einstein called this Lichtquant or light quantum, but the American physical chemist Gilbert Newton Lewis (1875–1946) coined the term photon15 which stuck.
Ironically, like Planck, Einstein didn’t conceive of himself as anything more than a classicist. He later vocally opposed the prevailing quantum mechanical interpretations by the Dane Niels Bohr (1885–1962), now called the Copenhagen Interpretation.
3. Atoms
Newton’s discoveries in the spectrum of light presumed that colour was continuous. But when the spectra of individual atoms were measured, they emitted light at discrete frequencies (or absorbed it—dark lines in a “white light” spectrum).
Furthermore, the New Zealand physicist Ernest Rutherford (1871–1937) showed that most of the mass of the atom was concentrated in a tiny positively charged nucleus, and proposed that electrons orbited like the planets around the sun. The Rutherford model is iconic: it’s what most people imagine when they think of atoms, and it is even used in the logo of the United States Atomic Energy Commission and the flag of the International Atomic Energy Agency. Rutherford inexplicably missed out on the Nobel Prize for Physics; the Nobel Prize committee magically transformed him into a chemist instead, awarding him the Chemistry Prize “for his investigations into the disintegration of the elements, and the chemistry of radioactive substances.”16
However, classical physics predicted that orbiting charged particles like electrons would lose energy to electromagnetic radiation. So their orbits would decay. This, of course, is not what is observed.
To solve this problem, Bohr proposed in 1913 that electrons could only move in discrete orbits, and that these orbits were stable indefinitely. Energy was gained or lost only when the electrons changed orbits, absorbing or emitting electromagnetic radiation—photons of frequency ν = E/h, where E is the energy difference between the states. For electrons in higher energy or ‘excited’ states, this transition would mostly be spontaneous.
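For hydrogen, combining Bohr’s orbits with ν = E/h reproduces the observed discrete spectral lines. A minimal Python sketch (the 13.6 eV ground-state figure is the standard hydrogen value, not quoted in this article):

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def emitted_wavelength_nm(n_hi, n_lo):
    # Energy released when the electron drops from orbit n_hi to n_lo,
    # using the Bohr levels E_n = -13.6 eV / n^2; then nu = E/h, lambda = c/nu.
    dE = 13.6 * eV * (1 / n_lo**2 - 1 / n_hi**2)
    return h * c / dE * 1e9

print(emitted_wavelength_nm(3, 2))   # ~656 nm: the red H-alpha Balmer line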
Stimulated emission and lasers
In 1917, Einstein realized that a photon with the same energy as the energy difference could increase the probability of this transition.17 Such stimulated emission would produce another photon with the same energy, phase, polarization and direction of travel as the incident photon. This was the first paper to show that atomic transitions would obey simple statistical laws, so was very important for the development of QM. On the practical side, it is immensely valuable, because it is also the basis for masers and lasers. These words were acronyms for Microwave/Light Amplification by Stimulated Emission of Radiation. As a result:
The Nobel Prize in Physics 1964 was divided, one half awarded to Charles Hard Townes, the other half jointly to Nicolay Gennadiyevich Basov and Aleksandr Mikhailovich Prokhorov “for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser-laser principle.”18
My own green laser pointer relies on an additional QM effect called “second harmonic generation” or “frequency doubling”. Here, two photons are absorbed in certain materials with non-linear optics, and a photon with the combined energy is emitted. In this case, an infrared source with a wavelength of 808 nm pumps an infrared laser with a lower energy of 1064 nm, and this frequency is doubled to produce a green laser beam of 532 nm.
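The energy bookkeeping of that frequency doubling is a one-line check: two photons at 1064 nm carry the energy of one photon at 532 nm (a minimal Python sketch using the wavelengths given above):

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_J(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9)    # E = h*nu = h*c/lambda

print(2 * photon_energy_J(1064))   # two infrared photons: ~3.73e-19 J
print(photon_energy_J(532))        # one green photon: the same energy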
Rutherford–Bohr model of the hydrogen atom. Credit: Wikipedia
Bohr’s model strictly applied only to one-electron atoms such as H, He+, Li2+ etc., but he extended it to multi-electron atoms. He proposed that these discrete energy levels could hold only a certain number of electrons—electron shells. This explains the relative inertness of the ‘noble gases’: they already have full shells, so no need to chemically react with another atom to achieve them. It also explains the highly reactive alkali metals: they have one electron over, so can lose it relatively easily to achieve the all-full shell configuration; and the halogens are one electron short, so vigorously try to acquire that one remaining electron from another atom. An illustration of both is the alkali metal halide sodium chloride.
High-school chemistry typically doesn’t go past the Bohr model approach. University chemistry tends to go deeper into more modern quantum mechanics (atomic and molecular orbital theory), of which the Bohr model was a pioneering attempt. Bohr won the physics Nobel in 1922 “for his services in the investigation of the structure of atoms and of the radiation emanating from them.”19
Like Heisenberg and Einstein, Bohr was not happy with aspects of quantum mechanics. In Bohr’s case, for a long time, he was a determined opponent of the existence of photons, trying to preserve continuity in electromagnetic radiation. Bohr also introduced the ‘correspondence principle’: that the new quantum theory must approach classical physics in its predictions when the quantum numbers are large (similarly, relativity theory collapses to ordinary Newtonian physics with velocities that are much smaller than that of light).
Wave-particle duality
The French historian-turned-physicist Louis-Victor-Pierre-Raymond, 7th duc de Broglie (1892–1987) provided another essential concept of quantum mechanics. Just as energy of vibrators and electromagnetic radiation was quantized into discrete packets with particle-like properties, de Broglie proposed that all moving particles had an associated wave-like nature. The wavelength was inversely proportional to momentum, again using Planck’s Constant: λ = h/p, where λ (Greek letter lambda) is wavelength, and p = momentum. This was the subject of his Ph.D. thesis in 1924.20 His own examiners didn’t know what to think, so they asked Einstein. Einstein was most impressed, so de Broglie was awarded his doctorate. Only five years later, he was awarded the Physics Nobel “for his discovery of the wave nature of electrons.”21
It is notable that this prize was awarded before the wave nature of electrons was proven. This happened beyond reasonable doubt when Clinton Joseph Davisson (1881–1958) and George Paget Thomson (1892–1975) were awarded the 1937 Physics Nobel “for their experimental discovery [made independently of each other] of the diffraction of electrons by crystals.”22 Thomson was the son of J.J. Thomson (1856–1940), who discovered the electron itself. For example, electrons can produce the classic ‘double slit’ interference pattern of alternating ‘light’ and ‘dark’ bands. This pattern is produced even when only one electron goes through the apparatus at a time.
The discovery of matter waves was instrumental for electron microscopes. These allow smaller objects to be seen than optical microscopes can resolve, because the electrons have a shorter wavelength than visible light. The same principle is used for probing atomic arrangements with neutron diffraction—neutrons are almost 2,000 times more massive than electrons, so at comparable speeds they have much more momentum, and thus an even shorter wavelength.
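To make the scales concrete, here is a minimal sketch computing de Broglie wavelengths λ = h/p for an electron and a ‘thermal’ neutron (the kinetic energies are my own illustrative choices, not values from the article):

```python
import math

h   = 6.62607015e-34      # Planck's constant, J*s
eV  = 1.602176634e-19     # joules per electron-volt
m_e = 9.1093837015e-31    # electron mass, kg
m_n = 1.67492749804e-27   # neutron mass, kg

def de_broglie_nm(mass_kg, kinetic_eV):
    """de Broglie wavelength lambda = h/p, non-relativistic, in nm."""
    p = math.sqrt(2 * mass_kg * kinetic_eV * eV)   # p = sqrt(2mE)
    return h / p * 1e9

print(f"100 eV electron : {de_broglie_nm(m_e, 100.0):.3f} nm")
print(f"25 meV neutron  : {de_broglie_nm(m_n, 0.025):.3f} nm")
# Both come out near 0.1-0.2 nm, i.e. atomic spacings, which is why
# electron microscopy and neutron diffraction can resolve such structure.
```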
Thus de Broglie showed that at a foundational level, both radiation and matter behave as both waves and particles, a point he himself reflected on when writing almost half a century later.23
Mathematical formulations
In 1925, Werner Heisenberg (1901–1976) formulated a mathematical model to explain the intensity of hydrogen spectral lines. He was then the assistant of Max Born (1882–1970), who recognized that matrix algebra would best explain Heisenberg’s work. Heisenberg was recognized with the 1932 physics Nobel “for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen.”24
The following year, Erwin Schrödinger (1887–1961) developed de Broglie’s ideas of matter waves into the eponymous Schrödinger equation. This describes a physical system in terms of the wavefunction (symbol ψ or Ψ—lower case and capital psi), and how it changes over time. For a system not changing over time, ‘standing wave’ solutions allow the calculation of the possible allowable stationary states and their energies. This brilliantly predicted the energy levels of the hydrogen atom. Later these stationary states were called atomic orbitals. Applied to molecules, they are molecular orbitals, without which much of modern chemistry would be impossible. Other applications of this equation included the calculation of molecular vibrational and rotational states.
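The ‘standing wave’ solutions mentioned above can even be found numerically. Here is a minimal sketch (my own, not from the article) that discretizes the time-independent Schrödinger equation for a particle in a 1D box with infinite walls and compares the lowest eigenvalues with the exact result E_n = n²π²ħ²/(2mL²), in units where ħ = m = L = 1:

```python
import numpy as np

N = 500                        # interior grid points on (0, 1)
dx = 1.0 / (N + 1)

# Finite-difference Hamiltonian for -1/2 d^2/dx^2 with psi = 0 at the walls:
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E_numeric = np.linalg.eigvalsh(H)[:4]            # four lowest energies
E_exact = (np.arange(1, 5) ** 2) * np.pi**2 / 2  # n^2 pi^2 / 2
for n, (num, ex) in enumerate(zip(E_numeric, E_exact), start=1):
    print(f"n={n}: numerical {num:.4f}   exact {ex:.4f}")
```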
Schrödinger’s treatment, as he showed, was equivalent to Heisenberg’s: the stationary states correspond to eigenstates, and the energies to eigenvalues (eigen is the German word for ‘own’ in the sense of ‘peculiar’ or ‘characteristic’). The overall wavefunction could be considered as a superposition of the eigenstates. Just as Einstein had warmly embraced de Broglie’s idea, he did the same with Schrödinger’s, regarding it as a more ‘physical’ theory than Heisenberg’s matrices. In 1930, Paul Dirac (1902–1984) combined the two into a single mathematical treatment. Schrödinger and Dirac shared the 1933 Nobel Prize for Physics “for the discovery of new productive forms of atomic theory.”25
Schrödinger was another reluctant convert to QM—he hoped that his wave equation would avoid discontinuous quantum jumps. But he was to be disappointed: in 1926, Max Born showed that Ψ itself has no direct physical meaning; rather, the square of its magnitude |Ψ|² (or Ψ*Ψ) is proportional to the probability of finding the particle localised at that place. Partly for political reasons, amid the turmoil of the rise of National Socialism in his country, Germany, Born wasn’t awarded the Nobel Prize for physics until 1954, a half share “for his fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction.”26
Weird things
Here is where we find the root of much opposition: the apparently strange things that quantum mechanics predicts.
Uncertainty principle
Heisenberg recognized a fundamental limit to what could be measured. For example, try to measure the position and momentum of an electron as finely as possible by shining a photon on it. To pin down the position better, we need a smaller wavelength. But as de Broglie showed, the shorter the wavelength, the larger the momentum, and thus the more that can be transferred to the electron. So the electron’s momentum cannot be known precisely. And if we reduce the momentum of the photon to avoid disturbing the electron too much, the wavelength increases, so its position becomes less certain—it is smeared out in space. Thus as Heisenberg said: “It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.”27 To be precise, the uncertainties in position and momentum are related through Planck’s Constant: ΔxΔp ≥ h/4π. The same applies to energy and time: ΔEΔt ≥ h/4π.
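An order-of-magnitude sketch of what ΔxΔp ≥ h/4π implies (the confinement length below is my own assumed figure, roughly an atomic diameter):

```python
import math

h   = 6.62607015e-34     # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

dx = 0.1e-9                        # confine an electron to ~0.1 nm
dp = h / (4 * math.pi * dx)        # minimum momentum uncertainty
print(f"Delta-p >= {dp:.2e} kg*m/s")
print(f"Delta-v >= {dp / m_e:.2e} m/s")   # ~6e5 m/s: far from negligible
```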
Actually, there was a precedent for this in the remarkably productive mind of Einstein: he had recognized that there would be a residual energy even at absolute zero, which he called Nullpunktsenergie,28 or in English zero-point energy. It is easily explained in terms of the uncertainty principle: if there were a zero-energy state in some crystal lattice with fixed atomic positions, it would entail that the atoms’ positions and momenta could be known with total precision. To avoid this, there must be some residual energy.
This is strikingly demonstrated by helium, which cannot be solidified no matter how cold it gets, except under very high pressures (about 25 atmospheres): the zero-point energy would shake any solid lattice apart.
But despite Einstein’s contribution, he detested the uncertainty principle. In the years around 1930, he debated Bohr on various ways around it. These two admired each other greatly, but most physicists thought that Bohr had the better of the arguments—in one famous riposte, he used Einstein’s own theory of general relativity to defeat an ingenious thought experiment.
Interpretations of QM
Some creationist (and non-creationist) physicists accept QM but propose a more realist interpretation, just as Einstein and Schrödinger advocated; physicist Dr Russell Humphreys, for example, has explained as much (personal communication). But many of the creationist critics of QM confuse QM itself with interpretations of QM.
Another strange effect is “entanglement”: two particles interact and thus share the same quantum state until a measurement is made. We do know something about them, say that their ‘spins’ must be opposite; we just don’t know which one has which spin. The particles then go their separate ways. When we later measure one of them and find that it has, say, anticlockwise spin, the other one must instantly adopt clockwise spin—and so it will prove when it’s measured at any later time, as long as the entanglement is not otherwise disrupted. Both Einstein and Schrödinger disliked the apparent implication that this correlation would travel much faster than light. But many experiments are consistent with this implication, for example one with entangled photons:
The results also set a lower bound on the ‘speed of quantum information’ of 2/3 × 10⁷ and 3/2 × 10⁴ times the speed of light in the Geneva and the background radiation reference frames, respectively.31
To put this into perspective, Newton’s conception of gravitation was criticized at the time for postulating an ‘occult’ action-at-a-distance force which he thought acted instantly (under General Relativity, the force of gravity moves at the speed of light). There is no reason why God’s upholding of His creation (cf. Colossians 1:15) should be limited by the speed of light, especially as God is the creator of time itself.
More evidence
I could not have worked in my own specialist area of spectroscopy unless molecules had quantized energy states: in my case especially vibrational states, but electronic and rotational states as well.
Superconductors and superfluids
Other interesting evidences include superconductors, which I have also researched,32 and superfluids. These are substances with exactly zero resistivity and zero viscosity, respectively.
These are rare examples of quantum behaviour on the macro level. They are related to yet another prediction by Einstein, this time with Satyendra Nath Bose (1894–1974): they realized that at very low temperatures, the wavefunctions of identical particles could overlap to form a single quantum state, now called a Bose–Einstein Condensate.
This easily explains why it’s possible to have zero resistance and viscosity. A current of electrons or a flowing fluid usually loses energy to the surrounding materials, but if the carriers are all in a single quantum state, any possible energy loss is quantized, so it cannot occur below that threshold. Superfluids also exhibit quantized vortices.
Woodward–Hoffmann rules for electrocyclic reactions
One class of organic reactions is electrocyclic, where a conjugated unsaturated “straight” chain hydrocarbon closes into a ring, or the reverse. To do this, there must be some rotation: either the two ends rotate in the same sense (both clockwise or both anticlockwise), called conrotatory; or in opposite senses (one clockwise and the other anticlockwise), called disrotatory. Whether a given reaction is conrotatory or disrotatory turns out to be completely determined. Robert Burns Woodward (1917–1979) and Roald Hoffmann (1937– ) worked out the eponymous rules, based on the conservation of symmetry of the molecular orbitals, which no known classical model could predict.
In particular, the lobes of the molecular orbital can form a bond only if the wavefunction has the same sign (positive or negative), and this can be achieved only by rotation in one of the two possible types (conrotatory or disrotatory). Furthermore, a photochemical reaction turns out to have the opposite symmetry, also explained because the photon excites an electron into another orbital with a different symmetry.
Hoffmann shared the 1981 Chemistry Nobel with Kenichi Fukui (1918–1998) “for their theories, developed independently, concerning the course of chemical reactions.” Woodward had died before he could be awarded what would have been his second Nobel Chemistry Prize.
Designs in nature using QM
Another good reason to support QM is that it is proving to be an ally of the creation model. Some time ago I wrote on how our sense of smell works in accordance with vibrational spectroscopy and quantum mechanical tunneling:
Luca Turin, a biophysicist at University College, London, proposed a mechanism [33,34] where an electron tunnels from a donor site to an acceptor site on the receptor molecule, causing it to release the G-protein. Tunnelling requires both the starting and finishing points to have the same energy, but Turin believes that the donor site has a higher energy than the acceptor. The energy difference is precisely that needed to excite the odour molecule into a higher vibrational quantum state. Therefore when the odour molecule lands, it can absorb the right amount of the electron’s energy, enabling tunnelling through its orbitals. This means the smell receptors actually detect the energy of vibrational quantum transitions in the odour molecules, as first proposed by G.M. Dyson in 1937.35
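A rough check of the energy scale in Turin’s proposal (a sketch with an assumed, typical molecular stretching mode of ~2500 cm⁻¹; the article does not give a specific wavenumber): the vibrational quantum E = hc·ν̃ that the tunnelling electron would have to shed is a fraction of an electron-volt.

```python
h  = 6.62607015e-34    # Planck's constant, J*s
c  = 2.99792458e10     # speed of light in cm/s, to match cm^-1 units
eV = 1.602176634e-19   # joules per electron-volt

wavenumber = 2500.0    # cm^-1, hypothetical odorant vibrational mode
E = h * c * wavenumber # E = h * c * nu-tilde
print(f"{wavenumber:.0f} cm^-1  ->  {E / eV:.3f} eV")   # ~0.31 eV
```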
More recent support comes from studies in bird navigation. For some time now, it has been known that birds and many other creatures use the earth’s magnetic field.36 But in European robins, red and yellow light somehow disorients their magnetic sense. So some researchers proposed that light causes one of the eye proteins to produce a pair of ‘entangled’ electrons with opposite spins. Again, we don’t know which is which until a measurement occurs, and here this ‘measurement’ is caused by some difference in the earth’s magnetic field. Thus the other electron must instantly adopt the opposite spin, which the bird detects, somehow computing the information about the magnetic field. The birds are disoriented by a weak oscillating magnetic field, which could not affect a macro-magnet like a magnetite crystal, but would disrupt an entangled pair.37
A recent paper paid its usual vacuous homage to evolution:
In artificial systems, quantum superposition and entanglement typically decay rapidly unless cryogenic temperatures are used. Could life have evolved to exploit such delicate phenomena? Certain migratory birds have the ability to sense very subtle variations in Earth’s magnetic field. Here we apply quantum information theory and the widely accepted “radical pair” model to analyze recent experimental observations of the avian compass. We find that superposition and entanglement are sustained in this living system for at least tens of microseconds, exceeding the durations achieved in the best comparable man-made molecular systems. This conclusion is starkly at variance with the view that life is too “warm and wet” for such quantum phenomena to endure.38
Of course, this is more evidence of a Designer whose techniques far exceed the best that man can do—in this case, maintaining quantum entanglement far longer than we can!39
Also, supposedly primitive purple bacteria exploit quantum mechanics to make their photosynthesis 95% efficient. They use a complex of tiny antennae to harvest light, but this complex can be distorted, which could harm efficiency. However, although the complex absorbs only a single photon at a time, the photon’s wave nature means it is briefly everywhere in the antenna complex at once. Then, of all possible pathways, it is absorbed in the most efficient manner, regardless of any shape changes in the complex. As with the previous example, quantum coherence is normally observable only at extremely low temperatures, but these bacteria manage at ordinary temperatures.40
Quantum mechanics really works, and has been strongly supported by experiment. The history and practice of QM shows no hidden motivation to attack a biblical world view, in contrast to uniformitarian geology and evolutionary biology. Any proposed replacement theory needs to explain at least all the observations that QM does. This is not a specifically creationist project.
It seems wise for creationists to adopt the prevailing theories of operational science unless there are good observational reasons not to. Otherwise it could give the impression that we are anti-establishment for its own sake, rather than pro-Bible and opposing the establishment only when it contradicts biblical history. Fighting on two fronts has usually been a losing battle strategy. Rather, as previously with relativity, it makes more sense to co-opt it as an ally of creation, as with some of the design features in nature.
1. LaMont, A., Sir Isaac Newton (1642/3–1727): A Scientific Genius, Creation 12(3):48–51, 1990; Return to text.
2. See Sarfati, J., Newton was a creationist only because there was no alternative? (response to critic), 29 July 2002. The critic I was replying to later wrote thanking CMI for the response, and to say that he no longer agreed with the sentiments of his original letter. He was happy for his original letter and response to remain as a teaching point for others who might need correcting. Return to text.
3. Michelson, A.A., Light Waves And Their Uses, pp. 23–25, University of Chicago Press, 1903. Return to text.
4. Lamont, A., James Clerk Maxwell, Creation 15(3):45–47, 1993; Maxwell argued that an oscillating electrical field would generate an oscillating magnetic field, which in turn would generate an oscillating electrical field, and so on. Thus it would be related to the core electromagnetic constants: the permittivity (ε0) and permeability (µ0) of free space, which relate the strengths of electric and magnetic attractions. E.g. Coulomb’s Law is F = q1q2/(4πε0r²). Maxwell showed that this radiation would propagate at a speed c given by c² = 1/(ε0µ0). When the speed of light was found to match this, Maxwell deduced that light must be an electromagnetic wave. Einstein reasoned that since permittivity and permeability are constant for every observer, the speed of light must also be invariant, and instead time and length vary. Return to text.
5. Call for new Nobel prizes to honour ‘forgotten’ scientists, 30 September 2009, archived at Return to text.
6. Except for the 2006 Nobel Prize for physics, which involved proof of the unobserved big bang involving unobserved dark matter. See Sarfati, J., Nobel Prize for alleged big bang proof, 7–8 October 2006. Return to text.
7. Sarfati, J., Beautiful black and blue butterflies, J. Creation 19(1):9–10, 2005; Return to text.
8. After John William Strutt, 3rd Baron Rayleigh, OM (1842–1919) and James Hopwood Jeans (1877–1946). Return to text.
9. h = 6.62606957(29)×10⁻³⁴ J·s. Return to text.
10. Return to text.
11. Galison, P., “Kuhn and the Quantum Controversy”, British J. Philosophy of Science 32(1): 71–85, 1981 P-I-P-E doi:10.1093/bjps/32.1.71 Return to text.
12. Seeger, R., Planck: Physicist, J. American Scientific Affiliation 37:232–233, 1985. Return to text.
13. Einstein, A. Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt (On a Heuristic Viewpoint Concerning the Production and Transformation of Light), Annalen der Physik 17(6):132–148, 1905 P-I-P-E doi:10.1002/andp.19053220607. Return to text.
14. Return to text.
15. From phōs (φῶς) light and ōn (ὢν) = being/one. Return to text.
16. Return to text.
17. Einstein, A., Zur Quantentheorie der Strahlung (On the Quantum Theory of Radiation), Physikalische Zeitschrift 18:121–128, 1917. Return to text.
18. Return to text.
19. Return to text.
20. Recherches sur la théorie des quanta (Research on the Theory of the Quanta). Return to text.
21. Return to text.
22. Return to text.
23. de Broglie, L., The reinterpretation of wave mechanics, Foundations of Physics 1(1), 1970. Return to text.
24. Return to text.
25. Return to text.
26. Return to text.
27. Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30. Return to text.
28. Einstein, A. and Stern, O., Einige Argumente für die Annahme einer molekularen Agitation beim absoluten Nullpunkt (Some arguments in support of the assumption of molecular agitation at absolute zero), Annalen der Physik 40:551–560, 1913. Return to text.
29. Compare Sarfati, J. Loving God with all your mind: logic and creation, J. Creation 12(2):142–151, 1998; Return to text.
30. Holland, P.R., The Quantum Theory of Motion: An Account of the de Broglie–Bohm Causal Interpretation, Cambridge University Press, 1993. Figure 5.7 on page 184, for example, shows the possible paths of a particle going through the two-slit experiment. Return to text.
31. Zbinden, H. et al., Experimental test of relativistic quantum state collapse with moving reference frames, J. Phys. A: Math. Gen. 34:7103, 2001 P-I-P-E doi:10.1088/0305-4470/34/35/334 Return to text.
32. Mawdsley, A., Trodahl, H.J., Tallon, J., Sarfati, J.D., and Kaiser, A.B., Thermoelectric power and electron-phonon enhancement in YBa2Cu3O7-δ, Nature 328(6127):233–234, 16 July 1987. Return to text.
33. Turin, L., A spectroscopic mechanism for primary olfactory reception, Chemical Senses 21:773, 1996. Return to text.
34. See also Turin, L., The Secret of Scent: Adventures in Perfume and the Science of Smell, 2006. Return to text.
35. Sarfati, J., Olfactory design: smell and spectroscopy, J. Creation 12(2):137–138, 1998; Return to text.
36. See for example Sarfati, J., By Design, ch. 5: Orientation and navigation, CBP, 2008. Return to text.
37. Ritz, T., et al., Resonance effects indicate a radical-pair mechanism for avian magnetic compass, Nature 429:177–180, 13 May 2004 P-I-P-E doi:10.1038/nature02534 Return to text.
38. Gauger, E.M. et al., Sustained Quantum Coherence and Entanglement in the Avian Compass, Physical Rev. Lett. 106: 040503, 2011 P-I-P-E doi: 10.1103/PhysRevLett.106.040503. Return to text.
39. See also Wile, J., Birds Use Quantum Mechanics to Navigate?, 26 March 2011. Return to text.
40. Hildner, R. et al., Quantum coherent energy transfer over varying pathways in single light-harvesting complexes, Science 340:1448–1451, 2013 | doi:10.1126/science.1235820. See also Wile, J., “Ancient” Bacteria Use Quantum Mechanics!, 11 July 2013. Return to text.
Readers’ comments
Graham P., New Zealand
Magnificent: A very useful précis of quantum physics history; extremely well written.
David H., UK
This is an excellent brief survey of QM and its history, with some sensible lessons for creationists, and includes some useful examples of recent discoveries. Fascinating, even for someone like me with a background in physical sciences and electronics.
Andrei T., Canada
Thank you so much. My exams are starting this December and I have to know QM for chemistry! My textbook is pretty ‘thick’ on this subject, so this is a great opportunity to study it from different angles!
John T., Canada
A terrific article; Dr. Sarfati is very good on scientific issues.
Philip C., USA
Very nice article! It does a great job of discussing the history and the main issues. The acceptance of QM is the acceptance of a theory that has very good explanatory power. I especially like the statement “But many of the creationist critics of QM confuse QM with interpretations of QM.” I have indeed found this to be true on many occasions. My area of research is fluorescence spectroscopy and computational chemistry. Electronic structure theory and QM play a big role in everyday life for me. I whole heartedly support your article and think it does a great job at explaining the issues. It should be kept in mind that no scientist accepts QM blindly. We are always told and always work within the framework of—this is a useful idea that has great explanatory power; the implications can seem strange at times but this is just a theory that allows us to make a lot of sense of what we observe. We are not obligated to swallow down every interpretation and every oddity to use and support the theory. QM is quite elegant and supremely useful, it makes sense of the data and observations and has led to many real advances in science! This should not be swept under the rug because we “don’t like” a particular interpretation of it. Thanks again Dr. Sarfati and keep up the good work. I support your work even if it is a little more vibrational and not as optical as I prefer!! God Bless!
Nick W., Australia, 31 May 2012
Very helpful, very informative article. I especially appreciated the explanation of Schrödinger’s Cat as a reductio ad absurdum, pointing out that the Copenhagen interpretation necessarily implies violations of the law of non-contradiction.
This was helpful because it allows us to then move on to the causal interpretation. Up until this point I had only heard QM spoken of in terms of the Copenhagen interpretation and it seemed like madness. I therefore sympathise with those Christians who instinctively oppose QM in an effort to maintain the law of non-contradiction.
Thank you so much for clarifying the difference between the observations and the interpretations.
aff206998361c00e | Period 1 element
From Wikipedia, the free encyclopedia
A period 1 element is an element in the first period (row) of the periodic table. The periodic table is arranged in rows to show repeating properties of the elements. As the atomic number increases, the elements' properties change; a new row begins when the chemical properties start to repeat, which means that elements in the same group (column) have similar properties. The first period has fewer elements than any other period in the periodic table: only two, hydrogen and helium. Modern theories of atomic structure explain why: in quantum physics, this period fills up the 1s orbital. Period 1 elements follow the duet rule: they need only two electrons to complete their valence shell. Since both electrons go into the 1s orbital, which can hold only two, period 1 can have only two elements.
Periodic trends
As period 1 only has two elements, there are no remarkable periodic trends.
Position of period 1 elements in the periodic table
Although both hydrogen and helium are in the s-block, they do not behave similarly to other s-block elements. There is argument over where these two elements should be placed in the periodic table.
Hydrogen
Hydrogen is sometimes placed above lithium,[1] sometimes above carbon,[2] sometimes above fluorine,[2][3] sometimes above both lithium and fluorine (appearing twice),[4] and sometimes floating above the other elements, not belonging to any group.[4]
Helium
Helium is almost always placed above neon (which is in the p-block) in the periodic table, because it is a noble gas.[1] However, it is sometimes placed above beryllium instead, because the two have similar electron configurations.[5]
Elements in period 1
Atomic number | Symbol | Name | Chemical series | Electron configuration
1 | H | Hydrogen | Nonmetal | 1s¹
2 | He | Helium | Noble gas | 1s²
Hydrogen
Hydrogen discharge tube
Deuterium discharge tube
Hydrogen (symbol: H) is a chemical element. Its atomic number is 1. At standard temperature and pressure, hydrogen has no color, no smell and no taste. It is a nonmetal, and it is highly flammable. It is a diatomic gas with the molecular formula H2. Its atomic mass is 1.00794 amu, making hydrogen the lightest element.[6]
Hydrogen is the most abundant of the chemical elements: its abundance in the universe is roughly 75%.[7] Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is much rarer on Earth, so it is industrially produced from hydrocarbons such as methane, and is mostly used locally at the production site. The largest markets are almost equally divided between fossil fuel upgrading, such as hydrocracking, and ammonia production, mostly for the fertilizer market. Hydrogen may also be produced from water by electrolysis, but this process is significantly more expensive commercially than hydrogen production from natural gas.[8]
The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons.[9] In ionic compounds, it can take on either a positive charge, becoming a cation composed of a bare proton, or a negative charge, becoming an anion known as a hydride. Hydrogen can form compounds with most elements and is present in water and most organic compounds.[10] It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules.[11] As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and spectrum of the hydrogen atom has played a key role in the development of quantum mechanics.[12]
The interactions of hydrogen with various metals are very important in metallurgy, as many metals can suffer hydrogen embrittlement,[13] and in developing safe ways to store it for use as a fuel.[14] Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals[15] and can be dissolved in both crystalline and amorphous metals.[16] Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.[17]
Helium
Helium discharge tube
Helium (He) is a colorless, odorless, tasteless, non-toxic, inert monatomic chemical element that heads the noble gas series in the periodic table and whose atomic number is 2.[18] Its boiling and melting points are the lowest among the elements and it exists only as a gas except in extreme conditions.[19]
Helium was discovered in 1868 by French astronomer Pierre Janssen, who first detected the substance as an unknown yellow spectral line signature in light from a solar eclipse.[20] In 1903, large reserves of helium were found in the natural gas fields of the United States, which is by far the largest supplier of the gas.[21] The substance is used in cryogenics,[22] in deep-sea breathing systems,[23] to cool superconducting magnets, in helium dating,[24] for inflating balloons,[25] for providing lift in airships,[26] and as a protective gas for industrial uses such as arc welding and growing silicon wafers.[27] Inhaling a small volume of the gas temporarily changes the timbre and quality of the human voice.[28] The behavior of liquid helium-4's two fluid phases, helium I and helium II, is important to researchers studying quantum mechanics and the phenomenon of superfluidity in particular,[29] and to those looking at the effects that temperatures near absolute zero have on matter, such as with superconductivity.[30]
Helium is the second lightest element and is the second most abundant in the observable universe.[31] Most helium was formed during the Big Bang, but new helium is being created as a result of the nuclear fusion of hydrogen in stars.[32] On Earth, helium is relatively rare and is created by the natural decay of some radioactive elements[33] because the alpha particles that are emitted consist of helium nuclei. This radiogenic helium is trapped with natural gas in concentrations of up to seven percent by volume,[34] from which it is extracted commercially by a low-temperature separation process called fractional distillation.[35]
References
1. 1.0 1.1 "International Union of Pure and Applied Chemistry > Periodic Table of the Elements". IUPAC. Retrieved 2011-05-01.
2. 2.0 2.1 Cronyn, Marshall W. (August 2003). "The Proper Place for Hydrogen in the Periodic Table". Journal of Chemical Education 80 (8): 947–951. doi:10.1021/ed080p947.
3. Vinson, Greg (2008). "Hydrogen is a Halogen". Retrieved January 14, 2012.
4. 4.0 4.1 Kaesz, Herb; Atkins, Peter (November–December 2003). "A Central Position for Hydrogen in the Periodic Table". Chemistry International (International Union of Pure and Applied Chemistry) 25 (6): 14. Retrieved January 19, 2012.
5. Winter, Mark (1993–2011). "Janet periodic table". WebElements. Retrieved January 19, 2012.
6. "Hydrogen – Energy". Energy Information Administration. Retrieved 2008-07-15.
7. Palmer, David (November 13, 1997). "Hydrogen in the Universe". NASA. Retrieved 2008-02-05.
8. Staff (2007). "Hydrogen Basics — Production". Florida Solar Energy Center. Retrieved 2008-02-05.
9. Sullivan, Walter (1971-03-11). "Fusion Power Is Still Facing Formidable Difficulties". The New York Times.
10. "hydrogen". Encyclopædia Britannica. (2008).
11. Eustis, S. N.; Radisic, D; Bowen, KH; Bachorz, RA; Haranczyk, M; Schenter, GK; Gutowski, M (2008-02-15). "Electron-Driven Acid-Base Chemistry: Proton Transfer from Hydrogen Chloride to Ammonia". Science 319 (5865): 936–939. doi:10.1126/science.1151614. PMID 18276886.
12. "Time-dependent Schrödinger equation". Encyclopædia Britannica. (2008).
13. Rogers, H. C. (1999). "Hydrogen Embrittlement of Metals". Science 159 (3819): 1057–1064. doi:10.1126/science.159.3819.1057. PMID 17775040.
14. Christensen, C. H.; Nørskov, J. K.; Johannessen, T. (July 9, 2005). "Making society independent of fossil fuels — Danish researchers reveal new technology". Technical University of Denmark. Retrieved 2008-03-28.
15. Takeshita, T.; Wallace, W.E.; Craig, R.S. (1974). "Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt". Inorganic Chemistry 13 (9): 2282–2283. doi:10.1021/ic50139a050.
16. Kirchheim, R.; Mutschele, T.; Kieninger, W (1988). "Hydrogen in amorphous and nanocrystalline metals". Materials Science and Engineering 99: 457–462. doi:10.1016/0025-5416(88)90377-1.
17. Kirchheim, R. (1988). "Hydrogen solubility and diffusivity in defective and amorphous metals". Progress in Materials Science 32 (4): 262–325. doi:10.1016/0079-6425(88)90010-2.
18. "Helium: the essentials". WebElements. Retrieved 2008-07-15.
19. "Helium: physical properties". WebElements. Retrieved 2008-07-15.
20. "Pierre Janssen". MSN Encarta. Retrieved 2008-07-15.
21. Theiss, Leslie (2007-01-18). "Where Has All the Helium Gone?". Bureau of Land Management. Retrieved 2008-07-15.
22. Timmerhaus, Klaus D. (2006-10-06). Cryogenic Engineering: Fifty Years of Progress. Springer. ISBN 0-387-33324-X.
23. Copel, M. (September 1966). "Helium voice unscrambling". Audio and Electroacoustics 14 (3): 122–126. doi:10.1109/TAU.1966.1161862.
24. "helium dating". Encyclopædia Britannica. (2008).
25. Brain, Marshall. "How Helium Balloons Work". How Stuff Works. Retrieved 2008-07-15.
26. Jiwatram, Jaya (2008-07-10). "The Return of the Blimp". Popular Science. Retrieved 2008-07-15.
27. "When good GTAW arcs drift; drafty conditions are bad for welders and their GTAW arcs.". Welding Design & Fabrication. 2005-02-01.
28. Montgomery, Craig (2006-09-04). "Why does inhaling helium make one's voice sound strange?". Scientific American. Retrieved 2008-07-15.
29. "Probable Discovery Of A New, Supersolid, Phase Of Matter". Science Daily. 2004-09-03. Retrieved 2008-07-15.
30. Browne, Malcolm W. (1979-08-21). "Scientists See Peril In Wasting Helium; Scientists See Peril in Waste of Helium". The New York Times.
31. "Helium: geological information". WebElements. Retrieved 2008-07-15.
32. Cox, Tony (1990-02-03). "Origin of the chemical elements". New Scientist. Retrieved 2008-07-15.
33. "Helium supply deflated: production shortages mean some industries and partygoers must squeak by.". Houston Chronicle. 2006-11-05.
34. Brown, David (2008-02-02). "Helium a New Target in New Mexico". American Association of Petroleum Geologists. Retrieved 2008-07-15.
35. Voth, Greg (2006-12-01). "Where Do We Get the Helium We Use?". The Science Teacher.
edf1f26c2067f5cd |
When solving Schrödinger's equation for a 3D quantum well with infinite barriers, my reference states that: $$\psi(x,y,z) = \psi(x)\psi(y)\psi(z) \quad\text{when}\quad V(x,y,z) = V(x) + V(y) + V(z) = V(z).$$ However, I cannot find any rationale for this statement. It may be obvious, but I would appreciate any elucidation.
It's because when $$V(x,y,z) = V_x(x) + V_y(y) + V_z(z),$$ (I guess that your extra identity $V(x,y,z)=V(z)$ is a mistake), we also have $$ H = H_x + H_y + H_z$$ because $H = (\vec p)^2 / 2m + V(x,y,z) $ and $(\vec p)^2 = p_x^2+p_y^2+p_z^2$ decomposes to three pieces as well.
One may also see that the terms such as $H_x\equiv p_x^2/2m+V_x(x)$ commute with each other, $$ [H_x,H_y]=0 $$ and similarly for the $xz$ and $yz$ pairs. That's because the commutators are only nonzero if we consider positions and momenta in the same direction ($x$, $y$, or $z$).
At the end, we want to look for the eigenstates of the Hamiltonian $$ H|\psi\rangle = E |\psi \rangle$$ and because we have $H = H_x+H_y+H_z$, a Hamiltonian composed of three commuting pieces, we may simultaneously diagonalize them i.e. look for the common eigenstates of $H_x,H_y,H_z$, and therefore also $H$. So given the separation condition for the potential, we may also assume $$ H_x |\psi\rangle = E_x |\psi\rangle $$ and similarly for the $y,z$ components.

However, the equation above is just a 1-dimensional problem that implies that $|\psi\rangle$ must depend on $x$ as a one-dimensional quantum mechanical energy eigenstate wave function, $$ \psi(x) = C\cdot \psi_n(x) $$ which is an eigenstate of $H_x$. This has to hold but the normalization factor is undetermined. We usually say that it's a constant but this statement only means that it is independent of $x$. In reality, it may depend on all observables that are not $x$ such as $y,z$. So a more accurate implication of the $H_x$ eigenstate equation is $$ \psi(x,y,z) = C_x(y,z)\cdot \psi_{n_x}(x) .$$

In a similar way, we may show that $$ \psi(x,y,z) = C_y(x,z)\cdot \psi_{n_y}(y) $$ and $$ \psi(x,y,z) = C_z(x,y)\cdot \psi_{n_z}(z) $$ and by combining these three formulae, we see that the whole function must factorize to a product of functions of $x$ and $y$ and $z$ separately.

If you need a rigorous proof of the last simple step, take e.g. the complex logarithms of the three forms for $\psi$ above and compare e.g. the first pair: $$\ln\psi = \ln C_x(y,z) +\ln\psi_{n_x}(x) = \ln C_y(x,z)+\ln \psi_{n_y}(y) $$ Take e.g. the partial derivative of the last equation with respect to $y$: $$ \frac{\partial \ln C_x(y,z)}{\partial y} = \frac{\partial \ln\psi_{n_y}(y) }{ \partial y }$$ The other two (1+1) terms are zero because they didn't depend on $y$. The right hand side above only depends on $y$, so the same must be true for the left hand side. I am going to make a simple conclusion but to make it really transparent, let's differentiate the latter equation over $z$, too. The $\psi_{n_y}$ term disappears as well so we have $$\frac{\partial^2 \ln C_x(y,z)}{\partial y\,\partial z} = 0$$ It means that $\ln C_x(y,z)$ must have the form $K_x(y)+L_x(z)$, and $e^{K_x(y)}e^{L_x(z)}$ must be the remaining factors in the wave function.
We say that the wave function in the product form is a "tensor product" of the three independent one-dimensional wave functions and more "operationally", as another user mentioned, the method described above is the method of "separation of variables".
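As a numerical illustration of this answer (my own sketch, not part of the original): for a 3D box with infinite walls, one can build a 1D finite-difference Hamiltonian and verify that the spectrum of $H = H_x + H_y + H_z$ (a Kronecker sum) is exactly the set of sums of 1D eigenvalues, in units where $\hbar = m = L = 1$:

```python
import numpy as np

N = 10                              # 1D grid points (3D matrix is N^3 x N^3)
dx = 1.0 / (N + 1)
H1 = (np.diag(np.full(N, 1.0 / dx**2))
      + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
      + np.diag(np.full(N - 1, -0.5 / dx**2), -1))  # -1/2 d^2/dx^2, V=0 inside

I = np.eye(N)
# H = H_x + H_y + H_z, each piece acting on its own coordinate factor:
H3 = (np.kron(np.kron(H1, I), I)
      + np.kron(np.kron(I, H1), I)
      + np.kron(np.kron(I, I), H1))

e1 = np.linalg.eigvalsh(H1)
e3 = np.linalg.eigvalsh(H3)
sums = np.sort((e1[:, None, None] + e1[None, :, None]
                + e1[None, None, :]).ravel())
print(np.allclose(e3, sums))   # True: 3D energies are sums of 1D energies
```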
Great answer, thanks! (The V(x,y,z) = V(z) wasn't a mistake, but the special case for 1D QW.) – Halftrack Oct 23 '12 at 12:24
This method is called "separation of variables", and it is one of several strategies for finding solutions to multi-dimensional field problems in physics.
Its big advantage is that solving N 1-dimensional differential equations is generally easier than solving one N-dimensional problem (N>1), but it is contingent on there being a "uniqueness theorem" for the category of problems that you are looking at. Happily many common field problems in physics have such a theorem.
To show that the condition on the potential is required, simply make the separation in the Schrödinger equation and expand. If the potential has the above form, you can write the LHS of the equation as three terms of the form $$ \left( -\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x)}{\partial x^2} + V(x)\,\psi(x) \right)\psi(y)\psi(z) $$ and you can clearly rewrite the whole into three separate one-dimensional eigenvalue conditions like $$ -\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x)}{\partial x^2} + V(x)\,\psi(x) = E_x\,\psi(x) . $$ On the other hand, if the potential cannot be written in this way, you cannot get the LHS into the form you need and you cannot proceed along these lines.
Let me just emphasize that you can find a lot of solutions of the equation that do not satisfy the statement in the question (and cannot be presented as such products), but you can use this statement to find the basis of the set of the solutions of the equation. So any solution can be presented as a linear combination of functions satisfying the statement.
How do you reconcile this with the answer of @lubos-motl? Can you give any examples? – Halftrack Oct 23 '12 at 12:28
Dear akhmeteli and @Halftrack, akhmeteli must mean lots of solutions to the time-dependent Schrodinger equation: solutions may be obtained as linear superpositions of the separated solutions with different energies $E$. For the time-independent "eigenvalue" equation with a single well-defined energy eigenvalue, the separation of variables lists all the solutions (unless there is degeneracy in the spectrum). – Luboš Motl Oct 23 '12 at 16:11
@Halftrack and Luboš Motl: Judging by Luboš Motl's comment, he agrees that the statement is not always satisfied even for the time-independent Schrödinger equation if there is degeneracy. Let me also add that it is not quite obvious from your (Halftrack's) question that it was the time-independent Schrödinger equation that you had in mind, at least I did miss that. – akhmeteli Oct 24 '12 at 0:46
I was only looking to solve the time independent Schrödinger equation, but you are absolutely correct: my question does not state so. If there is degeneracy, how would that break the derivation by @lubos-motl? – Halftrack Oct 24 '12 at 1:03
@Halftrack: I did not study the derivation in detail, but it seems OK. It is important though to understand what he derived. As far as I can judge, he proved that there is a set of eigenstates of the Hamiltonian in the form of your statement (products of three functions), and any eigenstate is a linear combination of eigenstates from that set. However, when the spectrum is degenerate, there are at least two different eigenstates in that set having the same eigenvalue, and linear combinations of such eigenstates are also eigenstates, but they do not have to have the form of your statement. – akhmeteli Oct 24 '12 at 1:49
2a8b4ae7318bdd05 | Semiclassical limit of Quantum Mechanics | PhysicsOverflow
Semiclassical limit of Quantum Mechanics
I find myself often puzzled with the different definitions one gives to "semiclassical limits" in the context of quantum mechanics, in other words limits that eventually turn quantum mechanics into classical mechanics.
In a hand-wavy manner
• Classical or semiclassical limit corresponds to the limit of taking $\hbar \to 0.$
• Often when talking about the correspondence principle, the semiclassical limit is obtained in the limit of large quantum numbers (large orbits and energies).
More precisely
• Exemplary source of confusion: One way to show why $\hbar \to 0$ describes a classical limit is as follows:
Take the $1D$ Schrödinger equation for a particle of mass $m$ in a potential $V(\vec x)$:
\begin{equation} i\hbar \frac{\partial}{\partial t}\psi(\vec x,t) = \left[-\frac{\hbar^2}{2m}\vec \nabla^2+V(\vec x)\right]\psi(\vec x,t) \end{equation}
By inserting $\psi(\vec x,t)=e^{iS(\vec x,t)/\hbar}$ in the Schrödinger equation above, and simplifying for $\psi$, we obtain:
$$ -\frac{\partial S}{\partial t}=\frac{1}{2m}(\vec\nabla S)^2-\frac{i\hbar}{2m}(\vec \nabla^2S)+V $$ Now taking $\hbar \to 0$, the above just becomes the classically well known Hamilton-Jacobi equation, where $S$ describes Hamilton's principal function or the action:
$$ -\frac{\partial S}{\partial t}=\frac{1}{2m}(\vec\nabla S)^2+V $$ Given this result, we can then use an $\hbar$ expansion of $S$ in the second equation. Unfortunately I fail to see why reaching the Hamilton-Jacobi equation necessarily implies classical behavior!
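For readers who want to check the algebra above, here is a small symbolic sketch (my own, using sympy; 1D for brevity):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
S = sp.Function('S')(x, t)
V = sp.Function('V')(x)

psi = sp.exp(sp.I * S / hbar)
# Residual of the Schroedinger equation,
# i*hbar*psi_t - (-hbar^2/2m psi_xx + V psi):
residual = (sp.I * hbar * sp.diff(psi, t)
            + hbar**2 / (2 * m) * sp.diff(psi, x, 2) - V * psi)
eq = sp.simplify(sp.expand(residual / psi))
print(eq)   # should reduce to -S_t - (S_x)**2/(2m) + I*hbar*S_xx/(2m) - V = 0
print(sp.simplify(eq.subs(hbar, 0)))   # hbar -> 0 leaves the Hamilton-Jacobi form
```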
1. Alternatively one talks about classical limits of QM by saying: When Planck's quantum $\hbar$ becomes very small compared with values of the Lagrangian action integral (Feynman's path integral formalism). I probably shouldn't ask this (since the discussion is rather vague here), but is there any neat way of demonstrating the above idea mathematically? (e.g. by showing whether such limit necessarily leads to quantum decoherence and hence the classical trajectories become dominant.)
2. Finally, are the two statements of $\hbar \to 0$ and taking the limit of high quantum numbers somehow equivalent? (i.e. a reformulation of one another?)
Of course any other ways (whether physical or mathematical) of thinking about and understanding semiclassical limits of quantum mechanics are also welcomed as answer.
This post imported from StackExchange Physics at 2015-11-01 21:06 (UTC), posted by SE-user Phonon
asked Oct 14, 2014 in Theoretical Physics by Phonon (50 points) [ no revision ]
Possible duplicates: physics.stackexchange.com/q/17651/2451 , physics.stackexchange.com/q/32112/2451 , physics.stackexchange.com/q/33767/2451 , physics.stackexchange.com/q/56151/2451 and links therein.
2 Answers
First, the classical and semiclassical adjectives are not quite synonyms. "Semiclassical" means a treatment of a quantum system part of which is described classically, and another part quantum mechanically. Fields may be classical while particles' positions inside the fields are quantum mechanical; the metric field may be classical while the other matter fields are quantum mechanical, and so on.
Also, we often treat the "quantum part" of the semiclassical treatment in another approximation – where we take the leading classical behavior plus the first quantum correction only. For this part of the system, the "semiclassical" therefore means "one-loop approximation" (like in the WKB approximation).
Now, the laws of quantum mechanics may be shown to imply the laws of classical physics for all the "classical questions" whenever $\hbar\to 0$. More properly, $\hbar\to 0$ indeed means $J / \hbar \to \infty$ for all the normal "angular momenta" $J$, actions $S$ (instead of $J$), and everything else with the same units. So yes, indeed, the $\hbar\to 0$ classical limit and the limit of the large quantum numbers is the same thing. It is not kosher to ask whether a dimensionful quantity such as $\hbar$ is much smaller than one; whether the numerical value is small depends on the units. So we must make these claims about "very small" or "very large" dimensionless, and that's why we need not just $\hbar$ but also $J$ or $S$ of the actual problem, and that's why all the inequalities dictating the classical limit that you mentioned are equivalent.
In this limit, the spectra become so dense that the observables (such as the energy of the hydrogen atom) are effectively continuous even though they are discrete in the exact quantum treatment. The Heisenberg equations of motion for the operators reduce to the classical equations of motion. Decoherence guarantees that with some environment, the diagonal entries of the density matrix may be interpreted as classical probabilities, and the off-diagonal ones quickly go to zero. We may always imagine that the wave functions in this limit are "narrow packets" whose width is negligible and whose center moves according to the classical equations. It just works.
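A concrete numerical sketch of this equivalence for hydrogen (my own illustration, using standard Bohr-model formulas that are not in the answer itself): the photon frequency of the $n+1 \to n$ transition approaches the classical orbital frequency $2Rc/n^3$ as $n$ grows, which is Bohr's correspondence principle.

```python
Rc = 3.2898419603e15    # Rydberg constant times c, in Hz

for n in (1, 2, 5, 10, 100, 1000):
    nu_quantum = Rc * (1 / n**2 - 1 / (n + 1)**2)   # Bohr frequency condition
    nu_classical = 2 * Rc / n**3                    # orbital frequency, nth orbit
    print(f"n={n:5d}  quantum/classical = {nu_quantum / nu_classical:.6f}")
# The ratio tends to 1: for large quantum numbers the quantum spectrum
# reproduces the classical orbital motion.
```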
One should understand all aspects of this proof that "classical physics is a limit of quantum mechanics", what it assumes, how we should ask the questions and translate them from one formalism to another, and so on. But at the end, the fact that this statement holds is more important than some technical details of the proof.
Historically, the Hamilton-Jacobi equation is a way to describe classical physics because it was discovered and shown equivalent to classical physics long before the quantum theory was first encountered. Mathematically, you can see that the Hamilton-Jacobi equation only contains the quantities we may actually measure with classical apparatuses such as $S,t,V,m$ etc. and it doesn't depend on $\hbar$ at all – even if you use the SI units, for example – which proves that the equation is independent of quantum mechanics.
There are lots of things to say about the classical limit of quantum mechanics and some more specific classes of quantum mechanical theories, see e.g.
answered Oct 17, 2014 by Luboš Motl (10,278 points) [ no revision ]
So well written and answered...+1
This post imported from StackExchange Physics at 2015-11-01 21:07 (UTC), posted by SE-user user929304
There are results that are mathematically rigorous concerning the semiclassical limit of quantum theories. It is in fact an ongoing and interesting theme of research in mathematical physics. However you need to be rather well versed in analysis to understand the results. The bibliography is quite huge, but I would like to mention the following (some quite old) results:
Finite dimensional phase space (quantum mechanics):
• Hepp 1974 Coherent states method.
• Helffer, Martinez and Robert 1987, in French. Uses the so-called Wigner measure approach.
• Figalli, Ligabò, Paul 2010. Modern approach to Wigner measures, dealing with rough potentials.
Infinite dimensional phase space (bosonic QFT)
• Ginibre and Velo 1979 Extension of the work of Hepp to infinite dimensions.
• Ammari and Nier 2007 Infinite dimensional Wigner measures.
• BBGKY hierarchy: review by Golse (mean field limit, that is mathematically equivalent to the semiclassical limit)
Also, these slides of Francis Nier may prove useful for a quick overview of finite and infinite dimensional Wigner measures.
I will not try to explain the ideas, because it would be very technical and very long. Without being more precise, I can tell you that they investigate in a rigorous way the limit $\hslash\to 0$ (or in an equivalent way also when $N\to \infty$, where $N$ is the number of particles on the system), to prove that the linear unitary quantum dynamics reduces, in the limit, to the nonlinear classical dynamics. Sorry but I do not have time to say more than that ;-)
answered Oct 14, 2014 by yuggib (360 points) [ no revision ]
ad1fc39c2fa9e961 | On causality
Both camps have some good reasons on their side. It is true that the most basic equations in physics are time symmetric, so that causality doesn’t enter into them. But it is also unquestionably true that we have to somehow explain the arrow of time and the fact that things do very much appear to happen one after the other. While we move freely back and forth along the three spatial dimensions, we definitely don’t do that along the fourth, temporal, dimension.
Three possible solutions to this conundrum are: I) to say that causality is an “illusion,” part and parcel of the manifest image, but not really a scientifically viable concept; or II) to claim that causality somehow emerges from basic physics (whatever “emergence,” a philosophically controversial concept, means); or III) to argue that causality is fundamental and that there is something incomplete about quantum mechanics and general relativity, and that’s why it needs to be “added by hand,” so to speak, in order to describe how the world actually works.
It isn’t entirely clear what this view does with respect to causality, and it doesn’t seem to explain why we feel like time is something very different from space. Moreover, it doesn’t explain, say, the manifest image-level difference between causation and correlation. None of this means that the block universe concept of time/causality is wrong, but it does mean that there are serious pieces of the puzzle still missing.
Lee Smolin has a very different idea of time, and therefore of causality, as I have explained in detail in the past. For him quantum mechanics and relativity are indeed incomplete (on this everyone seems to agree, including string theorists, who vehemently reject Smolin’s approach), time is fundamental, and so is causality. Indeed, he goes as far as saying that the laws of nature emerge from the specifics of causal interactions at the fundamental level, not the other way around.
In philosophy too, causality has always been a messy business. Famously, according to David Hume, it is something we add onto our perception of the fabric of the universe, and that may not be inherent in it. As the excellent Internet Encyclopedia of Philosophy article on Hume and causality puts it: “Whenever we find A, we also find B, and we have a certainty that this conjunction will continue to happen. Once we realize that ‘A must bring about B’ is tantamount merely to ‘Due to their constant conjunction, we are psychologically certain that B will follow A,’ then we are left with a very weak notion of necessity. This tenuous grasp on causal efficacy helps give rise to the Problem of Induction — that we are not reasonably justified in making any inductive inference about the world.”
However, it is not at all clear whether Hume thought that this is all there is to causality, or rather simply all that an empiricist approach to causality allows us to say, and Hume scholars disagree on this point.
Modern philosophers have developed a number of different theories of causation (and of time), that attempt to take into account what we have learned from science, and particularly physics, and make sense of it. It’s not an easy task, to put it mildly.
One of my favorite modern ways of thinking about causality (though, of course, it has its critics and drawbacks) is the so-called conserved quantity theory of causation. Here are the two major versions, according to the Stanford Encyclopedia of Philosophy (if you keep reading that article, you will also see a number of standard objections raised against it, the proposed responses, etc.):
P. Dowe’s version (1995, p. 323):
CQ2. A causal process is a world line of an object which possesses a conserved quantity.
W. Salmon’s version:
Definition 1. A causal interaction is an intersection of world-lines that involves exchange of a conserved quantity.
Definition 2. A causal process is a world-line of an object that transmits a nonzero amount of a conserved quantity at each moment of its history (each spacetime point of its trajectory).
Definition 3. A process transmits a conserved quantity between A and B (A → B) if it possesses [a fixed amount of] this quantity at A and at B and at every stage of the process between A and B without any interactions in the open interval (A, B) that involve an exchange of that particular conserved quantity.
Here is a list of universally conserved properties in interactions between elementary particles:
• energy
• linear momentum
• angular momentum
• electric charge
• baryon number
• electron-muon-tauon number
• lepton number
All of this, of course, has profound implications for both science and philosophy, but also for the way we should think about the world, i.e., these considerations affect both our scientific and our manifest images of the world.
Recently, I've begun to think of causality as somewhat similar, in its manifestations, to physical forces, such as gravity. While gravity is universal, meaning that it acts at every point of the universe, so that in theory we are subject to the gravitational pull of every body in the cosmos that has mass, in practice we only need to be concerned with the gravitational effects induced by sufficiently massive bodies lying close enough to us. Our everyday life is affected by the gravity of Earth, the Moon, and the Sun, and little else. You need not worry about the gravitational pull of, say, the Andromeda galaxy because, even though it's huge, the thing is so far from us that its orbital period is billions of years, so that it has no measurable effect on your existence. You also don't need to concern yourself with the gravitational influence of people around you, because while they are nearby, their mass is just too small to do anything of consequence to you.
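To put rough numbers on this comparison, here is a back-of-the-envelope sketch; the masses and distances are coarse textbook figures, and the Andromeda estimate is especially rough. It computes the Newtonian acceleration g = GM/r^2 toward each body:

```python
# Rough Newtonian accelerations g = G*M/r^2 toward a few bodies.
# All masses and distances are approximate, for illustration only.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, distance in m)
    "Earth (at its surface)": (5.97e24, 6.37e6),
    "Moon":                   (7.35e22, 3.84e8),
    "Sun":                    (1.99e30, 1.50e11),
    "Andromeda galaxy":       (3e42,    2.4e22),  # ~1.5e12 solar masses, ~2.5 Mly
    "person one meter away":  (70.0,    1.0),
}

for name, (mass, dist) in bodies.items():
    g = G * mass / dist**2
    print(f"{name:24s} g = {g:.1e} m/s^2")
```

On these figures Andromeda's pull comes out around 10^-13 m/s^2, a few orders of magnitude below even that of a person standing next to you, and both are utterly negligible against Earth's ~10 m/s^2.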
Perhaps causality is like that: while it makes sense to think of cause and effect as a universal phenomenon, with everything connected to everything else, for any practical purpose we are free to take into account only local causal interactions, all the other ones being dampened or overridden so as to become irrelevant. It remains to be seen what such a view would do to radical metaphysical notions like universal determinism (and consequent reductionism), or to controversial ones such as top-down causation (and consequent anti-reductionism).
You would think that this is an obvious area of inquiry where scientists and philosophers should come together. It isn't, in my opinion, simply a matter of letting science tell us how things really stand. For one thing, I'm confident that a fundamental physicist, a non-fundamental one, a biologist, and a social scientist would have very different views of what "science tells us" (indeed, as I mentioned above, even fundamental physicists vehemently disagree among themselves, so…).
Nor, of course, is it a question of calling the philosophical cavalry to explain to the naive scientists how they ought to think about the matter. That would be presumptuous to the level of silliness.
But why isn’t the question of time, or that of causality, a straightforward scientific issue? Why do we need philosophers to begin with?
One answer would be because philosophers have spent literally centuries thinking about these issues, much more so than scientists, and so there is likely something to learn from the best proposals they have put forward so far.
But that's not actually it, or at the least, it isn't the whole story. I think time and causality are a perfect example of the power of "sciphi," if you will, because the issue isn't just one of discovering facts about time and causality, it is to develop an understanding of these concepts that allows us to keep pursuing Sellars' overarching objective: "to formulate a scientifically oriented, naturalistic realism which would 'save the appearances'" [1]. The more I think about it, the more it seems to me that the (or at the least a major) goal of philosophy is precisely to articulate a mapping function that connects the scientific image — which only science can provide us — with the manifest image, which we simply cannot do without as cognitively limited biological creatures of a certain kind (and that includes scientists, obviously).
[1] “Autobiographical Reflections (February 1973),” p. 289 in Action, Knowledge, and Reality: Studies in Honor of Wilfrid Sellars, H-N. Castañeda (ed.), Indianapolis: Bobbs-Merrill, 1975: 277-93.
125 thoughts on “On causality”
1. Coel
Hi Massimo,
… as far as I understand … the principle [2nd law] doesn’t emerge naturally from QM, which means that the arrow of time has to be added after the fact, so to speak, to the fundamental equations.
Let’s take synred’s scenario of gas molecules in a box, starting with them all in one corner. The fundamental equations of QM are deterministic and time-symmetric. Compute them forward, and the molecules spread out. That is 2nd-law behaviour.
Now, if we kept the deterministic equations, reversed the direction of time, and started from the exact ending points from above, then the molecules would all gather in the corner again. That is anti-2nd-law behaviour and is contrary to how the universe works.
But, now add in some indeterminacy, some dice throwing associated with the “collapse of the wavefunction” or whatever is actually going on. That dice throwing scrambles the effect of the initial conditions, and the gas molecules will again demonstrate 2nd-law behaviour regardless of starting points (since, by definition, dice-throwing indeterminacy is probabilistic, and the 2nd law is probabilistic).
So, to summarise, QM without any indeterminacy does not give rise to the 2nd law; but QM with added dice-throwing indeterminacy does. That’s why I argued in my first comment above that the arrow-of-time behaviour comes from indeterminacy associated with the “collapse of the wavefunction”. The problem is, of course, that this aspect of QM is simply not understood.
Thus physicists such as Sean Carroll will argue for a completely deterministic version of QM, as given by Everettian Many Worlds, and thus land themselves with an arrow-of-time problem.
Now, obviously, when it comes to fundamental physics people should listen to Sean Carroll far more than myself, but I’m still sticking to my version (and Everettian Many Worlds is a minority taste among physicists, though espoused by some big names). I also tried to persuade (the sadly missed) vmarko of this several times, though failed.
2. synred
Hello Massimo:
“I’m aware of that, but as far as I understand (please correct me if I’m wrong, oh physicists!) the principle doesn’t emerge naturally from QM, which means that the arrow of time has to be added after the fact, so to speak, to the fundamental equations. So the problem does exist”
Thermodynamics is my favorite example of 'emergence'. Conceptually it emerges from the statistics of large numbers. Thinking in terms of simulation, if you set up a QM problem with a large number of particles/states and run it, thermodynamic behavior will happen. To understand why it happens you have to add the concepts of the statistics of large numbers. It is the one case where we can understand 'emergence' in detail.
Storing a memory, whether in a chip or a brain, involves an 'irreversible' change (so many interactions would have to be changed and so many momenta flipped to reverse it that it's for all practical purposes impossible). Hence, one very rarely remembers the future.
It's like my particles in the corner example. If I watch them through time they will spread out and fill the box. If I reverse time (which is just reversing all their momenta) they will also spread out and fill the box. This is just because there are a lot more ways to fill the box than to be in a corner. The chances that they will reassemble in a corner are nil, if the number of particles is large. You can't derive this from staring at the equations — you have to count the states in the specified situation. The equations are time symmetric, the solutions are not.
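To get a feel for the numbers behind this state counting, here is a minimal sketch; for illustration it takes the "corner" to be one octant of the box and treats the particle positions as independent and uniform:

```python
# P(all N particles in one corner octant) = (1/8)^N for independent,
# uniformly distributed particles. Print its log10, since the raw
# probability underflows almost immediately.
from math import log10

for n in [10, 100, 1000, 6.022e23]:   # the last is ~a mole of particles
    logp = n * log10(1 / 8)
    print(f"N = {n:.3g}: P(all in corner) ~ 10^{logp:.3g}")
```

Already at a thousand particles the probability is of order 10^-903; for a mole of gas it is 10^(-5×10^23), which is "nil" by any practical standard.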
Time is not emergent (it's there in field theory or classical mechanics), but its direction is.
3. synred
I don't think 'many worlds' has a serious arrow of time problem. To get the particles back in the corner you have to reverse time in all the worlds, not just yours. It's hard enough to do that in one world, so even classical mechanics has no practical arrow of time problem. In many worlds it would seem unlikely that somebody would decide to reverse time in all of 'em at the same time. It would be difficult for them to synchronize their watches. Should I decide to reverse time at a certain time by my watch in whatever world I find myself in, then, since everything that can happen does happen, I would find myself unable to in at least some worlds (in some worlds my finger tunneled into the keyboard as I wrote up the experimental protocol, and in others while I searched Google for ideas on how to do the time reversal), so it's not going to work. I need to reverse all the worlds to get back in the corner.
LOL )=
4. Coel
Hi synred,
Well I don’t claim to understand Everettian many-worlds QM, but:
I gather (though am open to correction) that in EQM reversing the arrow of time would reverse time in all the worlds. After all, the whole point of EQM is that it's all one big wave function (with terms decohered and thus added with "+" rather than being entangled), is it not?
Thus, reversing the arrow of time would cause the worlds to “merge” (opposite of “split”), and thus the gas molecules would indeed all end up back in the corner. Therefore you do have a 2nd-law problem. Carroll then solves that by appealing to special initial conditions (i.e. the 2nd-law is then a peculiarity of our universe, resulting from the initial conditions, whereas another universe with different initial conditions could exhibit anti-2nd-law behaviour).
Personally I much prefer an indeterministic account of QM, which then exhibits the 2nd law always and naturally.
5. Disagreeable Me (@Disagreeable_I)
Hi Massimo,
To [m]e mathematics describes reality, it is not reality.
Right. And we agree that mathematics can describe reality, by modelling aspects of reality as mathematical objects and using the tools and syntax of mathematics to describe those objects. In other words, we pretend that reality is a mathematical object because this is useful. But just because this is assumed to be a useful fiction doesn't rule out the possibility that reality actually is a mathematical object. Some mathematical objects do not correspond very closely to anything in reality. This doesn't stop mathematicians describing those objects too. We shouldn't confuse the syntax mathematicians use to describe mathematics (a language) with what is described.
It [the 2nd law] emerges naturally from almost any rule-based system where state changes bit by bit over time. Let's take a simple example. Suppose you have a million coins, and the rule of the system is that every clock tick you flip an arbitrary coin. The coin doesn't have to be chosen truly randomly — you could use a deterministic pseudorandom sequence like the digits of Pi to guide your selection. All that matters really is that the rules governing the choice be neutral with respect to entropy — you're not deliberately trying to maximise or minimise entropy.
Low entropy would be a remarkable state we would be very unlikely to reach by chance, e.g. where all coins are heads or all coins are tails. High entropy would be the state we would expect to see after the laws have been at work for a long time, e.g. where the coins are split about 50/50. If you start with all coins showing heads, we will observe a tendency for entropy to increase with each clock tick. This will continue until you reach equilibrium and we reach a state where the coins are split approximately 50/50. The initial state of all heads corresponds to the Big Bang and equilibrium corresponds to the heat death of the universe.
The 2nd law of thermodynamics is an inevitable consequence of the fact that the Big Bang was a moment of low entropy, and then the laws of physics proceeded to mess it up (flipping coins). There’s nothing particular to QM to explain this or in need of reconciliation with this. The same would hold in a world without QM, and the same would hold in most worlds with even radically different laws of physics. All you need is an initial low entropy state and then a set of laws which can mess this state about. Almost any laws will do.
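Here is a minimal sketch of the coin model just described. A seeded pseudorandom generator stands in for the digits-of-pi selector; per the argument above, any entropy-neutral selection rule should behave the same way:

```python
import random

N_COINS = 100_000
TICKS = 500_000
rng = random.Random(0)            # deterministic pseudorandom selector

coins = [1] * N_COINS             # 1 = heads; the low-entropy "Big Bang" state
heads = N_COINS

for t in range(1, TICKS + 1):
    i = rng.randrange(N_COINS)    # pick a coin, neutrally w.r.t. entropy
    heads -= coins[i]
    coins[i] ^= 1                 # flip it
    heads += coins[i]
    if t % 100_000 == 0:
        print(f"tick {t:>7}: fraction heads = {heads / N_COINS:.3f}")
```

The fraction of heads decays from 1.0 toward the 50/50 equilibrium and then just fluctuates around it: entropy increase with no quantum mechanics anywhere in sight.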
I have disagreed with Coel on this stuff. Coel thinks that the irreducible randomness of QM is necessary to explain the second law of thermodynamics, but I think he’s dead wrong on this. I don’t think randomness or QM has much to do with it.
None of which, even though likely true, sounds to me like even the beginning of an explanation for why we perceive time so qualitatively different from space.
Physics does not treat time as interchangeable with space. Time and space have different properties so it is not surprising that we perceive them differently.
What is actually happening when we move in space and time passes?
Strictly speaking, nothing is “happening”. Everything is static. But there are points in spacetime where we think we’re in one place and then “later” points in space time where we think we are somewhere else. We perceive this as having moved from one place to another. It’s like a video. The video is just a thing. It isn’t changing. But the video is made up of an ordered series of events. If we deem some events to be “later” than other events, we can interpret things as moving, even though there isn’t really any motion if we consider the video holistically as a single artifact.
What does it mean for something physical to be mathematically necessary?
Suppose the laws of physics are such and such and suppose that the state of the world at time t is such and such. Given this state and these laws of physics, we can do mathematics to show that the state of the world at time t+1 is determined. It is mathematically necessary.
But only given these laws and this initial state. Neither of these are mathematically necessary, so nothing physical is ultimately mathematically necessary. Necessity only applies given the givens!
Why is it that QM, contra its Newtonian counterpart, doesn’t say that one billiard ball hitting another is necessary at all, but just highly probable?
Because the laws are different. So we don’t have absolute necessity, but merely very high probability. I gave a mathematical example of this too with how 90% of numbers divide by ten the same number of times as their successors.
6. Coel
Hi DM,
I think that non-deterministic dice-throwing is necessary to guarantee 2nd-law behaviour. You could indeed get 2nd-law behaviour with deterministic laws by specifying your initial conditions to get that result. Equally, you could specify initial conditions to get anti-2nd-law behaviour.
7. synred
I prefer indeterministic QM too. Indeed Everett can be reversed formally. Not likely to happen though, so observing an arrow of time doesn’t rule it out (which I would like to do).
I also recently learned that 'many worlds' cannot derive the Born rule, so they have to add it as an additional assumption, just like 'Copenhagen'. That reduces its appeal even more to me. It doesn't even have a vague 'Occam' edge. In my 'many worlds' simulation I have to add 'worlds' rather than 'collapse' them (or actually never make 'em) to get the Born rule to come out.
Decoherence is nice work inspired by ‘many worlds’ but it applies equally to ‘Copenhagen-like’ interpretations.
I was trying something pretty crazy: using a two-time theory (Mueller from Duke) to try to collapse the wave function (or more like cancel it out). I couldn't figure out how to make it work (which doesn't mean much one way or the other). I even managed to write down and solve the Dirac eqn. in the two-times formalism, but couldn't figure out what to do with it. The idea was that measurement doesn't 'cause' collapse, but collapse causes measurement. I didn't get anywhere.
Though it does bring up the issue of what cause means in QM measurement theory. While we typically say measurement causes the collapse, since it’s non-local the time-order is not defined and it’s not clear what that means (though it gets the right answer).
8. synred
You don't need non-deterministic dice throwing to explain our observations. Over very long periods of time entropy may occasionally (very occasionally) go down even with indeterminate dice. The second law is not absolute, though very nearly so…
The only thing indeterminism buys you is that you can't reverse all the particles and get them to go back in the corner reliably. It could still happen (if you could pull off the reversal — lots of Maxwell's demons pushing particles in sync [a]), it just makes it less likely to work.
Which is not to say I don't think QM is indeterminate; it's just that indeterminism is not needed to explain the arrow-of-time observation. It is needed to compute the probability of Schrödinger's cat dying in a given interval. [b]
[a] I'm guessing the demons' entropy would go up quite a bit in order to carry out this feat.
[b] “Schrodinger’s Cat and the Law” here: https://skepticalsciencereviews.wordpress.com/story-land/
9. couvent2104
But the Schrödinger equation is perfectly deterministic. It’s only when we observe a system that there’s a non-deterministic aspect.
(There’s also a statistical aspect when we describe a system about which we have incomplete information with a density operator, but that’s not the same as the non-deterministic behaviour of a system when it’s observed. The evolution of the density operator is governed by a deterministic law.)
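That split, deterministic evolution of the state but indeterministic outcomes upon observation, is easy to see in a toy two-level system; a minimal sketch, not tied to any particular interpretation:

```python
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # toy two-level Hamiltonian (sigma_x)

def evolve(psi, t, hbar=1.0):
    """Deterministic: psi(t) = exp(-iHt/hbar) psi(0), the same every run."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
    return U @ psi

psi0 = np.array([1.0 + 0j, 0.0])         # start in |0>
psi_t = evolve(psi0, t=0.7)

p = np.abs(psi_t) ** 2                   # Born rule probabilities
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10, p=p)
print("probabilities:", p)               # deterministic numbers
print("measured outcomes:", outcomes)    # indeterministic draws
```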
How do observations cause or explain the second law?
Another point: even in special relativity time has a special status. The whole idea of SR is more or less that physical laws should be invariant under transformations that keep the expression
$c^2t^2 - x^2 - y^2 - z^2$
invariant. Exchanging x and y (the spatial coordinates) keeps this expression invariant, but exchanging t and a spatial coordinate does not. The Lorentz transformations mix time with the spatial coordinates, but time always keeps its distinct status.
(This is because the "light cone" – which is important to define causality, at least in the context of SR – must stay invariant under transformations. The light cone basically is the geometric figure defined by setting the expression above to zero, if I remember correctly.)
I think it's physically dubious to treat the "time coordinate" as if it's equivalent to a "spatial coordinate". That's one of the reasons I'm suspicious of a block universe.
10. synred
Calling the universe static when time is only defined within it seems like an oxymoron. W/o time there is no static.
Having said that, you certainly can think about the universe as laid out in time. Whether that has 'ontological' status or not you can argue with Disagreeable (and Tegmark) about.
11. brodix
Causality is conservation of energy, not sequence/time.
Kicking a ball is a transfer of the energy from your foot and so that action ceases to exist, once the energy is transferred to the ball. So no block time, because the energy is conserved and doesn’t continue to exist as the sequence of events, only the one being manifest by the energy.
Time, on the other hand, is sequence and there might be little effective transfer of energy from one event in a sequence, to the next. For instance, yesterday doesn’t directly cause today, because the energy isn’t directly transferred. Though the energy radiated into the atmosphere yesterday would be causing some weather patterns of today. The sun shining on a spinning planet causes this effect we experience as a sequence of days.
While we experience time as a sequence of events and so think of it as the point of the present moving along this "fourth dimension," the reality is that change is creating and dissolving those events, transferring the energy through various relationships, as the energy coming together to form one event will then radiate away in multiple directions, becoming input into myriad other events. Thus no single trail of temporal causality.
It is just that our mental processes function narratively and so we have to make sense of these flashes of perception, even though the primary relationship between one and the next may be just in our mental processing. Thus we tend to equate time with motion in space, as it is our point of view in space, that is the primary organizing principle of the relationships between these events we experience.
While the block time model is described in terms of a movie, or book, if you think how they are consumed, it is these events flowing past the observer, from future to past, as the observer is tantamount to the present. So the observer goes from past to future events, while the events themselves, go from future to past.
We could take this out to any scale. For instance, our individual lives, as events, go from being in the future, when we are born, to being in the past, when we die. While the species, as the present manifestation, moves onto the next generation, shedding the old.
So time is asymmetric due to the inherent inertia of energy. It is only that physics treats it as some underlying dimension, which the measure of duration supposedly exposes. So the duration of a cup falling from a table would be the same as it jumping back up on the table, under the assumption of time as that foundational dimension. Yet duration is entirely the state of the present, as these events occur, and not external to it.
If we think of time as an effect of action, i.e. frequency, then it is more like temperature, which is an effect of frequency and amplitude, than space.
This would make thermodynamic circularity more elemental than temporal linearity. High pressure pushes, just like the causal transfer of energy, while low pressure directs where this energy goes.
The past is where the energy comes from, while the future is where the energy goes.
As for those billiard balls, what about gravity? Doesn’t that pull them into the center, where they break down and radiate the energy back out?
12. Alan White
Very good post, but at a bad time when I can’t devote much effort to run through all the comments. I’m interested because I’m under contract to write about determinism for an anthology. So to cut to the chase: do you think that the complementarity between energy and time is at all revealing to the basis of causality? I do tend to agree about Salmon’s account(s) of causality BTW.
13. synred
One thing that has not been picked up on is ‘the collapse of the wave function’ which in standard QM interpretation is thought of as being ‘caused’ by a measurement. However, as it’s a non-local effect the time order is ill defined.
So how does Philosophy deal with that? Physics doesn’t do well unless you go for ‘many worlds’ which dodges the issue but has its own problems. Or you can take the ‘shut up and calculate’ approach and just not think about it though the non-local effects have been established experimentally by Alain Aspect and others with Bell inequality violations.
14. jkubie
I think you are making this too complicated. Prior to quantum physics, causes equalled forces. That's it. Newton's laws. If something changes its trajectory, it is because a force is applied. The force is the "cause" of change. All changes are due to forces. Or am I missing something? I don't think you need to deal with conservation of energy, etc. Quantum mechanics changes all this, as you note, and some physicists don't believe forces, as we think about them, exist; they are only there to get equations to work (it's all fields, etc.). (A book I found very helpful is Max Jammer's "Concepts of Force".) (Apologies for the typing. Hunt and peck on my iPad.)
15. brodix
“So how does Philosophy deal with that?”
Isn’t the “map” the measurement?
“Well Field Theory is just a theory, but then so is evolution by natural selection. FT is a way reality might be. We’ll likely never know the ‘Ding an sich’.”
16. SocraticGadfly
Jkubie, for physicists, maybe, but not for philosophers since Hume.
Re QM, as I’ve said here before, I support the ensemble interpretation precisely because it is minimalist:
Or, perhaps more accurately, some halfway point between it and objective-collapse theories.
17. synred
If you mean by ensemble interpretation what I understand as an ensemble probability, it doesn't work. Bell would not be violated (I think).
18. SocraticGadfly
Massimo, the Stanford bio on Sellars talks about how he dealt with various Humean ideas, including somewhat the problem of induction. But — and I know they were very much contemporaries — it doesn’t discuss at all how he dealt with Goodman’s new problem of induction, or if he even considered it something that needed dealing with. More info?
19. synred
Yeah, I found the Wiki. I don't buy it. It seems to be implicitly a hidden variables theory without specifying the variables. I'd ask Dieter Zeh (Mr. Decoherence), but I already pestered him about 'many worlds' and the Born [a] rule this week and don't want to wear out my welcome. Non-local effects have been measured and are even being used in quantum encryption (if Bell isn't violated, you're being spied on, or at least could be).
[a] It's my opinion that 'many worlds' does not naturally yield the Born probability rule and, thus, has the same problem as 'Copenhagen' in that they have to assume it. Maybe worse, as the simplest 'world counting' in many worlds contradicts the Born rule. If you just count worlds, 'worlds' in which my finger tunnels into the keyboard are no more uncommon than normal worlds (though any particular world is uncommon, as there are so many of them). Awful things would be happening all the time. Having a small amplitude/wave function norm does not mean that world does not exist. So far tunneling hasn't happened to me or anyone I've heard of. I think we would notice.
20. Robin Herbert
Hi synred,
The same thought has occurred to me.
I also have a problem with the idea that you can derive the Born Rule from epistemic probability, but that is a little more difficult to put.
21. synred
I think the ensemble interpretation is inconsistent with Field Theory. In field theory there are no particles in the usual sense. There are quantized excitations of the fields.
That’s what we call particles. There is no wave-particle duality — all is waves/excitations. That wave goes through both slits. The waves tend to clump in ‘packets’ — this is what looks like particles to us.
If you just turn the crank, the FT will clump and decohere and you get many worlds very naturally, but you don't get the Born rule.
Then there's no mystery of why a 'particle' can be in two places at once in FT. The excitation just happens to have a large amplitude at two well separated places. The mystery is how a spread out excitation comes to be in only one place. Decoherence can make it clump in a few or many places, but how does only one survive, and how does the energy in the field, spread out all over or in a couple of spots, suddenly concentrate in one when a measurement occurs? 'Many worlds' explains this, but not the Born probability rule, which is the key to the experimental success of QM. My opinion is that 'many worlds' advocates sweep this under the rug.
Field theory explains the profound identity of 'particles', i.e., they are waves, and even the Pauli exclusion rule (a spin 1/2 Dirac field can only have one excitation in each state). Without Pauli we would not be here, but we don't need the weak anthropic principle (WAP) to explain it. It is almost a geometrical property of a spin 1/2 field. So while WAP might be needed to 'explain' some detailed constant choice, FT provides the basic structure of states and forces from first principles, just as Albert would have liked.
A lovely book and relatively easy to understand, e.g., that mattress analogy.
22. Robin Herbert
I have a little program which is just dots moving around the screen and bumping into each other. It would be a stretch even to call it a simple model of ideal gas but it is quite good for thinking about some of these issues.
So if I have a box with faster moving particles and another box with slower moving particles and a gap between them, then the average speed of the dots in the slower moving box will increase and the average speed of the dots in the faster moving box will decrease until they are about the same.
This is at least basic 2nd law behaviour without any randomness.
There are arrangements from which the average speed of the faster moving box increases and the average speed of the slower moving box decreases. But if I had one of these arrangements and then changed just one dimension of just one vector by the smallest amount that I could possibly change it, then I would just get a system which goes to equilibrium. That gives a feel for just how rare those arrangements are in which entropy decreases without any outside influence, and why we never find them.
23. synred
It'd be easy enough to write a little ideal gas simulation. Putting in just random directions would result in them spreading out throughout the box even w/o any interactions or coming to thermal equilibrium. Of course if you make 'em all go the same direction you'll get some pretty odd behavior.
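A minimal version of that simulation, free streaming only, no interactions, reflecting walls, with all parameters illustrative, is sketched below; it confirms that random directions alone empty the corner:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
pos = rng.uniform(0.0, 0.1, size=(N, 2))        # all start near corner (0, 0)
theta = rng.uniform(0.0, 2 * np.pi, size=N)     # random directions
vel = 0.02 * np.column_stack([np.cos(theta), np.sin(theta)])

for step in range(201):
    if step % 50 == 0:
        frac = np.mean(np.all(pos < 0.5, axis=1))   # lower-left quarter
        print(f"step {step:3d}: fraction in corner quarter = {frac:.2f}")
    pos += vel
    for axis in range(2):                        # reflect off unit-box walls
        over = pos[:, axis] > 1.0
        under = pos[:, axis] < 0.0
        pos[over, axis] = 2.0 - pos[over, axis]
        pos[under, axis] = -pos[under, axis]
        vel[over | under, axis] *= -1.0
```

The fraction in the corner quarter drops from 1.0 toward the equilibrium value of about 0.25 and stays there.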
24. Robin Herbert
But if you have interactions then you can start with them all in the same direction, apart from one, and you get them spreading out. And you get an equilibrium when one box has faster moving particles and the other has slower moving particles and there is a hole in the barrier between them.
You can write a little differential equation to describe the rate at which the box “cools”.
Interestingly, if you decide to try to get reverse 2LT behaviour by reversing time just as they are half way towards equilibrium, it doesn't work – it just carries on to equilibrium without a blip. I suppose that is the rounding error of using a "double" type value.
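For what it's worth, a rough time-stepped sketch of that reversal experiment reproduces the failure mode: with a naive fixed-step collision rule and double-precision arithmetic, reversing all velocities does not retrace the trajectory. The particle count, radii, and step size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N, R, DT, STEPS = 50, 0.02, 0.01, 500
pos = rng.uniform(0.1, 0.9, size=(N, 2))
vel = rng.normal(0.0, 0.1, size=(N, 2))
pos0 = pos.copy()

def step(pos, vel):
    pos += vel * DT
    # crude equal-mass "collision": when two disks overlap while
    # approaching, exchange velocity components along the line of centers
    for i in range(N):
        for j in range(i + 1, N):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if 0.0 < dist < 2 * R:
                n = d / dist
                dv = np.dot(vel[j] - vel[i], n)
                if dv < 0:
                    vel[i] += dv * n
                    vel[j] -= dv * n
    for axis in range(2):                        # reflecting walls
        hit = (pos[:, axis] < R) | (pos[:, axis] > 1 - R)
        vel[hit, axis] *= -1.0

for _ in range(STEPS):
    step(pos, vel)
vel *= -1.0                                      # "reverse time"
for _ in range(STEPS):
    step(pos, vel)

# With collisions this typically lands far above the ~1e-16 round-off
# floor: the system does not find its way back to the initial state.
print("max displacement from start:", np.abs(pos - pos0).max())
```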
25. Coel
Hi jkubie,
Even in classical mechanics “forces” are not things, they are not ontological entities, they are more of an accounting device. Thus they don’t answer the basic question of causality in the sense of event/thing A causing event/thing B.
While Newton’s gravity law prescribes a force between two masses, it doesn’t answer any questions along the lines of what actually is a “force”, what’s it made of, how does it work, how does one mass know about the existence of the other, and what, if anything, is traveling between the two masses?
Quantum field theory does answer those questions, but then you get into the whole issue of causality within quantum mechanics.
f8528ab0eaaa39ad | Academic Calendar
Unless stated otherwise, the minimum grade acceptable in all course prerequisites is a C-.
English language proficiency requirements
Please note that not all courses are offered every semester.
PHYS 083
3 credits
Adult Basic Education (ABE) Advanced Physics
Pre- or corequisite(s): One of the following: MATH 084, MATH 085, Principles of Mathematics 11 or 12, Applications of Mathematics 11 or 12, Foundations of Mathematics 11 or 12, Pre-calculus 11 or 12, Apprenticeship and Workplace Math 11 or 12, Workplace Math 11, or Apprenticeship Math 12.
Note: Students with other Mathematics 11 or 12 courses, or who are currently enrolled in a Mathematics 11 course, may contact the instructor to request permission to register.
A university preparatory course equivalent to Physics 11. Introduces concepts of measurement, kinematics, dynamics, electricity, heat, waves, and optics.
Note: Students with credit for PHYS 083 cannot take PHYS 100 for further credit.
PHYS 093
3 credits
Provincial-Level Physics
Prerequisite(s): (One of Applications of Mathematics 11, Principles of Mathematics 11, Pre-Calculus 11, Foundations of Mathematics 11, MATH 084, or MATH 085) and (one of Physics 11, PHYS 083, or PHYS 100).
This university preparatory course, which is equivalent to B.C.'s high school Physics 12 course, covers mechanics, electrostatics, electromagnetism, and waves and optics.
PHYS 100
4 credits
Introductory Physics I
Prerequisite(s): One of the following: (C+ or better in MATH 085), (B or better in one of Principles of Mathematics 11 or Pre-calculus 11), Calculus 12, Apprenticeship Math 12, Principles of Mathematics 12, Pre-calculus 12, MATH 092, MATH 094, MATH 096, MATH 140, COMP 138, or Upgrading and University Preparation Assessment.
Note: One of MATH 093 or MATH 096 is recommended, if not taken previously.
Covers kinematics and dynamics (Newton’s laws), conservation of energy and momentum, wave motion, geometric optics, introductory special relativity, and nuclear reactions.
Note: PHYS 100 has been designed for students who have not taken Physics 11 but who have a strong background in mathematics. PHYS 100 is intended as a superior substitute for Physics 11 with regard to meeting prerequisites and satisfying program requirements.
Note: Students with credit for PHYS 083 cannot take this course for further credit.
PHYS 101
5 credits
Introductory General Physics: Mechanics and Fluids
Prerequisite(s): One of the following: (one of [Principles of Mathematics 12, Pre-calculus 12, MATH 093, MATH 095, MATH 096] and one of [Physics 11, PHYS 083, or PHYS 100]), Physics 12, or PHYS 093.
This introductory non-calculus physics course covers Newtonian mechanics; motion, momentum and energy of particles, rigid rotating bodies, and fluids.
Note: PHYS 111 is the entry course for upper-level physics. Students with credit for PHYS 111 cannot take PHYS 101 for further credit.
Note: Because of the overlap in course material, MATH 111 students should take PHYS 111 instead of PHYS 101.
PHYS 105
5 credits
Heat, Waves, and Optics
Prerequisite(s): (One of [Principles of Mathematics 12, Pre-Calculus 12, MATH 093, MATH 095, MATH 096, or MATH 110] and one of [Physics 11, PHYS 083, or PHYS 100]) or (one of Physics 12, PHYS 093, PHYS 101, or PHYS 111).
An introductory non-calculus physics course covering electric circuits, waves, geometric and wave optics, and thermodynamics.
PHYS 111
5 credits
Prerequisite(s): One of (Principles of Mathematics 12, Pre-calculus 12, MATH 095, or MATH 110) and one of (Physics 11, PHYS 083, or PHYS 100); or Physics 12; or PHYS 093.
Pre- or corequisite(s): MATH 111 highly recommended.
Note: MATH 111 with a C or better and MATH 112 are required pre- or corequisites for PHYS 112.
This course is intended for students who are planning to study engineering science or life sciences. Topics covered include vectors, kinematics, dynamics, work and energy, collisions, rotational kinematics, rotational dynamics, simple harmonic motion, and gravitation. The object is to understand the fundamental laws of mechanics, to learn how to apply the theory to solve related problems, and to develop a feeling for the order of magnitude of physical quantities in real experiments.
Note: Students cannot take PHYS 100 or PHYS 101 for further credit.
PHYS 112
5 credits
Electricity and Magnetism
Prerequisite(s): MATH 111 and one of (PHYS 111, PHYS 105 with a B, or PHYS 101 with a B+).
Pre- or corequisite(s): One of MATH 112 or MATH 118. Note: The Physics department will waive this requirement for students with an A in PHYS 111.
This course follows PHYS 111 and is designed for students who are planning to continue their studies in physics or any of the other sciences. Topics include electric fields, Gauss's law, electric potential, circuits, Kirchhoff's laws, magnetic fields, magnetic induction, and finally, a study of Maxwell's equations. The laboratory portion of the course uses experiments to reinforce the theory covered in class.
PHYS 221
4 credits
Intermediate Mechanics
Prerequisite(s): (PHYS 111 and PHYS 112) or (PHYS 101 and PHYS 105 with a B+ or higher in each)
Pre- or corequisite(s): MATH 211
This course extends the topics covered in PHYS 111. Topics covered include kinematics, motion in polar coordinates, Newton's laws, momentum, work, some mathematical aspects of physics and vector analysis (gradient, divergence, curl, Stokes' theorem and Gauss's law), angular momentum, forced and damped harmonic motion, central forces, and Lagrangian mechanics. The laboratory portion of the course includes experiments designed to supplement the theory covered in class.
PHYS 225
3 credits
Waves and Introductory Optics
Prerequisite(s): PHYS 221
Corequisite(s): PHYS 381 recommended
This course builds upon the foundations of mechanics presented in PHYS 221 by extending oscillatory motion from single point masses to continuous bodies. In particular, the course will introduce students to both longitudinal and transverse waves via the wave equation, and describe how energy can be transported through distortions of a continuous medium (like sound waves in air). Properties specific to waves, like superposition and interference, will also be investigated, and will see application in effects like wave diffraction. As light can be considered to be an electromagnetic wave, students will be able to apply these concepts to the study of optics (Huygens' principle), and look at simple optical processes like reflection and refraction from mirrors and lenses. Lastly, the concept of matter waves and quantum theory using the de Broglie hypothesis will be introduced, which will set the stage for the study of Quantum Mechanics in PHYS 351. A small number of experiments will be performed in order to quantify many of the concepts studied.
PHYS 231
3 credits
Prerequisite(s): PHYS 112
Pre- or corequisite(s): MATH 211
Pressure, temperature, kinetic theory, and the Maxwell velocity distribution; heat, work, and the first law; heat capacities, equations of state, and exact and inexact differentials; isothermal, isobaric, isochoric and adiabatic processes, heat engines, phase diagrams, and thermodynamic cycles; thermal expansion, conductive, convective and radiative heat losses, entropy, and the second law.
PHYS 232
3 credits
Experimental Methods in Physics
Prerequisite(s): PHYS 112.
Pre- or corequisite(s): PHYS 221.
An introduction to the techniques involved in designing a physics experiment. There is an emphasis on electric circuits and electrical measurements, but practical methodologies useful in all experimental physics courses are developed.
PHYS 275
1 credit
Survey of Medical Physics
Prerequisite(s): One of the following: BIO 111, PHYS 105, or PHYS 112.
Overview of the field of Medical Physics, describing the different types of diseases, treatments, and research specialties that Medical Physicists are involved with, job prospects and salary, and the training required for a starting position and for advancement.
Note: Field trips will be required.
Note: Students with credit for PHYS 175 cannot take this course for further credit.
PHYS 311
3 credits
Statistical Physics
Prerequisite(s): (PHYS 231) and (one of PHYS 221 or PHYS 381).
Basic statistics and statistical distributions (Binomial, Gaussian, and Poisson); statistical description of particle interactions and equilibrium, phase space, and the number of microstates; microcanonical, canonical, and grand canonical distributions; partition functions, entropy, and the Boltzmann factor; quantum statistics, Fermi-Dirac, and Bose-Einstein systems.
PHYS 312
3 credits
Intermediate Electromagnetism
Prerequisite(s): PHYS 112 and PHYS 381.
Pre- or corequisite(s): MATH 312 recommended
An introduction to vector calculus; electrostatics and magnetostatics, both in vacuum and in materials; and time-dependent electric and magnetic fields including Faraday's law, displacement current, and Maxwell's equations.
PHYS 321
3 credits
Advanced Mechanics
Prerequisite(s): PHYS 221.
Pre- or corequisite(s): PHYS 381.
Motion in non-inertial reference frames, calculus of variations and Lagrange's equations with and without constraints, Hamilton's equations, rotational moment of inertia, motion of rigid bodies in three dimensions, the symmetric top.
PHYS 325
3 credits
Fluid Mechanics
Prerequisite(s): PHYS 221.
Pre- or corequisite(s): PHYS 381.
Fluid mechanics is an important yet often under-appreciated and neglected aspect of physics; an understanding of how fluids behave is important in a diversity of subjects from Astrophysics (stars and planetary bodies) to Microbiology (fluid flow into and out of cells). This course will introduce students to the subject of fluid mechanics from the basic principles of Archimedes and Bernoulli, to the more complex aspects of vortices and streamlines. An emphasis will be placed on the vector description of fluid behaviour, which will necessitate a brief introduction to Cartesian tensors.
PHYS 351
3 credits
Quantum Mechanics
Prerequisite(s): PHYS 225.
Pre- or corequisite(s): PHYS 381.
Quantum theory and the Planck-deBroglie hypotheses, wave-particle duality, uncertainty principle; operators and the Schrödinger equation, statistical interpretation of the wavefunction, solutions for simple one dimensional potentials; position, momentum and energy representations, Dirac Bra-Ket notation, Hilbert space; Coulomb potential and the hydrogen atom, angular momentum and spin.
PHYS 352
3 credits
Special Relativity and Classical Fields
Prerequisite(s): PHYS 221 and PHYS 381
Originally devised by Einstein as a way to explain the electrodynamics of moving bodies, Special Relativity has now become an integral part of our understanding of fundamental physics at all levels. Armed with only the concept of invariance of physical laws under certain transformations, students will discover that length and time are no longer absolutes, but depend on the relative velocity of observers. The three dimensions of space and one dimension of time now become part of a larger structure, which is four-dimensional space-time. In order to understand how objects behave in space-time, students will be introduced to the mathematics of tensors, where they will find the more familiar vectors and scalars as special cases of these mathematical objects. The techniques learned can then be extended to understand how classical fields like electromagnetism arise, and give further insight as to the connection between electric and magnetic fields.
PHYS 375
4 credits
Radiobiology and Radiation Protection
Prerequisite(s): PHYS 275, (one of the following: STAT 104, STAT 106, MATH 270/STAT 270, or PHYS 232), and instructor's permission. Note: Both PHYS 225 and BIO 202 are recommended prerequisite courses.
An introduction to the essentials of radiation protection in different environments (especially medical), as well as the fundamentals of radiobiology, i.e. the study of the behavior of cells when exposed to different forms and levels of radiation.
Note: This course will be held off campus at the BC Cancer Agency (Abbotsford Hospital).
PHYS 381
3 credits
Mathematical Physics
Prerequisite(s): MATH 211 and (one of the following: PHYS 221 or MATH 255) and (one of the following: PHYS 112 or any other MATH course 200-level or above).
Partial and ordinary differential equations. Fourier series/transforms. Legendre polynomials. Laplace transforms. Applications to heat flow and waves. Laplace's equation in 1D, 2D, 3D using Cartesian, polar, and spherical co-ordinates. Special functions including Dirac Delta, Heaviside Theta, Si, Ci, Ei, Erf, Gamma.
Note: This course is offered as PHYS 381, MATH 381, and ENGR 257. Students may take only one of these for credit.
PHYS 382
3 credits
Modern Physics Laboratory I
Prerequisite(s): PHYS 221 or PHYS 232
Corequisite(s): One of PHYS 302, 321, 322, 351 or 410 is strongly recommended
This eclectic laboratory course is designed to give students a chance to perform many traditional and modern experiments. The students will be required to do a selection of experiments from a list spanning the many disciplines of physics: dynamics, optics, solid state physics, fluid dynamics, thermodynamics, electricity, magnetism, electronics, nuclear physics, etc. Students will also have the option of selecting a group of experiments concentrating on one branch of physics (e.g. advanced mechanics, optics, etc.)
PHYS 383
3 credits
Modern Physics Laboratory II
Prerequisite(s): PHYS 382
This laboratory course is a continuation of PHYS 382. Students must complete a different set of experiments than the ones done in PHYS 382 and must present a lab book at the beginning of the course to show the experiments previously completed.
PHYS 393
3 credits
Computational Physics I
Prerequisite(s): PHYS 221.
Symbolic and numerical computational physics focusing on plotting and fitting data; applications of numerical techniques and Monte Carlo methods; simulating and animating time-dependent systems; and random walks and diffusive processes.
PHYS 402
3 credits
Advanced Optics
Prerequisite(s): PHYS 225 and PHYS 312.
Pre- or corequisite(s): PHYS 351 recommended.
Overview of geometric and physical optics, Fermat’s principle of least time, index of refraction, dispersion, Snell’s law, and the reflection and refraction of light from arbitrary shaped surfaces; lenses and mirrors, magnifiers, microscopes and telescopes, the human eye and corrective lenses; Maxwell’s equations and the wave nature of light, interference and diffraction, Fourier optics, polarization, and the Jones calculus.
Note: Students with credit for PHYS 302 cannot take this course for further credit.
PHYS 408
3 credits
Special Topics in Physics
Prerequisite(s): 6 credits of PHYS 300 or above, and permission of the instructor
This class allows for students to study a topic in physics which is not included within the current course offerings of the department. Different topics will be identified by adding a letter to the course number, e.g. 408C, 408D. Interested students should contact the head of the Department of Physics for more information.
PHYS 410
3 credits
History of Physics
Prerequisite(s): Any 300-level Physics course.
Once students have learned how physics is performed in the current era, they should also learn how it all began. This course surveys the history of physics from its philosophical beginnings, to the 21st century advances affecting the modern world.
PHYS 412
3 credits
Advanced Electromagnetism
Prerequisite(s): PHYS 312.
Electromagnetic stress-energy-momentum tensor; propagation, polarization, reflection, and transmission of electromagnetic waves; the potential formulation of Maxwell’s equations; retarded potentials (including the Liénard-Wiechert potentials) for time-dependent charge and current distributions; classical electromagnetic radiation; and Lorentz transformations of electromagnetic fields.
PHYS 451
3 credits
Advanced Quantum Mechanics
Prerequisite(s): PHYS 351
Three dimensional quantum mechanics and multi-particle states, addition of angular momentum, Clebsch-Gordan coefficients, identical particles, weak and strong Pauli exclusion principle, the periodic table, and spectroscopic notation; perturbation theory, variational principle, Fermi’s golden rule and time dependent potentials; quantum scattering, cross sections and computation of scattering amplitudes.
PHYS 452
3 credits
Introduction to General Relativity
Prerequisite(s): PHYS 352
Einstein’s theory of general relativity; a description of gravity as a consequence of the curvature of spacetime; introduction to differential geometry and geodesics; Schwarzschild metric, gravitational waves, and FLRW cosmology.
PHYS 455
3 credits
Solid State Physics
Prerequisite(s): PHYS 231 and PHYS 351.
Pre- or corequisite(s): PHYS 311 recommended.
Binding of molecules and atoms, crystalline structures and Bravais lattices in 2 and 3 dimensions, symmetry operations, and the Miller indices; Bragg’s law and scattering off of crystals, x-ray diffraction, Brillouin zones, and form factors; lattice vibrations and phonons, dispersion relationships, thermal properties of crystals and heat capacities; Fermi levels and electrical properties, Bloch’s theorem and conduction bands.
PHYS 457
3 credits
Particle Physics
Prerequisite(s): PHYS 351. Note: PHYS 352 is recommended as a pre/corequisite.
The Standard Model of particle physics describing electromagnetic, weak, and strong interactions. Analyze decays and scattering processes using relativistic kinematics, conservation laws, and Feynman rules. Determine masses and magnetic dipole moments of light hadrons in the quark model.
PHYS 458
3 credits
Introduction to Nuclear Physics
Prerequisite(s): PHYS 351
Nuclear sizes and range, the periodic table and isotopes; Rutherford scattering (classical and quantum), nuclear form factors, and charge distributions; liquid drop model, binding energy and the Semi Empirical Mass Formula, binding energy curve, nuclear drip lines; shell models and spin-orbit coupling, magic numbers, mirror nuclei, spin and parity states; radioactive decay, fission and fusion, half-life and nuclear stability.
PHYS 481
3 credits
Advanced Mathematical Methods of Physics
Prerequisite(s): PHYS 381
Working physicists analyze physical systems and model them mathematically. The equations that arise are often complicated, so specific mathematical techniques have been developed over the years to solve them. These solutions then predict the future behaviour of that physical system. This course includes: Bessel functions and associated Legendre polynomials and their applications in mechanics, electromagnetism, and the hydrogen atom; the calculus of variations, with applications in classical mechanics, optics, and classical field theory, (with attention to coupled systems); Green function techniques; and applications to strings, electromagnetism, and heat. Students will work many problems initially using pen and paper, and then with Maple and/or C or FORTRAN. Computers will be used to generate numerical and/or graphical solutions.
PHYS 493
3 credits
Computer Algebra Physics II
Pre- or corequisite(s): PHYS 393 and PHYS 381
This course extends and augments the problem-solving skills of physics students taught in PHYS 393. Problems amenable to solving with computer algebra systems will be emphasized. The problem-solving emphasis will be on an understanding of the physics and on checking whether the solution correctly predicts the actual physical behaviour.
d370a3de3eb12548 | Quantum Walks
For a readable introduction to quantum walks see these Azimuth Blog posts as well as the review articles on walks listed at the end of this page.
A 'single particle quantum walker' moves on a graph, with dynamics governed by Schrödinger's equation. Originally proposed by Richard Feynman, quantum walks have these days become a standard model used to study quantum transport, and in fact quantum walks can even represent a universal model of quantum computation. Recently we developed a theory of quantum walks which exhibit time-reversal symmetry breaking in their node-to-node transition probabilities. We named these 'chiral quantum walks'.
Defining 'Chiral' Quantum Walks
• Quantum Transport Enhancement by Time-Reversal Symmetry Breaking
by Zoltan Zimboras, Mauro Faccin, Zoltan Kadar, James Whitfield, Ben Lanyon and Jacob Biamonte
Scientific Reports 3, 2361 (2013)
The main application we have found so far is to use this symmetry breaking as a passive means to control and direct quantum transport. We mathematically classified time-symmetric and time-asymmetric Hamiltonians and quantum circuits in terms of their underlying network elements and geometric structures, and we numerically studied several illustrative examples.
In a recent collaboration, we helped experimentally implement the most fundamental time-reversal asymmetric process. The experiments applied local gates in an otherwise time-symmetric quantum circuit to induce time-reversal asymmetry and thereby achieve (i) directional biasing in the transition probability between basis states, (ii) controlled enhancement, and (iii) controlled suppression of these transport probabilities.
Our results imply that the physical effect of time-symmetry breaking can play a role in coherent transport and offer an alternative means to control a quantum process. We have found the effect to be omnipresent in a range of quantum information protocols and algorithms; hence it provides what might turn out to be a useful yet untapped resource. We have worked towards a classification of the network configurations that give rise to the effect.
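As a minimal numerical illustration of the effect (this is a sketch, not the code behind the papers): a continuous-time quantum walk on a 3-cycle, where putting a complex phase on the hopping amplitudes keeps the Hamiltonian Hermitian but breaks time-reversal symmetry and biases transport around the cycle.

```python
import numpy as np
from scipy.linalg import expm

def transition_probs(phase, t=1.0):
    """Probabilities of reaching nodes 1 and 2 from node 0 at time t."""
    a = np.exp(1j * phase)           # hopping amplitude around the cycle
    H = np.array([[0, a, np.conj(a)],
                  [np.conj(a), 0, a],
                  [a, np.conj(a), 0]])   # Hermitian for any phase
    U = expm(-1j * H * t)
    psi = U[:, 0]                    # walker starts at node 0
    return np.abs(psi[1])**2, np.abs(psi[2])**2

for phase in [0.0, np.pi / 2]:
    p01, p02 = transition_probs(phase)
    print(f"phase = {phase:.2f}: P(0->1) = {p01:.3f}, P(0->2) = {p02:.3f}")
# phase = 0 (time-symmetric): the two probabilities are equal;
# phase = pi/2 (chiral): transport is strongly biased around the cycle.
```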
Quantum Transport Enhancement by Time-Reversal Symmetry Breaking
Scientific Reports 3, 2361 (2013)
abstract and open access link
Abstract. Models of continuous time quantum walks, which implicitly use time-reversal symmetric Hamiltonians, have been intensely used to investigate the effectiveness of transport. Here we show how breaking time-reversal symmetry of the unitary dynamics in this model can enable directional control, enhancement, and suppression of quantum transport. Examples ranging from exciton transport to complex networks are presented. This opens new prospects for more efficient methods to transport energy and information. [PDF]
Chiral Quantum Walks
DaWei Lu, Jacob Biamonte, Jun Li, Hang Li, Tomi H. Johnson, Ville Bergholm, Mauro Faccin, Zoltán Zimborás, Raymond Laflamme, Jonathan Baugh and Seth Lloyd
Physical Review A 93, 042302 (2016)
abstract and PDF
Abstract. Given its importance to many other areas of physics, from condensed matter physics to thermodynamics, time-reversal symmetry has had relatively little influence on quantum information science. Here we develop a network-based picture of time-reversal theory, classifying Hamiltonians and quantum circuits as time-symmetric or not in terms of the elements and geometries of their underlying networks. Many of the typical circuits of quantum information science are found to exhibit time-asymmetry. Moreover, we show that time-asymmetry in circuits can be controlled using local gates only, and can simulate time-asymmetry in Hamiltonian evolution. We experimentally implement a fundamental example in which controlled time-reversal asymmetry in a palindromic quantum circuit leads to near-perfect transport. Our results pave the way for using time-symmetry breaking to control coherent transport, and imply that time-asymmetry represents an omnipresent yet poorly understood effect in quantum information science.
Classification of Time-Reversal Symmetry Breaking in Quantum Walks
Jacob Biamonte and Jacob Turner
Draft available on request. (2016)
Review articles on [time symmetric] quantum walks
Quantum walks: a comprehensive review, S.E. Venegas-Andraca, Quantum Information Processing vol. 11(5), pp. 1015-1106 (2012) arXiv:1201.4780
Quantum random walks – an introductory overview, Julia Kempe, Contemporary Physics, Vol. 44 (4), p.307-327, 2003 arXiv:quant-ph/0303081
Decoherence in quantum walks – a review, Viv Kendon, Math. Struct. in Comp. Sci 17(6) pp 1169-1220 (2006) arXiv:quant-ph/0606016
Our other papers on quantum walks
Degree Distribution in Quantum Walks on Complex Networks
Mauro Faccin, Jacob Biamonte, Tomi Johnson, Sabre Kais, Piotr Migdał
Physical Review X 3, 041007 (2013)
abstract, popular summary and open access PDF
In this theoretical study, we analyze quantum walks on complex networks, which model network-based processes ranging from quantum computing to biology and even sociology. Specifically, we analytically relate the average long-time probability distribution for the location of a unitary quantum walker to that of a corresponding classical walker. The distribution of the classical walker is proportional to the distribution of degrees, which measures the connectivity of the network nodes and underlies many methods for analyzing classical networks, including website ranking. The quantum distribution becomes exactly equal to the classical distribution when the walk has zero energy, and at higher energies, the difference, the so-called quantumness, is bounded by the energy of the initial state. We give an example for which the quantumness equals a Rényi entropy of the normalized weighted degrees, guiding us to regimes for which the classical degree-dependent result is recovered and others for which quantum effects dominate.
Popular Summary
Imagine a web surfer mindlessly wandering from one page to another by clicking randomly on one of the many hyperlinks on each page they encounter. Where would they end up? This question and the answer to it actually are essential to how Google’s web-search engine decides the relative importance of the world’s webpages. Algorithmically, the world’s webpages are represented by a huge network of nodes (pages) and links (hyperlinks) and the mindless internet surfer by a “random walker.” Now, what happens if the random walker is quantum mechanical instead? This may sound like a question for science fiction, but it is actually part of a recent fundamental drive toward merging the science of complex networks—relevant to many scientific disciplines, including statistical physics, biology, computer science, and social science—with quantum mechanics. In this paper, we make one of the first steps in that drive: to uncover and delineate some of the fundamental connections and differences between classical and quantum networks, by developing and investigating a revealing toy model of quantum random walks on complex networks.
It is well known that for a classical random walker on a complex network, the probability of finding the walker on a node after a long time is proportional to the probability of that node’s degree (or the number of links to other nodes), reflecting only the network’s connective topology. A quantum walker, however, brings conceptually nontrivial subtleties to the problem, including hallmark quantum effects such as quantum interference and the ability of a walker to be in a coherent superposition of states. In addition, the long-time state of a quantum walker depends on its initial state and most often does not converge to a steady state.
Here, we have constructed a model of a quantum walker on a network. The walker’s state is a multicomponent one, with the squared amplitude of the ith component representing the probability of finding the walker at node i of the network. This multicomponent state evolves in time according to a Schrödinger equation that has a correspondence with the classical walker. By investigating this model, we have succeeded in uncovering the following properties: (1) When the walker starts from its zero-energy ground state, the long-time average of probability of finding it at a node follows the classical result; (2) at higher energies, the walker’s long-time behavior deviates from the classical case, reflecting its quantumness, and this quantumness is quantitatively bounded by the initial energy of the walker and equal to Rényi entropy—a property associated with the network’s degree distribution.
Our paper thus provides the first analytical connection between classical and quantum walks on complex networks, as well as highlighting their differences. We see this work as the beginning of an exciting development that will involve quantum physics, graph and network theory, and the physics of stochastic processes.
Community Detection in Quantum Complex Networks
Mauro Faccin, Piotr Migdał, Tomi Johnson, Ville Bergholm, Jacob Biamonte
Physical Review X 4, 041012 (2014)
abstract, popular summary and open access PDF
Determining community structure is a central topic in the study of complex networks, be it technological, social, biological or chemical, static or in interacting systems. In this paper, we extend the concept of community detection from classical to quantum systems—a crucial missing component of a theory of complex networks based on quantum mechanics. We demonstrate that certain quantum mechanical effects cannot be captured using current classical complex network tools and provide new methods that overcome these problems. Our approaches are based on defining closeness measures between nodes, and then maximizing modularity with hierarchical clustering. Our closeness functions are based on quantum transport probability and state fidelity, two important quantities in quantum information theory. To illustrate the effectiveness of our approach in detecting community structure in quantum systems, we provide several examples, including a naturally occurring light-harvesting complex, LHCII. The prediction of our simplest algorithm, semiclassical in nature, mostly agrees with a proposed partitioning for the LHCII found in quantum chemistry literature, whereas our fully quantum treatment of the problem uncovers a new, consistent, and appropriately quantum community structure.
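A hedged sketch of the general pipeline the abstract describes (closeness from quantum transport probability, then hierarchical clustering), using an assumed toy Hamiltonian of two triangles joined by a single edge; the paper's actual algorithms, including the modularity-maximization step and the fidelity-based closeness, are more involved.

```python
import numpy as np
from scipy.linalg import expm
from scipy.cluster.hierarchy import fcluster, linkage

# Assumed toy Hamiltonian: adjacency matrix of two triangles joined by one edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Closeness of nodes i and j: transport probability |<j|exp(-iHt)|i>|^2,
# averaged over a window of evolution times
ts = np.linspace(0.1, 5.0, 50)
C = np.mean([np.abs(expm(-1j * A * t))**2 for t in ts], axis=0)

D = 1.0 - (C + C.T) / 2.0                  # turn closeness into a dissimilarity
np.fill_diagonal(D, 0.0)
Z = linkage(D[np.triu_indices(6, 1)], method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # expect the two triangles
```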
Popular Summary
Real-life networks such as groups of animals and biochemical assemblies exhibit complex relationships that can benefit from systematic study. The macroscopic properties of a network cannot be easily deduced from knowledge of its microscopic properties. Such a deduction is aided by the identification of strongly connected subnetworks, called communities. For traditional networked systems, the problem of community detection has, accordingly, received a significant amount of attention, and a multitude of techniques are employed for this task, often based on dynamical processes within the network. No methods are currently known for community detection in quantum networks, despite a growing interest in large networks in quantum biology, transport, and communication. We extend the concept of community detection from classical to quantum systems, providing a crucial missing tool for analyzing quantum systems with a network structure.
We argue that breaking down a quantum system into strongly correlated parts, i.e., a form of community partitioning, is an essential precursor for any simulation that aims to use this partitioning to reduce computational costs. We adapt traditional community detection methods that, as their starting point, use a measure of “closeness” of any two basic network components, denoted “nodes.” The computational costs of simulations scale exponentially with the number of nodes. We investigate quantum systems that are generally smaller than the classical systems typically studied, and we naturally ensure that the closeness measure captures relevant quantum effects, which can therefore lead to partitionings that are significantly different than those expected based on classical analyses. We partition nodes into communities using a quantum-walk process, which is akin to partitioning Hilbert space into orthogonal subspaces, illustrating our analyses on a light-harvesting complex.
We anticipate that our results will be useful for conducting numerical analyses of these systems.
Popular Media on Chiral Quantum Walks
• First experiment to break time-reversal symmetry in quantum walks, Institute for Quantum Computing, University of Waterloo – Press Release
Experimental data and quantum circuit from Physical Review A 93, 042302 (2016).
Classification of time-symmetry breaking in quantum walks
Moscow State University – July 25, 2016
abstract slides
Quantum walks on graphs represent an established model capturing essential physics behind a host of natural and synthetic phenomena. Quantum walks have further been proven to provide a universal model of quantum computation and have been shown to capture the core underlying physics of several biological processes in which quantum effects play a central role. A ‘single particle quantum walker’ moves on a graph, with dynamics governed by Schrödinger’s equation; quantum theory predicts the probability that the walker transitions between the graph’s nodes. Any quantum process in finite dimensions can be viewed as a single particle quantum walk.
Until recently, quantum walks implicitly modeled only probability transition rates between nodes which were symmetric under time inversion. Breaking this time-reversal symmetry provides a new arena to consider applications of this symmetry breaking and to better understand its foundations. The main application discovered so far is that this symmetry breaking can be utilized as a passive means to control and direct quantum transport.
A subtle interplay between the assignment of complex Hamiltonian edge weights and the geometry of the underlying network has emerged in a sequence of studies. This interplay has been central to several works but, in the absence of definitive statements, past work has only produced criteria for a process on a graph to be time-symmetric, leaving the classification problem and its implications open.
Here we provide a full classification of the Hamiltonians which enable the breaking of time-reversal symmetry in their induced transition probabilities. Our results are furthermore proven in terms of the geometry of the corresponding Hamiltonian support graph. We found that the effect can only be present if the underlying support graph is not bipartite, whereas certain bipartite graphs give rise to transition probability suppression, but not broken time-reversal symmetry. These results fill an important gap in understanding the role this omnipresent effect plays in quantum information science.
Before solving the general case, we motivate our study by solving several toy versions of the time-symmetry classification problem. The general classification results are found using a host of techniques, primarily from algebraic geometry and the theory of invariants. In all cases, the connectivity of the network plays a central role in the presence of the effect, providing an avenue to explore the effect using tools from complex network and graph theory.
We study a natural equivalence relation on quantum walks using tools from classical representation theory. We prove that a time-asymmetric quantum walk has a non-bipartite Hamiltonian support graph, and that the non-bipartite requirement is strict. Additionally, we show that a bipartite (and hence time-symmetric) Hamiltonian can be made to break time-symmetry through the addition of non-uniform diagonal Hamiltonian terms, corresponding to self-loops in the network picture. We further prove that the addition of diagonal terms (called disorder in some areas of quantum theory) has no effect on Hamiltonians that are equivalent to real matrices. The general results of the classification rest on the proof of several core theorems, including a result which shows that if all the invariants in a specific set—which we provide—of a given Hamiltonian take real values, then the induced evolution will necessarily be time-symmetric. Furthermore, we show that trees are the only graphs on which there is a unique quantum walk up to equivalence.
Any quantum process in finite dimensions, including quantum circuits, algorithms, quantum gates, protocols and models of coherent and open quantum transport, can be viewed as a single particle quantum walk on a graph. By changing to this framework, we have found the effect to be omnipresent—yet previously unnoticed—in a range of such quantum information protocols and algorithms. The division found in our classification not only fills a gap in the quantum walks literature, which has recently begun to study time-reversal symmetry breaking, but also has implications for the target application areas.
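The dichotomy claimed above is easy to see on the smallest examples. The following minimal sketch (my own illustration, not taken from the paper) puts an assumed complex phase on one edge of a triangle (non-bipartite), which biases site-to-site transfer probabilities, while the same phase on a path graph (bipartite) can be gauged away, leaving the probabilities direction-symmetric.

```python
import numpy as np
from scipy.linalg import expm

def p(H, i, j, t=1.0):
    """Probability that a walker starting at node i is found at node j at time t."""
    return abs(expm(-1j * H * t)[j, i])**2

phi = np.pi / 2  # complex edge phase (assumed value)

# Triangle: non-bipartite, so time-asymmetry is allowed by the classification
H_tri = np.array([[0, 1, np.exp(1j * phi)],
                  [1, 0, 1],
                  [np.exp(-1j * phi), 1, 0]])

# Path on 3 nodes: bipartite, so the same phase can be gauged away
H_path = np.array([[0, np.exp(1j * phi), 0],
                   [np.exp(-1j * phi), 0, 1],
                   [0, 1, 0]])

print(p(H_tri, 0, 1), p(H_tri, 1, 0))    # unequal: broken time-reversal symmetry
print(p(H_path, 0, 2), p(H_path, 2, 0))  # equal: time-symmetric despite the phase
```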
[1] Classification of time-reversal symmetry breaking in quantum walks
Jacob Biamonte and Jacob Turner
Draft available on request. (2016)
|
abb6690caa2497d4 |
PX101 Quantum Phenomena
Lecturer: Oleg Petrenko
Weighting: 6 CATS
This module begins by showing how classical physics is unable to explain some of the properties of light, electrons and atoms. (Theories in physics, which make no reference to quantum theory, are usually called classical theories.) It then deals with some of the key contributions to the development of quantum physics including those of: Planck, who first suggested that the energy in a light wave comes in discrete units or 'quanta'; Einstein, whose theory of the photoelectric effect implied a 'duality' between particles and waves; Bohr, who suggested a theory of the atom that assumed that not only energy but also angular momentum was quantised; and Schrödinger who wrote down the first wave-equations to describe matter.
Aims: To describe how the discovery of effects which could not be explained using classical physics led to the development of quantum theory. The module should develop the ideas of wave-particle duality and introduce the wave theory of matter based on Schrödinger's equation.
At the end of the module you should be able to:
1. Discuss how key pieces of experimental evidence implied a wave-particle duality for both light and matter
2. Discuss the background to and issues surrounding Schrödinger's equation. This includes the interpretation of the wavefunction and the role of wavepackets and stationary states
3. Manipulate the time-independent Schrödinger equation for simple 1-dimensional potentials
Waves, particles and thermodynamics before quantum theory
Thermal radiation and the origin of Quantum Theory: Blackbody Radiation, derivation for the case of a `1D black-body', the idea of modes, Wien's law, Rayleigh-Jeans formula, Planck's hypothesis and E = hf. The photoelectric effect - Einstein's interpretation.
Waves or Particles? Interference - a problem for the particle picture; the Compton effect - direct evidence for the particle nature of radiation.
Atoms and atomic spectra - a problem for classical mechanics. Bohr's Model of the Atom: quantization of angular momentum, atomic levels in hydrogen. De Broglie's hypothesis. Experimental verification of the wave-like nature of electrons - electron diffraction.
Quantum Mechanics
Correspondence Principle. The Schrödinger wave equation. Relation of the wavefunction to probability density. Probability distribution, need for normalization. Superpositions of waves to give standing waves, beats and wavepackets. Gaussian wavepacket. Use of wavepackets to represent localized particles. Group velocity and correspondence principle again. Wave-particle duality, Heisenberg's uncertainty principle and its use to make order of magnitude estimates.
Using Schrödinger's equation
Including the effect of a potential. Importance of stationary states and time-independent Schrödinger equation. Infinite potential well and energy quantization. The potential step - notion of tunnelling. Alpha decay of nuclei. Status of wave mechanics.
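As a quick numerical aside (my own illustration, not part of the module materials): the infinite-well quantization mentioned above gives energy levels E_n = n²π²ħ²/(2mL²), which a few lines of Python can evaluate for an electron in an assumed 1 nm well.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J
L    = 1e-9              # a 1 nm infinite potential well (assumed width)

# Infinite-well energy levels: E_n = n^2 * pi^2 * hbar^2 / (2 m L^2)
n = np.arange(1, 4)
E_n = n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2)
print(E_n / eV)          # roughly 0.38, 1.50, 3.38 eV: energy is quantized
```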
Commitment: 15 Lectures + 3 problems classes
Assessment: 1 hour examination
Recommended Texts: H D Young and R A Freedman, University Physics, Pearson.
This module has a home page with links to various documents and biographies.
Leads from: A-level Physics and Mathematics
Leads to: PX262 Quantum Mechanics and its Applications |
6e1c5525b64b9a3c | The quantum of time and distance
Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you immediately read the more recent exposé on the matter presented here, which you can find by clicking on the provided link. In fact, I actually made some (small) mistakes when writing the post below.
Original post:
In my previous post, I introduced the elementary wavefunction of a particle with zero rest mass in free space (i.e. the particle also has zero potential). I wrote that wavefunction as e^(i(kx − ωt)) = e^(i(x/2 − t/2)) = cos[(x−t)/2] + i∙sin[(x−t)/2], and we can represent that function as follows:
If the real and imaginary axes in the image above are the y- and z-axis respectively, then the x-axis here is time, so here we’d be looking at the shape of the wavefunction at some fixed point in space.
Now, we could also look at its shape at some fixed point in time, so the x-axis would then represent the spatial dimension. Better still, we could animate the illustration to incorporate both the temporal as well as the spatial dimension. The following animation does the trick quite well:
Please do note that space is one-dimensional here: the y- and z-axis represent the real and imaginary part of the wavefunction, not the y- or z-dimension in space.
You’ve seen this animation before, of course: I took it from Wikipedia, and it actually represents the electric field vector (E) for a circularly polarized electromagnetic wave. To get a complete picture of the electromagnetic wave, we should add the magnetic field vector (B), which is not shown here. We’ll come back to that later. Let’s first look at our zero-mass particle denuded of all properties, so that’s not an electromagnetic wave—read: a photon. No. We don’t want to talk charges here.
OK. So far so good. A zero-mass particle in free space. So we got that e^(i(x/2 − t/2)) = cos[(x−t)/2] + i∙sin[(x−t)/2] wavefunction. We got that function assuming the following:
1. Time and distance are measured in equivalent units, so c = 1. Hence, the classical velocity (v) of our zero-mass particle is equal to 1, and we also find that the energy (E), mass (m) and momentum (p) of our particle are numerically the same. We wrote: E = m = p, using the p = m·v (for v = c) and the E = m∙c² formulas.
2. We also assumed that the quantum of energy (and, hence, the quantum of mass, and the quantum of momentum) was equal to ħ/2, rather than ħ. The de Broglie relations (k = p/ħ and ω = E/ħ) then gave us the rather particular argument of our wavefunction: kx − ωt = x/2 − t/2.
The latter hypothesis (E = m = p = ħ/2) is somewhat strange at first but, as I showed in that post of mine, it avoids an apparent contradiction: if we’d use ħ, then we would find two different values for the phase and group velocity of our wavefunction. To be precise, we’d find v for the group velocity, but v/2 for the phase velocity. Using ħ/2 solves that problem. In addition, using ħ/2 is consistent with the Uncertainty Principle, which tells us that ΔxΔp = ΔEΔt = ħ/2.
OK. Take a deep breath. Here I need to say something about dimensions. If we’re saying that we’re measuring time and distance in equivalent units – say, in meter, or in seconds – then we are not saying that they’re the same. The dimensions of time and space are fundamentally different, as evidenced by the fact that, for example, time flows in one direction only, as opposed to x. To be precise, we assumed that x and t become countable variables themselves at some point. However, if we’re at t = 0, then we’d count time as t = 1, 2, etcetera only. In contrast, at the point x = 0, we can go to x = +1, +2, etcetera but we may also go to x = −1, −2, etc.
I have to stress this point, because what follows will require some mental flexibility. In fact, we often talk about natural units, such as Planck units, which we get from equating fundamental constants, such as c, or ħ, to 1, but then we often struggle to interpret those units, because we fail to grasp what it means to write c = 1, or ħ = 1. For example, writing c = 1 implies we can measure distance in seconds, or time in meter, but it does not imply that distance becomes time, or vice versa. We still need to keep track of whether or not we’re talking a second in time, or a second in space, i.e. c meter, or, conversely, whether we’re talking a meter in space, or a meter in time, i.e. 1/c seconds. We can make the distinction in various ways. For example, we could mention the dimension of each equation between brackets, so we’d write: t = 1×10⁻¹⁵ s [t] ≈ 299.8×10⁻⁹ m [t]. Alternatively, we could put a little subscript (like t, or d), so as to make sure it’s clear our meter is a ‘light-meter’, so we’d write: t = 1×10⁻¹⁵ s ≈ 299.8×10⁻⁹ m_t. Likewise, we could add a little subscript when measuring distance in light-seconds, so we’d write x = 3×10⁸ m ≈ 1 s_d, rather than x = 3×10⁸ m [x] ≈ 1 s [x].
If you wish, we could refer to the ‘light-meter’ as a ‘time-meter’ (or a meter of time), and to the light-second as a ‘distance-second’ (or a second of distance). It doesn’t matter what you call it, or how you denote it. In fact, you will never hear of a meter of time, nor will you ever see those subscripts or brackets. But that’s because physicists always keep track of the dimensions of an equation, and so they know. They know, for example, that the dimension of energy combines the dimensions of both force as well as distance, so we write: [energy] = [force]·[distance]. Read: energy amounts to applying a force over a distance. Likewise, momentum amounts to applying some force over some time, so we write: [momentum] = [force]·[time]. Using the usual symbols for energy, momentum, force, distance and time respectively, we can write this as [E] = [F]·[x] and [p] = [F]·[t]. Using the units you know, i.e. joule, newton, meter and seconds, we can also write this as: 1 J = 1 N·m and 1…
Hey! Wait a minute! What’s that N·s unit for momentum? Momentum is mass times velocity, isn’t it? It is. But it amounts to the same. Remember that mass is a measure for the inertia of an object, and so mass is measured with reference to some force (F) and some acceleration (a): F = m·a ⇔ m = F/a. Hence, [m] = kg = [F/a] = N/(m/s²) = N·s²/m. [Note that the m in the brackets is the symbol for mass but the other m is a meter!] So the unit of momentum is (N·s²/m)·(m/s) = N·s = newton·second.
Now, the dimension of Planck’s constant is the dimension of action, which combines all dimensions: force, time and distance. We write: ħ ≈ 1.0545718×10⁻³⁴ N·m·s (newton·meter·second). That’s great, and I’ll show why in a moment. But, at this point, you should just note that when we write that E = m = p = ħ/2, we’re just saying they are numerically the same. The dimensions of E, m and p are not the same. So what we’re really saying is the following:
1. The quantum of energy is ħ/2 newton·meter ≈ 0.527286×10⁻³⁴ N·m.
2. The quantum of momentum is ħ/2 newton·second ≈ 0.527286×10⁻³⁴ N·s.
What’s the quantum of mass? That’s where the equivalent units come in. We wrote: 1 kg = 1 N·s²/m. So we could substitute the distance unit in this equation (m) by s_d/c = s_d/(3×10⁸). So we get: 1 kg = 3×10⁸ N·s²/s_d. Can we scrap both ‘seconds’ and say that the quantum of mass (ħ/2) is equal to the quantum of momentum? Think about it.
The answer is… Yes and no—but much more no than yes! The two sides of the equation are only numerically equal, but we’re talking a different dimension here. If we’d write that the quantum of mass, 0.527286×10⁻³⁴ N·s²/s_d, is equal to the quantum of momentum, 0.527286×10⁻³⁴ N·s, you’d be equating two dimensions that are fundamentally different: space versus time. To reinforce the point, think of it the other way: think of substituting the second (s) for 3×10⁸ m_t. Again, you’d make a mistake. You’d have to write 0.527286×10⁻³⁴ N·(m_t)²/m, and you should not assume that a time-meter is equal to a distance-meter. They’re equivalent units, and so you can use them to get some number right, but they’re not equal: what they measure is fundamentally different. A time-meter measures time, while a distance-meter measures distance. It’s as simple as that. So what is it then? Well… What we can do is remember Einstein’s energy-mass equivalence relation once more: E = m·c² (and m is the mass here). Just check the dimensions once more: [m]·[c²] = (N·s²/m)·(m²/s²) = N·m. So we should think of the quantum of mass as the quantum of energy, as energy and mass are equivalent, really.
Back to the wavefunction
The beauty of the construct of the wavefunction resides in several mathematical properties of this construct. The first is its argument:
θ = kx − ωt, with k = p/ħ and ω = E/ħ
Its dimension is the dimension of an angle: we express it in radians. What’s a radian? You might think that a radian is a distance unit, because we measure an angle in radians as the length of an arc.
But you’re wrong. An angle’s measurement in radians is numerically equal to the length of the corresponding arc of the unit circle but… Well… Numerically only. 🙂 Just do a dimensional analysis of θ = kx − ωt = (p/ħ)·x − (E/ħ)·t. The dimension of p/ħ is (N·s)/(N·m·s) = 1/m = m⁻¹, so we get some quantity expressed per meter, which we then multiply by x, so we get a pure number. No dimension whatsoever! Likewise, the dimension of E/ħ is (N·m)/(N·m·s) = 1/s = s⁻¹, which we then multiply by t, so we get another pure number, which we then add to get our argument θ. Hence, Planck’s quantum of action (ħ) does two things for us:
1. It expresses p and E in units of ħ.
2. It sorts out the dimensions, ensuring our argument is a dimensionless number indeed.
In fact, I’d say the ħ in the (p/ħ)·x term in the argument is a different ħ than the ħ in the (E/ħ)·t term. Huh? What? Yes. Think of the distinction I made between s and s_d, or between m and m_t. Both were numerically the same: they captured a magnitude, but they measured different things. We’ve got the same thing here:
1. The meter (m) in ħ ≈ 1.0545718×10⁻³⁴ N·m·s in (p/ħ)·x is the dimension of x, and so it gets rid of the distance dimension. So the m in ħ ≈ 1.0545718×10⁻³⁴ N·m·s goes, and what’s left measures p in terms of units equal to 1.0545718×10⁻³⁴ N·s, so we get a pure number indeed.
2. Likewise, the second (s) in ħ ≈ 1.0545718×10⁻³⁴ N·m·s in (E/ħ)·t is the dimension of t, and so it gets rid of the time dimension. So the s in ħ ≈ 1.0545718×10⁻³⁴ N·m·s goes, and what’s left measures E in terms of units equal to 1.0545718×10⁻³⁴ N·m, so we get another pure number.
3. Adding both gives us the argument θ: a pure number that measures some angle.
That’s why you need to watch out when writing θ = (p/ħ)·x − (E/ħ)·t as θ = (p·x − E·t)/ħ or – in the case of our elementary wavefunction for the zero-mass particle – as θ = (x/2 − t/2) = (x − t)/2. You can do it – in fact, you should do so when trying to calculate something – but you need to be aware that you’re making abstraction of the dimensions. That’s quite OK, as you’re just calculating something—but don’t forget the physics behind it!
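If you want to see this dimensional bookkeeping done mechanically, here is a small sketch (my own addition) using the third-party pint units library: it checks that (p/ħ)·x and (E/ħ)·t are indeed pure numbers, with the quantum values used above.

```python
import pint  # third-party units library, used here only for dimension bookkeeping

u = pint.UnitRegistry()
hbar = 1.0545718e-34 * u.newton * u.meter * u.second
E = (hbar.magnitude / 2) * u.newton * u.meter    # quantum of energy, in N·m
p = (hbar.magnitude / 2) * u.newton * u.second   # quantum of momentum, in N·s

x, t = 1.0 * u.meter, 1.0 * u.second
k, omega = p / hbar, E / hbar                    # the de Broglie relations
print(k.to_base_units(), omega.to_base_units())  # 0.5 / meter, 0.5 / second
theta = k * x - omega * t
print(theta.to_base_units())                     # 0.0 dimensionless: a pure number
```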
You’ll immediately ask: what’s the physics behind all this? Well… I don’t know. Perhaps nobody knows. As Feynman once famously said: “I think I can safely say that nobody understands quantum mechanics.” But then he never wrote that down, and I am sure he didn’t really mean it. And he said it back in 1964, which is 50 years ago now. 🙂 So let’s try to understand it at least. 🙂
Planck’s quantum of action – 1.0545718×10⁻³⁴ N·m·s – comes to us as a mysterious quantity. A quantity is more than a number. A number is something like π or e, for example. It might be a complex number, like e^(iθ), but that’s still a number. In contrast, a quantity has some dimension, or some combination of dimensions. A quantity may be a scalar quantity (like distance), or a vector quantity (like a field vector). In this particular case (Planck’s ħ or h), we’ve got a physical constant combining three dimensions: force, time and distance—or space, if you want. It’s a quantum, so it comes as a blob—or a lump, if you prefer that word. However, as I see it, we can sort of project it in space as well as in time. In fact, if this blob is going to move in spacetime, then it will move in space as well as in time: t will go from 0 to 1, and x goes from 0 to ±1, depending on what direction we’re going. So when I write that E = p = ħ/2—which, let me remind you, are two numerical equations, really—I sort of split Planck’s quantum over E = m and p respectively.
You’ll say: what kind of projection or split is that? When projecting some vector, we’ll usually have some sine and cosine, or a 1/√2 factor—or whatever, but not a clean 1/2 factor. Well… I have no answer to that, except that this split fits our mathematical construct. Or… Well… I should say: my mathematical construct. Because what I want to find is this clean Schrödinger equation:
∂ψ/∂t = i·(ħ/2m)·∇²ψ = i·∇²ψ for m = ħ/2
Now I can only get this equation if (1) E = m = p and (2) m = ħ/2 (which amounts to writing that E = p = m = ħ/2). There’s also the Uncertainty Principle. If we are going to consider the quantum vacuum, i.e. if we’re going to look at space (or distance) and time as countable variables, then Δx and Δt in the ΔxΔp = ΔEΔt = ħ/2 equations are ±1 and, therefore, Δp and ΔE must be ±ħ/2. In any case, I am not going to try to justify my particular projection here. Let’s see what comes out of it.
The quantum vacuum
Schrödinger’s equation for my zero-mass particle (with energy E = m = p = ħ/2) amounts to writing the following pair of equations (a small numerical check follows right below):
1. Re(∂ψ/∂t) = −Im(∇²ψ)
2. Im(∂ψ/∂t) = Re(∇²ψ)
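As a sanity check on this pair of coupled equations, here is a minimal finite-difference sketch (my own addition, with an assumed lattice spacing and time step): it evolves the real and imaginary parts against each other and watches the norm.

```python
import numpy as np

# psi = a + i*b evolved under d(psi)/dt = i * laplacian(psi), i.e.
# da/dt = -lap(b) and db/dt = +lap(a), the two equations above.
N, dx, dt = 200, 1.0, 0.1
x = np.arange(N) * dx
envelope = np.exp(-(x - 100.0)**2 / 50.0)        # a localized packet
a, b = envelope * np.cos(x / 2), envelope * np.sin(x / 2)

def lap(f):  # periodic second difference
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

norm0 = np.sum(a**2 + b**2)
for _ in range(500):
    a = a - dt * lap(b)
    b = b + dt * lap(a)   # uses the updated a: a stable, staggered update

print(norm0, np.sum(a**2 + b**2))  # the norm stays (approximately) conserved
```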
Now that reminds us of the propagation mechanism for the electromagnetic wave, which we wrote as ∂B/∂t = −∇×E and ∂E/∂t = ∇×B, also assuming we measure time and distance in equivalent units. However, we’ll come back to that later. Let’s first study the equation we have, i.e.
e^(i(kx − ωt)) = e^(i(ħ·x/2 − ħ·t/2)/ħ) = e^(i(x/2 − t/2)) = cos[(x−t)/2] + i∙sin[(x−t)/2]
Let’s think some more. What is that e^(i(x/2 − t/2)) function? It’s subject to conceiving time and distance as countable variables, right? I am tempted to say: as discrete variables, but I won’t go that far—not now—because the countability may be related to a particular interpretation of quantum physics. So I need to think about that. In any case… The point is that x can only take on values like 0, 1, 2, etcetera. And the same goes for t. To make things easy, we’ll not consider negative values for x right now (and, obviously, not for t either). But you can easily check it doesn’t make a difference: if you think of the propagation mechanism – which is what we’re trying to model here – then x is always positive, because we’re moving away from some source that caused the wave. In any case, we’ve got an infinite set of points like:
• e^(i(0/2 − 0/2)) = e^(i·0) = cos(0) + i∙sin(0)
• e^(i(1/2 − 0/2)) = e^(i(1/2)) = cos(1/2) + i∙sin(1/2)
• e^(i(0/2 − 1/2)) = e^(i(−1/2)) = cos(−1/2) + i∙sin(−1/2)
• e^(i(1/2 − 1/2)) = e^(i·0) = cos(0) + i∙sin(0)
In my previous post, I calculated the real and imaginary part of this wavefunction for x going from 0 to 14 (as mentioned, in steps of 1) and for t doing the same (also in steps of 1), and what we got looked pretty good:
[Two graphs: the cosine (real part) and sine (imaginary part) values of the wavefunction on the x-t grid]
I also said that, if you wonder what the quantum vacuum could possibly look like, you should probably think of these discrete spacetime points, and some complex-valued wave that travels as illustrated above. In case you wonder what’s being illustrated here: the right-hand graph is the cosine value for all possible x = 0, 1, 2,… and t = 0, 1, 2,… combinations, and the left-hand graph depicts the sine values, so that’s the imaginary part of our wavefunction. Taking the absolute square of both gives 1 for all combinations. So it’s obvious we’d need to normalize and, more importantly, we’d have to localize the particle by adding several of these waves with the appropriate contributions. But that’s not our worry right now. I want to check whether those discrete time and distance units actually make sense. What’s their size? Is it anything like the Planck length (for distance) and/or the Planck time?
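For what it’s worth, the grid of values behind those two graphs takes only a few lines to reproduce (a sketch of my own, under the same conventions as above):

```python
import numpy as np

x = np.arange(0, 15)              # x = 0, 1, ..., 14, in steps of 1
t = np.arange(0, 15)
X, T = np.meshgrid(x, t)

re = np.cos((X - T) / 2)          # real part of e^(i(x/2 - t/2))
im = np.sin((X - T) / 2)          # imaginary part
print(re[0, :4])                  # cosine values along t = 0
print(np.max(np.abs(re**2 + im**2 - 1)))  # absolute square is 1 everywhere
```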
Let’s see. What are the implications of our model? The question here is: if ħ/2 is the quantum of energy, and the quantum of momentum, what’s the quantum of force, and the quantum of time and/or distance?
Huh? Yep. We treated distance and time as countable variables above, but now we’d like to express the difference between x = 0 and x = 1 and between t = 0 and t = 1 in the units we know, that is, in meters and in seconds. So how do we go about that? Do we have enough equations here? Not sure. Let’s see…
We obviously need to keep track of the various dimensions here, so let’s refer to those discrete time and distance units as t_P and l_P respectively. The subscript (P) refers to Planck, and the l refers to a length, but we’re likely to find something else than Planck units. I just need placeholder symbols here. To be clear: t_P and l_P are expressed in seconds and meters respectively, just like the actual Planck time and distance, which are equal to 5.391×10⁻⁴⁴ s (more or less) and 1.6162×10⁻³⁵ m (more or less) respectively. As I mentioned above, we get these Planck units by equating fundamental physical constants to 1. Just check it: (1.6162×10⁻³⁵ m)/(5.391×10⁻⁴⁴ s) = c ≈ 3×10⁸ m/s. So the following relation must be true: l_P = c·t_P, or l_P/t_P = c.
Now, as mentioned above, there must be some quantum of force as well, which we’ll write as F_P, and which is – obviously – expressed in newton (N). So we have:
1. E = ħ/2 ⇒ 0.527286×10⁻³⁴ N·m = F_P·l_P N·m
2. p = ħ/2 ⇒ 0.527286×10⁻³⁴ N·s = F_P·t_P N·s
Let’s try to divide both formulas: E/p = (F_P·l_P N·m)/(F_P·t_P N·s) = l_P/t_P m/s = c m/s. That’s consistent with the E/p = c equation. Hmm… We found what we knew already. My model is not fully determined, it seems. 😦
What about the following simplistic approach? E is numerically equal to 0.527286×10⁻³⁴, and its dimension is [E] = [F]·[x], so we write: E = 0.527286×10⁻³⁴·[E] = 0.527286×10⁻³⁴·[F]·[x]. Hence, [x] = [E]/[F] = (N·m)/N = m. That just confirms what we already know: the quantum of distance (i.e. our fundamental unit of distance) can be expressed in meter. But our model does not give us that fundamental unit. It only gives us its dimension (meter), which is stuff we knew from the start. 😦
Let’s try something else. Let’s just accept the Planck length and time, so we write:
• l_P = 1.6162×10⁻³⁵ m
• t_P = 5.391×10⁻⁴⁴ s
Now, if the quantum of action is equal to ħ N·m·s = F_P·l_P·t_P N·m·s = 1.0545718×10⁻³⁴ N·m·s, and if the two definitions of l_P and t_P above hold, then 1.0545718×10⁻³⁴ N·m·s = (F_P N)×(1.6162×10⁻³⁵ m)×(5.391×10⁻⁴⁴ s) ≈ F_P·8.713×10⁻⁷⁹ N·m·s ⇔ F_P ≈ 1.21×10⁴⁴ N.
Does that make sense? It does according to Wikipedia, but how do we relate this to our E = p = m = ħ/2 equations? Let’s try this:
1. E_P = (1.0545718×10⁻³⁴ N·m·s)/(5.391×10⁻⁴⁴ s) = 1.956×10⁹ J. That corresponds to the regular Planck energy.
2. p_P = (1.0545718×10⁻³⁴ N·m·s)/(1.6162×10⁻³⁵ m) = 6.525 N·s. That corresponds to the regular Planck momentum.
Is E_P = p_P? Let’s substitute: 1.956×10⁹ N·m = 1.956×10⁹ N·(s/(2.998×10⁸)) = (1.956×10⁹/2.998×10⁸) N·s = 6.525 N·s. So, yes, it comes out alright. In fact, I omitted the 1/2 factor in the calculations, but it doesn’t matter: it does come out alright. So I did not prove that the difference between my x = 0 and x = 1 points (or my t = 0 and t = 1 points) is equal to the Planck length (or the Planck time unit), but I did show my theory is, at the very least, compatible with those units. That’s more than enough for now. And I’ll surely come back to it in my next post. 🙂
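The arithmetic in this section is easy to check; a minimal sketch of my own, using the same rounded constants as above:

```python
hbar = 1.0545718e-34   # N·m·s
c    = 2.998e8         # m/s
l_P  = 1.6162e-35      # m, Planck length
t_P  = 5.391e-44       # s, Planck time

F_P = hbar / (l_P * t_P)   # quantum of force, ~1.21e44 N
E_P = hbar / t_P           # ~1.956e9 J, the Planck energy
p_P = hbar / l_P           # ~6.525 N·s, the Planck momentum
print(F_P, E_P, p_P)
print(E_P / c)             # equals p_P, i.e. E_P = p_P·c checks out
```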
Post Scriptum: One must solve the following equations to get the fundamental Planck units:
[Figure: the five defining equations of the Planck units]
We have five fundamental equations for five fundamental quantities: t_P, l_P, F_P, m_P, and E_P. So that’s OK: it’s a fully determined system alright! But where do the expressions with G, k_B (the Boltzmann constant) and ε₀ come from? What does it mean to equate those constants to 1? Well… I need to think about that, and I’ll get back to you on it. 🙂 |
220a7306ad3a8e38 |
General Formulation
Let us consider a one-dimensional, single-band model. In a semiconductor heterostructure, the electron wavefunctions are described most simply by the effective-mass Schrödinger equation:

−(ħ²/2) (d/dx) [ (1/m*(x)) (dψ/dx) ] + V(x) ψ(x) = E ψ(x).   (1)
The form of the kinetic-energy operator in (1) is the simplest Hermitian form one can use if the materials parameters (such as the effective mass m*) vary with position [7,8]. Now, let us assume that all variation of the potential and of the materials parameters is confined to an interval [x_l, x_r], so that outside of this interval the form of Schrödinger's equation is translationally invariant. Thus, in the regions x < x_l and x > x_r (the ``asymptotic regions''), the solutions of Schrödinger's equation are superpositions of plane waves, and the energies of these plane waves are described by a well-defined dispersion relation E(k). We will refer to quantities in the asymptotic regions by the subscripts l and r for the left- and right-hand regions, respectively.
Now, for any energy E at which propagating states exist in both asymptotic regions, there will be two independent solutions to Schrödinger's equation, ψ_l and ψ_r, representing electrons incident from the left and the right, respectively. In the asymptotic regions, these solutions will have the form:

ψ_l(x) = e^(ik_l x) + r_l e^(−ik_l x) for x < x_l,   ψ_l(x) = t_l e^(ik_r x) for x > x_r,   (2)

ψ_r(x) = t_r e^(−ik_l x) for x < x_l,   ψ_r(x) = e^(−ik_r x) + r_r e^(ik_r x) for x > x_r.   (3)
In general, k_l ≠ k_r, because the dispersion relations E_l(k) and E_r(k) differ. There exist several rigorous relationships between the transmission and reflection amplitudes t_l, t_r and r_l, r_r. Invoking Green's identity leads to the current-continuity equations

v_l (1 − |r_l|²) = v_r |t_l|²,   (4)

v_r (1 − |r_r|²) = v_l |t_r|²,   (5)
and an orthogonality condition

v_l r_l* t_r + v_r t_l* r_r = 0.   (6)
One may also invoke time-reversal symmetry to find the relationship between t_l and t_r. Noting that ψ_l*, say, is a solution of Schrödinger's equation with energy E, it must be possible to write it as a linear combination of ψ_l and ψ_r. With a bit of manipulation, one finds

v_l t_r = v_r t_l.   (7)
In most textbooks, the relations (1--7) are presented with the wavenumbers k in place of the velocities v. Such expressions are derived within the assumptions that the dispersion relation (or band structure) is perfectly parabolic and does not depend upon position. Neither assumption is warranted in semiconductor heterostructures, as electrons in heterostructure devices frequently explore non-parabolic regions of the energy-band structure, and the band structure itself (particularly the effective mass) will vary with semiconductor composition and thus position. The expressions (1--7) are valid for nonparabolic and spatially varying dispersion relations and should thus always be used. The velocity v is the electron group velocity, given by

v = (1/ħ) dE/dk,   (8)
using the dispersion relation appropriate to the given semiconductor material.
One conventionally defines the transmission probability T as the ratio of the transmitted to the incident flux, or

T = (v_r/v_l) |t_l|² = (v_l/v_r) |t_r|²,   (9)
so that T is the same for both directions of incidence. One can also show, using equations (1--7),

R = |r_l|² = |r_r|².   (10)
(Because the reflection probability is measured on the same side of the system as the incident flux, there is no velocity correction.) Also note that, from (1),

T + R = 1,   (11)
as one would expect.
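These velocity-corrected formulas are easy to exercise numerically. Below is a minimal sketch (my own illustration, not from the original notes): plane-wave matching at a single abrupt heterojunction, with assumed parabolic dispersions and assumed effective-mass values, using the matching conditions implied by the kinetic-energy operator in (1) (ψ and (1/m*)ψ′ continuous) together with the definition of T above.

```python
import numpy as np

hbar = 1.054571817e-34
m0   = 9.1093837015e-31
eV   = 1.602176634e-19

# Abrupt heterojunction: step height V0, different effective masses on each side.
# Matching psi and (1/m) psi' at the junction gives t = 2*u_l / (u_l + u_r),
# with u = k/m proportional to the group velocity on each side.
m_l, m_r = 0.067 * m0, 0.15 * m0   # GaAs / AlGaAs-like values (assumed)
V0 = 0.1 * eV

def transmission(E):
    k_l = np.sqrt(2 * m_l * E) / hbar
    k_r = np.sqrt(2 * m_r * (E - V0) + 0j) / hbar   # complex below the step
    u_l, u_r = k_l / m_l, k_r / m_r
    t = 2 * u_l / (u_l + u_r)                       # transmission amplitude
    # T = (v_r / v_l) |t|^2: the velocity-corrected definition in the text
    return (u_r.real / u_l) * abs(t)**2

for E in [0.05 * eV, 0.15 * eV, 0.3 * eV]:
    print(E / eV, transmission(E))   # T = 0 below the step; T + R = 1 holds
```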
Finally, we investigate the normalization and orthogonality properties of the scattering states. Because we are dealing with a continuum of states over which we must integrate to evaluate any physical observable, a ``delta-function'' normalization is appropriate. With such a normalization convention, any finite contribution to the inner product, such as the integral over the interior region [x_l, x_r], may be neglected in comparison to the integrals over the asymptotic regions x < x_l and x > x_r. Thus,

∫ ψ_l*(k_l, x) ψ_l(k_l′, x) dx = 2π δ(k_l − k_l′).   (12)
A relationship similar to (12) can be written for ψ_r. The δ-functions can be rewritten in terms of the energy E using

δ(k_l − k_l′) = (dE/dk_l) δ(E − E′) = ħ v_l δ(E − E′),   (13)
and similarly for k_r. We then obtain

∫ ψ_l*(E, x) ψ_l(E′, x) dx = 2π ħ v_l δ(E − E′)   (14)
from equations (1).
William R. Frensley
Fri Jun 23 15:00:21 CDT 1995 |
c96001fc72b80122 | Friday, February 20, 2015
Many Worlds - a longer view
Here is the pre-edited version of my article for Aeon on the Many Worlds Interpretation of quantum theory. I’m putting it here not because it is any better than the published version (Aeon’s editing was as excellent and improving as ever), but because it gives me a bit more room to go into some of the issues.
In my article I stood up for philosophy. But that doesn’t mean philosophers necessarily get it right either. In the ensuing discussion I have been directed to a talk by philosopher of science David Wallace. Here he criticizes the Copenhagen view that theories are there to make predictions, not to tell us how the world works. He gets a laugh from his audience for suggesting that, if this were so, scientists would have been forced to ask for funding for the LHC not because of what we’d learn from it but so that we could test the predictions made for it.
This is wrong on so many levels. Contrasting “finding out about the world” against “testing predictions of theories” is a totally false opposition. We obviously test predictions of theories to find out if they do a good job of helping us to explain and understand the world. The hope is that the theories, which are obviously idealizations, will get better and better at predicting the fine details of what we see around us, and thereby enable us to tell ever more complete and satisfying stories about why things are this way (and, of course, to allow us to do some useful stuff for “the relief of man’s estate”). So there is a sense in which the justification for the LHC derided by Wallace is in fact completely the right one, although that would have been a very poor way of putting it. Almost no one in science (give or take the [very] odd Nobel laureate who capitalizes Truth like some religious crank) talks about “truth” – they recognize that our theories are simply meant to be good working descriptions of what we see, with predictive value. That makes them “true” not in some eternal Platonic sense but as ways of explaining the world that have more validity than the alternatives. No one considers Newtonian mechanics to be “untrue” because of general relativity. So in this regard, Wallace’s attack on the Copenhagen view is trivial. (I don’t doubt that he could put the case better – it’s just that he didn’t do so here.)
What I really object to is the idea, which Wallace repeats, that Many Worlds is simply “what the theory tells you”. To my mind, a theory tells you something if it predicts the corresponding states – say, the electrical current flowing through a circuit, or the reaction rate of an enzymatic process. Wallace asserts that quantum theory “predicts” a you seeing a live Schrödinger’s cat and a you seeing a dead one. I say, show me the equation where those “yous” appear (along with the universes they are in). The best the MWers can do is to say, well, let’s just denote those things as Ψ(live cat) and Ψ(dead cat), with Ψ representing the corresponding universes. Oh please.
Some objectors to my article have been keen to insist that the MWI really isn’t that bizarre: that the other “yous” don’t do peculiar things but are pretty much just like the you-you. I can see how some, indeed many, of them would be. But there is nothing to exclude those that are not, unless you do so by hand: “Oh, the mind doesn’t work that way, they are still rational beings.” What extraordinary confidence this shows in our ability to understand the rules governing human behaviour and consciousness in more parallel worlds than we can possibly imagine: as if the very laws of physics will make sure we behave properly. Collapsing the wavefunction seems a fairly minor sleight of hand (and moreover one we can actually continue to investigate) compared to that. The truth is that we know nothing about the full range of possibilities that the MWI insists on, nor can we ever do so.
One of the comments underneath my article – and others will doubtless repeat this – makes the remark that Many Worlds is not really about “many universes branching off” at all. Well, I guess you could choose to believe Anonymous Pete instead of Brian Greene and Max Tegmark, if you wish. Or you could follow his link to Sean Carroll’s article, which is one of the examples I cite in my piece of why MWers simply evade the “self” issue altogether.
But you know, my real motivation for writing my article is not to try to bury the MWI (the day I start imagining I am capable of such things, intellectually or otherwise, is the day to put me out to grass), but to provoke its supporters into actually addressing these issues rather than blithely ignoring them while bleating about the (undoubted) problems with the alternatives. Who knows if it will work.
In 2011, participants at a conference on the placid shore of Lake Traunsee in Austria were polled on what the conference was about. You might imagine that this question would have been settled before the meeting was convened – but since the subject was quantum theory, it’s not surprising that there was still much uncertainty. The conference was called “Quantum Physics and the Nature of Reality”, and it grappled with what the theory actually means. The poll, completed by 33 of the participating physicists, mathematicians and philosophers, posed a range of unresolved questions, one of which was “What is your favourite interpretation of quantum mechanics?”
Which interpretations did these experts favour? There were no fewer than 11 answers to choose from (as well as “other” and “none”). The most popular (42%) was the view put forward by Niels Bohr, Werner Heisenberg and their colleagues in the early days of quantum theory, now known as the Copenhagen Interpretation. In third place (18%) was the Many Worlds Interpretation (MWI).
You might not have heard of most of the alternatives, such as Quantum Bayesianism, Relational Quantum Mechanics, and Objective Collapse (which is not, as you might suppose, saying “what the hell”). Maybe you’ve not heard of the Copenhagen Interpretation either. But the MWI is the one with all the glamour and publicity. Why? Because it tells us that we have multiple selves, living other lives in other universes, quite possibly doing all the things that we dream of but will never achieve (or never dare). Who could resist that idea?
Yet you should. You should resist it not because it is unlikely to be true, or even because, since no one knows how to test it, the idea is not truly scientific at all. Those are valid criticisms, but the main reason you should resist it is that it is not a coherent idea, philosophically or logically. There could be no better contender for Wolfgang Pauli’s famous put-down: it is not even wrong.
Or to put it another way: the MWI is a triumph of canny marketing. That’s not some wicked ploy: no one stands to gain from its success. Rather, its adherents are like giddy lovers, blinded to the flaws beneath the superficial allure.
The measurement problem
To understand how this could happen, we need to see why, more than a hundred years after quantum theory was first conceived, experts are still gathering to debate what it means. Despite such apparently shaky foundations, it is extraordinarily successful. In fact you’d be hard pushed to find a more successful scientific theory. It can predict all kinds of phenomena with amazing precision, from the colours of grass and sky to the transparency of glass, the way enzymes work and how the sun shines.
This is because quantum mechanics, the mathematical formulation of the theory, is largely a technique: a set of procedures for calculating what properties substances have based on the positions and energies of their constituent subatomic particles. The calculations are hard, and for anything more complicated than a hydrogen atom it’s necessary to make simplifications and approximations. But we can do that very reliably. The vast majority of physicists, chemists and engineers who use quantum theory today don’t need to go to conferences on the “nature of reality” – they can do their job perfectly well if, in the famous words of physicist David Mermin, they “shut up and calculate”, and don’t think too hard about what the equations mean.
It’s true that the equations seem to insist on some strange things. They imply that very small entities like atoms and subatomic particles can be in several places at the same time. A single electron can seem to pass through two holes at once, interfering with its own motion as if it was a wave. What’s more, we can’t know everything about a particle at the same time: Heisenberg’s uncertainty principle forbids such perfect knowledge. And two particles can seem to affect one another instantly across immense tracts of space, in apparent (but not actual) violation of Einstein’s theory of special relativity.
But quantum scientists just accept such things. What really divides opinion is that quantum theory seems to do away with the notion, central to science from its beginnings, of an objective reality that we can study “from the outside”, as it were. Quantum mechanics insists that we can’t make a measurement without influencing what we measure. This isn’t a problem of acute sensitivity; it’s more fundamental than that. The most widespread form of quantum maths, devised by Erwin Schrödinger in the 1920s, describes a quantum entity using an abstract concept called a wavefunction. The wavefunction expresses all that can be known about the object. But a wavefunction doesn’t tell you what properties the object has; rather, it enumerates all the possible properties it could have, along with their relative probabilities.
Which of these possibilities is real? Is an electron here or there? Is Schrödinger’s cat alive or dead? We can find out by looking – but quantum mechanics seems to be telling us that the very act of looking forces the universe to make that decision, at random. Before we looked, there were only probabilities.
The Copenhagen Interpretation insists that that’s all there is to it. To ask what state a quantum entity is in before we looked is meaningless. That was what provoked Einstein to complain about God playing dice. He couldn’t abandon the belief that quantum objects, like larger ones we can see and touch, have well defined properties at all times, even if we don’t know what they are. We believe that a cricket ball is red even if we don’t look at it; surely electrons should be no different? This “measurement problem” is at the root of the arguments.
Avoiding the collapse
The way the problem is conventionally expressed is to say that measurement – which really means any interaction of a particle with another system that could be used to deduce its state – “collapses” the wavefunction, extracting a single outcome from the range of probabilities that the wavefunction encodes. But quantum mechanics offers no prescription for how this collapse occurs; it has to be put in by hand. That’s highly unsatisfactory.
There are various ways of looking at this. A Copenhagenist view might be simply to accept that wavefunction collapse is an additional ingredient of the theory, which we don’t understand. Another view is to suppose that wavefunction collapse isn’t just a mathematical sleight-of-hand but an actual, physical process, a little like radioactive decay of an atom, which could in principle be observed if only we had an experimental technique fast and sensitive enough. That’s the Objective Collapse interpretation, and among its advocates is Roger Penrose, who suspects that the collapse process might involve gravity.
Proponents of the Many Worlds Interpretation are oddly reluctant to admit that their preferred view is simply another option. They often like to insist that There Is No Alternative – that the MWI is the only way of taking quantum theory seriously. It’s surprising, then, that in fact Many Worlders don’t even take their own view seriously enough.
That view was presented in the 1957 doctoral thesis of the American physicist Hugh Everett. He asked why, instead of fretting about the cumbersome nature of wavefunction collapse, we don’t just do away with it. What if this collapse is just an illusion, and all the possibilities announced in the wavefunction have a physical reality? Perhaps when we make a measurement we only see one of those realities, yet the others have a separate existence too.
An existence where? This is where the many worlds come in. Everett himself never used that term, but his proposal was championed in the 1970s by the physicist Bryce DeWitt, who argued that the alternative outcomes of the experiment must exist in a parallel reality: another world. You measure the path of an electron, and in this world it seems to go this way, but in another world it went that way.
That requires a parallel, identical apparatus for the electron to traverse. More, it requires a parallel you to measure it. Once begun, this process of fabrication has no end: you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went. You avoid the complication of wavefunction collapse, but at the expense of making another universe. The theory doesn’t exactly predict the other universe in the way that scientific theories usually make predictions. It’s just a deduction from the hypothesis that the other electron path is real too.
This picture really gets extravagant when you appreciate what a measurement is. In one view, any interaction between one quantum entity and another – a photon of light bouncing off an atom – can produce alternative outcomes, and so demands parallel universes. As DeWitt put it, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies”.
Recall that this profusion is deemed necessary only because we don’t yet understand wavefunction collapse. It’s a way of avoiding the mathematical ungainliness of that lacuna. “If you prefer a simple and purely mathematical theory, then you – like me – are stuck with the many-worlds interpretation,” claims MIT physicist Max Tegmark, one of the most prominent MWI popularizers. That would be easier to swallow if the “mathematical simplicity” were not so cheaply bought. The corollary of Everett’s proposal is that there is in fact just a single wavefunction for the entire universe. The “simple maths” comes from representing this universal wavefunction as a symbol Ψ: allegedly a complete description of everything that is or ever was, including the stuff we don’t yet understand. You might sense some issues being swept under the carpet here.
What about us?
But let’s stick with it. What are these parallel worlds like? This hinges on what exactly the “experiments” that produce or differentiate them are. So you’d think that the Many Worlders would take care to get that straight. But they’re oddly evasive, or maybe just relaxed, about it. Even one of the theory’s most thoughtful supporters, Russian-Israeli physicist Lev Vaidman, seems to dodge the issue in his entry on the MWI in the Stanford Encyclopedia of Philosophy.
Vaidman stresses that every world has to be formally accessible from the others: it has to be derived from one of the alternatives encoded in the wavefunction of one of the particles. You could say that the universes are in this sense all connected, like stations on the London Underground. So what does this exclude? Nobody knows, and there is no obvious way of finding out.
I put the question directly to Lev: what exactly counts as an experiment? An event qualifies, he replied “if it leads to more than one ‘story’”. He added: “If you toss a coin from your pocket, does it split the world? Say you see tails – is there parallel world with heads?” Well, that was certainly my question. But I was kind of hoping for an answer.
Most popularizers of the MWI are less reticent. In the “multiverse” of the Many Worlds view, says Tegmark, “all possible states exist at every instant”. One can argue about whether that’s quite the same as DeWitt’s version, but either way the result seems to accord with the popular view that everything that is physically possible is realized in one of the parallel universes.
The real problem, however, is that Many Worlders don’t seem keen to think about what this means. No, that’s too kind. They love to think about what it means – but only insofar as it lets them tell us wonderful, lurid and beguiling stories. The MWI seduces us by multiplying our selves beyond measure, giving us fantasy lives in which there is no obvious limit to what we can do. “The act of making a decision”, says Tegmark – a decision here counting as an experiment – “causes a person to split into multiple copies.”
That must be a pretty big deal, right? Not for theoretical physicist Sean Carroll of the California Institute of Technology, whose article “Why the Many-Worlds formulation of quantum mechanics is probably correct” on his popular blog Preposterous Universe makes no mention of these alter egos. Oh, they are there in the background all right – the “copies” of the human observer of a quantum event are casually mentioned in the midst of the 40-page paper by Carroll that his blog cites. But they are nothing compared with the relief of not having to fret about wavefunction collapse. It’s as though the burning question about the existence of ghosts is whether they observe the normal laws of mechanics, rather than whether they would radically change our view of our own existence.
But if some Many Worlders are remarkably determined to avert their eyes, others delight in this multiplicity of self – because indeed, in some world, we have already done whatever we can dream of.
Most MWI popularizers think they are blowing our minds with this stuff, whereas in fact they are flattering them. They delve into the implications for personhood just far enough to lull us with the uncanniness of the centuries-old Doppelgänger trope, and then flit off again. The result sounds transgressively exciting while familiar enough to be persuasive.
Identity crisis
In what sense are those other copies actually “us”? Brian Greene, another prominent MW advocate, tells us gleefully that “each copy is you.” In other words, you just need to broaden your mind beyond your parochial idea of what “you” means. Each of these individuals has its own consciousness, and so each believes he or she is “you” – but the real “you” is their sum total. Vaidman puts the issue more carefully: all the copies of himself are “Lev Vaidman”, but there’s only one that he can call “me”.
““I” is defined at a particular time by a complete (classical) description of the state of my body and of my brain”, he explains. “At the present moment there are many different “Levs” in different worlds, but it is meaningless to say that now there is another “I”.” Yet it is also scientifically and, I think, logically meaningless to say that there is an “I” at all in his definition, given that we must assume that any “I” is generating copies faster than the speed of thought. A “complete description” of the state of his body and brain never exists.
What’s more, this half-baked stitching together of quantum wavefunctions and the notion of mind leads to a reductio ad absurdum. It makes Lev Vaidman a terrible liar. He is actually a very decent fellow and I don’t want to impugn him, but by his own admission it seems virtually inevitable that “Lev Vaidman” has in other worlds denounced the MWI as a ridiculous fantasy, and has won a Nobel prize for showing, in the face of prevailing opinion, that it is false. (If these scenarios strike you as silly or frivolous, you’re getting the point.) “Lev Vaidman” is probably also a felon, for there is no prescription in the MWI for ruling out a world in which he has killed every physicist who believes in the MWI, or alternatively, every physicist who doesn’t. “OK, those Levs exist – but you should believe me, not them!” he might reply – except that this very belief denies the riposte any meaning.
The difficulties don’t end there. It is extraordinary how attached the MWI advocates are to themselves, as if all the Many Worlds simply have “copies” leading other lives. Vaidman’s neat categorization of “I” and “Lev” works because it sticks to the tidy conceit that the grown-up “I” is being split into ever more “copies” that do different things thereafter. (Not all MWI descriptions will call this copying of selves “splitting” – they say that the copies existed all along – but that doesn’t alter the point.)
That isn't, however, what the MWI is really about – it's just a sci-fi scenario derived from it. As Tegmark explains, the MWI is really about all possible states existing at every instant. Some of these, it’s true, must contain essentially indistinguishable Maxes doing and seeing different things. Tegmark waxes lyrical about these: “I feel a strong kinship with parallel Maxes, even though I never get to meet them. They share my values, my feelings, my memories – they’re closer to me than brothers.”
He doesn't trouble his mind about the many, many more almost-Maxes, near-copies with perhaps a gene or two mutated – not to mention the not-much-like Maxes, and so on into a continuum of utterly different beings. Why not? Because you can't make neat ontological statements about them, or embrace them as brothers. They spoil the story, the rotters. They turn it into a story that doesn't make sense, that can't even be told. So they become the mad relatives in the attic. The conceit of “multiple selves” isn’t at all what the MWI, taken at face value, is proposing. On the contrary, it is dismantling the whole notion of selfhood – it is denying any real meaning of “you” at all.
Is that really so different from what we keep hearing from neuroscientists and psychologists – that our comforting notions of selfhood are all just an illusion concocted by the brain to allow us to function? I think it is. There is a gulf between a useful but fragile cognitive construct based on measurable sensory phenomena, and a claim to dissolve all personhood and autonomy because it makes the maths neater. In the Borgesian library of Many Worlds, it seems there can be no fact of the matter about what is or isn’t you, and what you did or didn’t do.
State of mind
Compared with these problems, the difficulty of testing the MWI experimentally (which would seem a requirement of it being truly scientific) is a small matter. ‘It’s trivial to falsify [MWI]’, boasts Carroll: ‘just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes.’ But most other interpretations of quantum theory assume them (at least) too – so an experiment like that would rule them all out, and say nothing about the special status of the MWI. No, we’d quite like to see some evidence for those other universes that this particular interpretation uniquely predicts. That’s just what the hypothesis forbids, you say? What a nuisance.
Might this all simply be a habit of a certain sort of mind? The MWI has a striking parallel in analytic philosophy that goes by the name of modal realism. Ever since Gottfried Leibniz argued that the problem of good and evil can be resolved by postulating that ours is the best of all possible worlds, the notion of “possible worlds” has supplied philosophers with a scheme for debating the issue of the necessity or contingency of truths. The American philosopher David Lewis pushed this line of thought to its limits by asserting, in the position called modal realism, that all worlds that are possible have a genuine physical existence, albeit isolated causally and spatiotemporally from ours. On what grounds? Largely on the basis that there is no logical reason to deny their existence, but also because accepting this leads to an economy of axioms: you don’t have to explain away their non-existence. Many philosophers regard this as legerdemain, but the similarities with the MWI of quantum theory are clear: the proposition stems not from any empirical motive but simply because it allegedly simplifies matters (after all, it takes only four words to say “everything possible is real”, right?). Tegmark’s so-called Ultimate Ensemble theory – a many-worlds picture not explicitly predicated on quantum principles but still including them – has been interpreted as a mathematical expression of modal realism, since it proposes that all mathematical entities that can be calculated in principle (that is, which are possible in the sense of being “computable”) must be real. Lewis’s modal realism does, however, at least have the virtue that he thought in some detail about the issues of personal identity it raises.
If I call these ideas fantasies, it is not to deride or dismiss them but to keep in view the fact that beneath their apparel of scientific equations or symbolic logic they are acts of imagination, of “just supposing”. Who can object to imagination? Not me. But when taken to the extreme, parallel universes become a kind of nihilism: if you believe everything then you believe nothing. The MWI allows – perhaps insists – not just on our having cosily familial ‘quantum brothers’ but on worlds where gods, magic and miracles exist and where science is inevitably (if rarely) violated by chance breakdowns of the usual statistical regularities of physics.
Certainly, to say that the world(s) surely can’t be that weird is no objection at all; Many Worlders harp on about this complaint precisely because it is so easily dismissed. MWI doesn’t, though, imply that things really are weirder than we thought; it denies us any way of saying anything, because it entails saying (and doing) everything else too, while at the same time removing the “we” who says it. This does not demand broad-mindedness, but rather a blind acceptance of ontological incoherence.
That its supporters refuse to engage in any depth with the questions the MWI poses about the ontology and autonomy of self is lamentable. But this is (speaking as an ex-physicist) very much a physicist’s blind spot: a failure to recognize, or perhaps to care, that problems arising at a level beyond that of the fundamental, abstract theory can be anything more than a minor inconvenience.
If the MWI were supported by some sound science, we would have to deal with it – and to do so with more seriousness than the merry invention of Doppelgängers to measure both quantum states of a photon. But it is not. It is grounded in a half-baked philosophical argument about a preference to simplify the axioms. Until Many Worlders can take seriously the philosophical implications of their vision, it’s not clear why their colleagues, or the rest of us, should demur from the judgement of the philosopher of science Robert Crease that the MWI is ‘one of the most implausible and unrealistic ideas in the history of science’ [see The Quantum Moment, 2014]. To pretend that the only conceptual challenge for a theory that allows everything conceivable to happen (or at best fails to provide any prescription for precluding the possibilities) is to accommodate Sliding Doors scenarios shows a puzzling lacuna in the formidable minds of its advocates. Perhaps they should stop trying to tell us that philosophy is dead.
|
3947100b5ebfc11b | The Dirac delta $\delta(x)$ is not the only "point-supported" potential that we can integrate; in principle all of its derivatives $\delta', \delta'', ...$ exist as well, don't they?
If yes, can we look for bound states in any of these $\delta'^{(n)}(x)$ potentials? Are there explicit formulae for them (and for the scattering states)?
To be more precise, I am asking for explicit solutions of the 1D Schroedinger equation with point potential,
$$- {\hbar^2 \over 2m} \Psi''(x) + a \ \delta'^{(n)}(x) \Psi(x) \ = E \Psi(x) $$
I should add that I have read of at least three sets of boundary conditions that are said to be particular solutions:
• $\Psi'(0^+)-\Psi'(0^-)= A \Psi(0)$ with $\Psi(0)$ continuous, is the zero-th derivative case, the "delta potential".
• $\Psi(0^+)-\Psi(0^-)= B \Psi'(0)$ with $\Psi'(0)$ continuous, was called "the delta prime potential" by Holden.
• $\lambda \Psi'(0^+)=\Psi'(0^-)$ and $\Psi(0^+)=\lambda\Psi(0^-)$ simultaneously, was called "the delta prime potential" by Kurasov.
The zero-th derivative case, $V(x)=a \delta(x)$, is a typical textbook example, pretty nice because it has only one bound state, for negative $a$, and acts as a kind of barrier for positive $a$. So it is interesting to ask about other values of $n$, and of course about the general case: does it offer more bound states or other properties? Is it even possible to consider $n$ beyond the first derivative?
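As a sanity check on that textbook case, the single bound state can be recovered numerically by regularizing the delta as a narrow normalized Gaussian and diagonalizing a finite-difference Hamiltonian; the ground-state energy should approach the known $E=-ma^2/2\hbar^2$. This is a minimal sketch (units $\hbar=m=1$; the grid and width parameters are arbitrary choices):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Regularize V(x) = a*delta(x), a < 0, as a narrow normalized Gaussian well,
# then diagonalize the three-point finite-difference Hamiltonian (hbar = m = 1).
a = -1.0                     # delta strength; exact bound state E = -a**2/2 = -0.5
L, N = 30.0, 6000            # half-width of the box and number of grid points
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

for sigma in (0.5, 0.1, 0.03):
    V = a * np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    diag = 1.0 / dx**2 + V              # kinetic diagonal plus potential
    off = np.full(N - 1, -0.5 / dx**2)  # kinetic off-diagonal
    E0 = eigh_tridiagonal(diag, off, eigvals_only=True,
                          select='i', select_range=(0, 0))[0]
    print(f"sigma = {sigma:5.2f}:  E0 = {E0:+.4f}   (exact: {-a**2/2:+.4f})")
```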
Related questions
(If you recall a related question, feel free to suggest it in the comments, or even to edit directly if you have privileges for it)
For the delta prime, including velocity-dependent potentials, the question has been asked in How to interpret the derivative of the Dirac delta potential?
In the halfline $r>0$, the delta is called the "Fermi Pseudopotential". As of today I cannot see questions about it, but Classical limit of a quantum system seems to be the same potential.
A general way of dealing with boundary conditions is via the theory of self-adjoint extensions of hermitian operators. This case is not very different from the "particle in 1D box", question Why is $ \psi = A \cos(kx) $ not an acceptable wave function for a particle in a box? A general take was the question Physical interpretation of different selfadjoint extensions. A related but very exotic question is What is the relation between renormalization and self-adjoint extension?, because obviously the point-supported interactions have a peculiar scaling.
Of course upgrading distributions to act as operators in $L^2$ is delicate, and it gets worse for derivatives of distributions when you consider their evaluation $<\phi | \rho(x) \psi>$. Consider the case $\rho(x) = \delta'(x)$, acting loosely as $\delta(x) {d\over dx}$. Should the derivative apply to $\psi$ only, or to the product $\phi^*\psi$?
• It's difficult to come up with boundary conditions on the barrier. Integrating both sides of the equation infinitesimally gives $-\frac{\hbar^2}{2m}[\Psi^{'}_{+}(0)-\Psi^{'}_{-}(0)]=a\Psi^{'}(0)$, but I'm not sure how to interpret that. – Jahan Claes Aug 20 '15 at 5:02
• @JahanClaes I am not sure of the $a=0$ case, because of the $0 \cdot \infty$ indeterminacy, but yes, for $\delta'^{(0)}$ and $a \neq 0$ this is the usual argument in textbooks: the equation amounts to requiring the boundary conditions $\Psi'(0^+)-\Psi'(0^-) \propto \Psi(0)$. For $n > 0$, if the first derivative is continuous then your formula leads to a condition $\Psi'^{(n)}=0$, and I agree that it is unclear how to interpret it, and whether it is the most general solution. – arivero Aug 20 '15 at 16:04
• Yeah, but of course I'm not sure we can require that $\Psi^{'}$ is continuous at 0, since it isn't for a $\delta$-function potential. – Jahan Claes Aug 20 '15 at 17:24
• Are you sure the hack with the "boundary conditions" can be made to work for general distributional potentials? I mean, the wavefunctions are technically functions in $L^2$, which are only defined up to a zero-measure set, so you can't evaluate them at points. Then again, a delta function can't even properly act on $L^2$ functions. So, what is the space of functions this Schrödinger equation is supposed to be operating on? I guess you could try to sanitize the $\delta$-case by representing it as a limit of sharply peaked Gaussians, but can you represent the derivatives in such a way? – ACuriousMind Aug 22 '15 at 22:40
• @ACuriousMind At the very least, we know any solution has to be a sinusoid/exponential away from $x=0$, so all that we need are boundary conditions. This is not to say, of course, that the same trick will work to get the boundary conditions. – Jahan Claes Aug 22 '15 at 23:13
Ok, I have a solution for $\delta'(x)$ based on a very crude limit. I'm going to neglect factors of $\hbar$, $m$, etc for the sake of eliminating clutter.
Let $V_\epsilon(x)=\frac{\delta(x+\epsilon)-\delta(x)}{\epsilon}$. Then $\lim_{\epsilon\rightarrow 0}V_\epsilon(x)=\delta'(x)$. We'll solve the Schrodinger equation for finite $\epsilon$ and then take the limit afterwards.
We have $\Psi''(x) = (V_\epsilon(x)-E)\Psi(x)$. If we want a bound solution, we must have $E<0$. Then in the ranges $[-\infty,0),(0,\epsilon),(\epsilon,\infty]$ we must have that $\Psi$ is some exponential function. In other words, $$ \Psi(x) =\left\{\begin{array}{ll} e^{\sqrt{-E}x} &x\in [-\infty,0)\\ Ae^{\sqrt{-E}x}+Be^{-\sqrt{-E}x} &x\in (0,\epsilon)\\ Ce^{-\sqrt{-E}x}&x\in (\epsilon,\infty]\\ \end{array}\right\} $$
From the fact that $\Psi$ must be continuous, we can replace $B$ and $C$ in terms of $A$ to get $$ \Psi(x) =\left\{\begin{array}{ll} e^{\sqrt{-E}x} &x\in [-\infty,0)\\ Ae^{\sqrt{-E}x}+(1-A)e^{-\sqrt{-E}x} &x\in (0,\epsilon)\\ (1-A+Ae^{2\sqrt{-E}\epsilon})e^{-\sqrt{-E}x}&x\in (\epsilon,\infty]\\ \end{array}\right\} $$
We can also write down the derivative of $\Psi$.
$$ \Psi'(x) =\left\{\begin{array}{ll} \sqrt{-E}e^{\sqrt{-E}x} &x\in [-\infty,0)\\ A\sqrt{-E}e^{\sqrt{-E}x}+(A-1)\sqrt{-E}e^{-\sqrt{-E}x} &x\in (0,\epsilon)\\ (A-1-Ae^{2\sqrt{-E}\epsilon})\sqrt{-E}e^{-\sqrt{-E}x}&x\in (\epsilon,\infty]\\ \end{array}\right\} $$
Using the normal method of finding boundary conditions at a $\delta$ function barrier, we have that
$$ \begin{array}{c} \Psi'_{+}(0)-\Psi'_{-}(0) = \Psi(0) \\ \Psi'_{+}(\epsilon)-\Psi'_{-}(\epsilon) = \Psi(\epsilon) \end{array} $$
The first boundary condition gives us
$$ (2A-1)\sqrt{-E} = \frac{1}{\epsilon}$$ or $$ A=\frac{1}{2\epsilon\sqrt{-E}}+\frac{1}{2} $$
The second boundary condition gives us
$$ -2A\sqrt{-E}e^{\sqrt{-E}\epsilon} = -\frac{1}{\epsilon}[Ae^{\sqrt{-E}\epsilon}+(1-A)e^{-\sqrt{-E}\epsilon}] $$ or $$ A=\frac{1}{e^{2\sqrt{-E}\epsilon}(2\epsilon\sqrt{-E}-1)+1} $$
Putting the two conditions together gives us a constraint on $E$.
We can expand both sides in a Laurent series to first order in $\epsilon$. The left hand side is already expanded. The right hand side becomes (to first order)
$$ \frac{1}{(2\epsilon\sqrt{-E}+1)(2\epsilon\sqrt{-E}-1)+1}=\frac{1}{4\epsilon^2(-E)} $$
The two sides of the equation are then impossible to match in the limit $\epsilon\rightarrow 0$ since they occur at different orders in $\frac{1}{\epsilon}$. Thus, in that limit, no solution for $E$ exists, and so there is no bound state.
I'm sure there's some algebra mistakes in all that mess, but that's the general idea. You could do the same algebra to look at scattering states, if you wanted. One could also apply this method to higher derivatives. For example, $\delta''(x)=\lim_{\epsilon\rightarrow 0}\frac{\delta(x+2\epsilon)-2\delta(x+\epsilon)+\delta(x)}{\epsilon^2}$. Of course, each higher derivative demands one more boundary to account for, so the problem gets correspondingly messier. But doable, in principle.
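For the record, a crude numerical probe of this limit is easy to set up: regularize $\delta'(x)$ as a pair of opposite Gaussian spikes of strength $\pm 1/\epsilon$ separated by $\epsilon$, and track the lowest eigenvalue as $\epsilon$ shrinks. Each finite regularization does have bound states, but their energy should run away rather than converge. A rough sketch, assuming spikes of width $\sigma \ll \epsilon$ and arbitrary grid parameters ($\hbar=m=1$):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_state_energy(eps, sigma=0.02, L=15.0, N=6000):
    """Lowest eigenvalue of -(1/2) d2/dx2 + [g(x+eps) - g(x)]/eps, where g is
    a normalized Gaussian spike; the potential tends to delta'(x) as eps -> 0."""
    x = np.linspace(-L, L, N)
    dx = x[1] - x[0]
    g = lambda x0: np.exp(-(x - x0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    V = (g(-eps) - g(0.0)) / eps
    diag = 1.0 / dx**2 + V
    off = np.full(N - 1, -0.5 / dx**2)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

for eps in (0.8, 0.4, 0.2, 0.1):
    print(f"eps = {eps:4.2f}:  E0 = {ground_state_energy(eps):10.3f}")
# The lowest eigenvalue deepens roughly like -1/(2*eps**2), i.e. the bound
# state of the isolated strength-1/eps spike, instead of converging to a
# finite value: no bound state survives the eps -> 0 limit.
```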
• Kind of a Dirac hairpin instead of a Dirac comb. I like the approach, especially for higher derivatives of the delta... it is good that there are more boundary conditions, because as one increases $n$, one of the puzzles was how to stay only with 2nd order conditions. – arivero Aug 23 '15 at 3:15
• I have started to review the bibliography, and there was an attempt to solve the $\epsilon \to 0$ conflict by using the "Strong Resolvent Limit" of the sequence of hamiltonians. Whatever it is, the result was not encouraging; it amounts to Dirichlet conditions. – arivero Aug 23 '15 at 19:14
• +1, but one can really just take the "normal" delta derivatives to realize there is no way to match higher derivatives of two exponentials falling off on both sides (aka bound states). This also makes physical sense when you think of the deltas in terms of multipolar expansions; the absence of the mere $\delta$ means that the source has no "overall monopole", i.e. that it has no "overall binding potential". – Void Aug 23 '15 at 19:22
• @JahanClaes When you investigate the sources of the usual Laplace equation, you will find that the source of a monopole is a delta, of a dipole a delta-derivative, and so on. Now, by a loose hand-wavy argument, the Schrödinger equation in the $x$-representation is somehow a weird Laplace equation (at least it is linear and second order) and we can represent the effect of the "source" $V(x)$ by a sequence of $\delta, \delta',...$ very similarly to the expansion of the source $\rho(x)$ in the Laplace equation. However, I am not sure about the precise applicability of such an expansion. – Void Aug 23 '15 at 20:52
• @JahanClaes Have you looked into the effect of the $1/\epsilon$ factor in the potential? I was playing around with the second derivative as a square well surrounded by two barriers. The heights of the barriers are proportional to the cube of the widths. In that case the wells become impenetrable as the width goes to zero. I must add that I haven't completed the analysis. – John M. Cavallo Aug 24 '15 at 14:51
Let's simplify things: $\hbar=m=1$, and we put in the $\delta^{(n)}$ with strength $V_0$, so the problem reads $$-\psi'' - V_0\delta^{(n)}\psi = E \psi$$ Apart from $x=0$ the equation gives $$\psi'' = -E \psi $$ We want a bound state, $E<0$, which falls off at infinity, so we get a solution $\psi_+ = A \exp(-k x)$ for $x>0$ and $\psi_- = A \exp(kx)$ for $x<0$, with $k = \sqrt{-E}$. Now we need to sew the solution together around $x=0$. The stated unique nontrivial solution has $\psi \sim$ cusp, $\psi' \sim$ jump, $\psi'' \sim -\delta$, $\psi''' \sim -\delta'$,... around $x=0$. When plugging the nontrivial $\psi_-,\psi_+$ solution into our dynamical equation (note that for $\psi = A e^{-k|x|}$ one has $\psi'' = k^2\psi - 2kA\,\delta$) we find $$2k A \delta - A V_0 \delta^{(n)} = 0$$ which gives us
1. No solution for $n \neq 0$
2. A single bound state $E=-V_0^2/4$ for $n=0$, since the matching fixes $k=V_0/2$; $A$ is then determined by wave-function normalization.
Now you see that there is really no bound state for $\delta$-derivative potentials, at least in one dimension. As already somewhat touched upon in the comments, this can also be seen from the fact that all $\delta$-derivatives look like "multi-peaks" of some kind, without any "overall" binding.
To better understand what I mean, consider $\delta'$, which can be obtained by a limiting process from the derivative of a Gaussian (figure omitted: the derivative of a narrow Gaussian, a positive peak next to a negative well).
It is then somewhat physically intuitive that even though there are bound states in the well on the right, they are somehow eliminated by the infinite squeezing of this double peak.
• Hmm, physics.stackexchange.com/questions/143630/… — given that if $\int V(x) < 0$ we are guaranteed the existence of bound states, I am under the impression that we are dealing with some kind of borderline case... perhaps with an $E=0$ state? – arivero Aug 26 '15 at 1:11
• I am not saying that every function of the limiting process does not have bound states; it does, and that would be easy to prove. It's just that the full $\delta$-derivative does not, because the $\psi_+,\psi_-$ state can essentially do only a cusp, corresponding to a $\delta$ potential. (Also note that, unlike in the theorem you cite, the limiting $V(x)$ is not non-negative, $\int V(x) = 0$ which is not $<0$, and we are dealing with distributions here, so statements such as $V(x) \gtrless 0$ have no good meaning.) – Void Aug 26 '15 at 8:39
The article by D. J. Griffiths assumes that the delta interaction can be approximated by a sequence of even functions, and then infers two boundary conditions: $$ \Psi'(0^+)-\Psi'(0^-)= (-1)^n {m c \over \hbar^2} (\Psi'^{(n)}(0^+)+\Psi'^{(n)}(0^-))$$ $$ \Psi(0^+)-\Psi(0^-)= (-1)^{n-1} {m c \over \hbar^2} n (\Psi'^{(n-1)}(0^+)+\Psi'^{(n-1)}(0^-))$$
My own take proceeds differently. Integrating $$- {\hbar^2 \over 2m} \Psi''(x) + a \ V(x) \Psi(x) \ = \lambda \Psi (x) $$ from $-\epsilon < 0$ to $u$ we get
$$- {\hbar^2 \over 2m} (\Psi'(u) -\Psi'(-\epsilon)) + a \int_{-\epsilon}^u V(x) \Psi(x)\, dx \ = \int_{-\epsilon}^u \lambda \Psi (x)\, dx $$
Integrating again over $u$ from $-\epsilon < 0$ to $v$, $$- {\hbar^2 \over 2m} (\Psi(v)-\Psi(-\epsilon) - (v+\epsilon) \Psi'(-\epsilon)) + a \int_{-\epsilon}^v du \int_{-\epsilon}^u dx \ V(x) \Psi(x) \ = \int_{-\epsilon}^v du \int_{-\epsilon}^u dx \ \lambda \Psi (x) $$
the first integration gives the boundary condition in the limit $$- {\hbar^2 \over 2m} (\Psi'(0^+) -\Psi'(0^-)) + a <\rho |\Psi(x)> \ = 0 $$
For the second equation, instead of going through multiple integrations by parts for each derivative, I think we can use integration by parts just one time:
$$ {d\over du} ( u \int_{-\epsilon}^u dx \ V(x) \Psi(x)) = \int_{-\epsilon}^u dx \ V(x) \Psi(x) + u V(u) \Psi(u) $$
$$ ( u \int_{-\epsilon}^u dx \ V(x) \Psi(x))|^v_{-\epsilon} = \int_{-\epsilon}^v\int_{-\epsilon}^u dx \ V(x) \Psi(x) +\int_{-\epsilon}^v u V(u) \Psi(u) $$
So that the limit of the second integration produces the boundary condition
$$- {\hbar^2 \over 2m} (\Psi(0^+) -\Psi(0^-)) - a <\rho |x \Psi(x)> \ = 0 $$
It is immediately visible that for $\rho = \delta'^{(n)}(x)$ my result differs from Griffiths' in the sign alternance $(-1)^n$. This is the only discrepancy (the $n$ in the second condition appears naturally, given that $< \delta'^{(n)}(x) | x f(x)>$ is just the delta applied to $x f'^{(n)} + n f'^{(n-1)}$), so it could simply be some issue in the definition of the $n$-th derivative of the delta.
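As an aside, the distributional bookkeeping behind that factor $n$ and the $(-1)^n$ alternance can be checked symbolically: SymPy can usually integrate products with derivatives of DiracDelta directly. A small sketch, with a generic placeholder function $f$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')

for n in range(1, 4):
    # <delta^(n) | f>  and  <delta^(n) | x f>
    bra_f = sp.integrate(sp.DiracDelta(x, n) * f(x), (x, -sp.oo, sp.oo))
    bra_xf = sp.integrate(sp.DiracDelta(x, n) * x * f(x), (x, -sp.oo, sp.oo))
    print(n, sp.simplify(bra_f), sp.simplify(bra_xf))
# Expected, by n-fold integration by parts:
#   <delta^(n) | f>   = (-1)**n * f^(n)(0)
#   <delta^(n) | x f> = (-1)**n * d^n[x f]/dx^n at 0 = (-1)**n * n * f^(n-1)(0),
# which is the source of both the factor n and the (-1)**n alternance.
```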
In any case, there are more evident objections against the result: it requires keeping track of the regularisation to be sure that all the distributions apply to $n$-th derivatives by averaging left and right; it does not cover all the possible boundary conditions of a point interaction – well, it was not expected to cover all – and, worst of all to me, it introduces boundary conditions with derivatives greater than the first, when we are simply solving a second-order differential equation. Question: are such boundary conditions compatible with a self-adjoint Hamiltonian? I would think they are not.
Let's now look at the scattering matrix. We apply the boundary conditions to a function $\psi_k(x<0)=e^{ikx}+R e^{-ikx}$, $\psi_k(x>0)=T e^{ikx}$, so that $$\psi'^{(n)}_k(0^-)=(ik)^n (1+(-1)^n R), \qquad \psi'^{(n)}_k(0^+)=T (ik)^n$$
The BC, for $n \geq 1$, solve to:
$$- {\hbar^2 \over 2m} ik (T-1+R) + \frac a2 (ik)^n (T+1+(-1)^n R) \ = 0 $$ $$- {\hbar^2 \over 2m} (T-1-R) + \frac a2 n (ik)^{n-1} (-T-1+(-1)^n R) \ = 0 $$
while for the usual delta, $n=0$,
$$- {\hbar^2 \over 2m} ik (T-1+R) + \frac a2 (T+1+ R) \ = 0 $$ $$- {\hbar^2 \over 2m} (T-1-R) \ = 0 $$
In this case if we solve for the transmission coefficient:
$$ T = {ik \over ik- {a m /\hbar^2 } } $$
and see that it has a pole at $ik = a m / \hbar^2$, corresponding to the bound state. Not all poles are bound states, but all bound states are poles, so this technique can be useful to extract information also in the $n>1$ case.
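For $n=0$ this pole–bound-state correspondence can be verified symbolically from the two boundary conditions just quoted; a minimal SymPy sketch:

```python
import sympy as sp

hbar, m, k = sp.symbols('hbar m k', positive=True)
a = sp.symbols('a', real=True)
T, R = sp.symbols('T R')
ik = sp.I * k

# The two n = 0 matching conditions quoted above
eq1 = sp.Eq(-hbar**2/(2*m) * ik * (T - 1 + R) + (a/2) * (T + 1 + R), 0)
eq2 = sp.Eq(-hbar**2/(2*m) * (T - 1 - R), 0)
sol = sp.solve([eq1, eq2], [T, R], dict=True)[0]
print(sp.simplify(sol[T]))   # equivalent to ik / (ik - a*m/hbar**2), as stated
# The pole ik = a*m/hbar**2 means k = i*kappa with kappa = -a*m/hbar**2, so
# E = -hbar**2*kappa**2/(2*m) = -m*a**2/(2*hbar**2): the familiar delta-well
# bound state, present for attractive a < 0 (kappa > 0).
```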
Ah, note that conservation of the probability current implies the extra requirement $T^2+R^2=1$. This can be used to check whether the solution is consistent. It is easy to check that this condition will not work for $n>1$, as in this case we can pivot over $ik$ in both equations to produce the extra constraint
$$n (T-1+R)(-T-1+(-1)^n R) = (T-1-R)(T+1+(-1)^n R) $$
which reduces to: $$n(-T^2+(-1)^n TR+1-(-1)^n R-TR-R+(-1)^n R^2) = (+T^2+(-1)^n TR-1-(-1)^n R-RT-R-(-1)^n R^2) $$
for $n$ even
$$n(-T^2+1-2R+ R^2) = (+T^2-1-2R- R^2) $$
for $n$ odd $$n(-T^2- 2 TR+1- R^2) = (+T^2- 2 TR-1+ R^2) $$
To see the incompatibility with conservation of probability, we impose this condition simultaneously with $R^2+T^2=1$ and get
• for $n$ even: $n(-1+ R)R = (-1- R)R $, and then $R$ and $T$ would have to be independent of $k$, which they cannot be if $n>1$
• for $n$ odd: $nTR = TR $, and either $RT=0$ or $n=1$; but either $R$ or $T$ equal to zero fixes the other to 1, and then again they are independent of $k$, which they cannot be if $n>1$
In conclusion, Griffiths' approach allows one to make sense of all the derivatives of the delta, but the resulting boundary conditions leak probability for $n>1$.
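One way to exhibit the leak concretely is to solve the two boundary conditions above as a linear system for a specific case, say $n=3$, and evaluate $|T|^2+|R|^2$ as a function of $k$. A minimal numerical sketch ($\hbar=m=1$; the strength $a=0.5$ is an arbitrary choice):

```python
import numpy as np

def TR(k, n=3, a=0.5):
    """Solve the two matching conditions above for T and R (hbar = m = 1)."""
    ik = 1j * k
    s = (-1.0)**n
    # Rearranged as M @ [T, R] = b
    M = np.array([
        [-0.5*ik + 0.5*a*ik**n,       -0.5*ik + 0.5*a*s*ik**n],
        [-0.5 - 0.5*a*n*ik**(n-1),     0.5 + 0.5*a*s*n*ik**(n-1)],
    ])
    b = np.array([-0.5*ik - 0.5*a*ik**n,
                  -0.5 + 0.5*a*n*ik**(n-1)])
    return np.linalg.solve(M, b)

for k in (0.5, 1.0, 2.0):
    T, R = TR(k)
    print(f"k = {k:3.1f}:  |T|^2 + |R|^2 = {abs(T)**2 + abs(R)**2:.4f}")
# For n = 3 the sum departs from 1 and varies with k (roughly 1.8 at k = 1
# and 0.7 at k = 2 for a = 0.5): these boundary conditions do not conserve
# probability.
```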
• And yet, probability leaking, or non-unitary evolution, could be useful to describe unstable situations. So the deltas produce an interesting family of one-point sinks. – arivero Aug 29 '15 at 16:26
|
6d3b5ede983d9a26 | Alternative Multipole Expansion of the Electron Correlation Term
An alternative multipole expansion of the correlation term is derived. Modified spherical Bessel-type functions, which simplify to a summation of multiple orders of basic trigonometric functions, are generated from this new method. We use this new expansion to obtain useful insights into the electron-electron interaction. An analytic expression for the electronic correlation term is suggested. Also, a pseudopotential for helium-like systems is derived from this alternative expansion, and some reasonable eigenvalues for the ground state and two autoionizing levels of the helium atom are provided as a test of the efficiency of this solution approach. With some additional corrections beyond the non-relativistic limit, a helium atom groundstate energy of is obtained using the analytical form derived from this method and the Slater determinant expansion of the wavefunction.
I Introduction
Helium atom and helium-like ions are the simplest many-body systems containing two electrons which interact among themselves in addition to their interaction with the nucleus. The two-electron systems are therefore the ideal candidates for studying the electron correlation effects.
The non-relativistic Hamiltonian of a two-electron system with a nuclear charge $Z$ is given, in atomic units, by $$H = -\frac{1}{2}\left(\nabla_1^2 + \nabla_2^2\right) - Z\left(\frac{1}{r_1} + \frac{1}{r_2}\right) + \frac{1}{r_{12}},$$
where the first term corresponds to the sum of the kinetic energies of the two electrons, the second term to the sum of the interactions between each of the electrons and the nucleus, and the last term to the electron correlation interaction between the two electrons. The second and the last terms form the potential energy function of a bound two-electron system.
If the Hamiltonian is used to solve the time-independent Schrödinger equation $$H\Psi = E\Psi$$
for any eigenstate of the system, the eigenenergies for the particular state are obtained. The major problem in many-body systems is the correlation term, coupled with the fact that the wavefunction of the system is never exactly known, which complicates the reduction of the Schrödinger equation of the many-body system to a single-particle equation. This makes the solution of the eigenvalue problem difficult. One therefore has to rely on approximation methods in trying to solve such a problem in order to obtain the correct eigenenergies and eigenvectors, which may be useful for further estimation of many physical parameters like transition matrices, expectation values, polarizabilities and many others.
Difficult theoretical approaches have been used in the past in dealing with the electron correlation problem. Some of these approaches include the variational Hylleraas method Hylleraas (1929); Drake (1999), the coupled channels method Barna and Rost (2003), the configuration interaction method Hasbani et al. (2000), and the explicitly correlated basis and complex scaling method Scrinzi and Piraux (1998). At present only the Hylleraas method, which includes the interelectronic distance as an additional free co-ordinate, yields the known absolute accuracy of the groundstate energy of the helium atom Pekeris (1959). Configuration interaction methods have also been proved to be accurate, but they are quite expensive computationally. To overcome this computational challenge, especially for really large systems, single active electron (SAE) methods become advantageous, but they also require some approximations in developing the model potentials Parker et al. (2000, 1998) which can further be used to generate the eigenvectors and energies. The development of SAE models has become an active field of study taking different approximations Chang and Fano (1976) like the independent particle approximation (IPA), multi-configurational Hartree-Fock (HF) Szabo and Ostlund (1996), density functional theory (DFT) Kohn and Sham (1965), the random phase approximation (RPA) Linderberg (1980), and many others. The major limitation of SAE approximations is the inability to explain multiple-electron features like double excitation, simultaneous excitation and ionization, and double ionization, but progress is being made towards the realization of these features.
In this paper, an alternative multipole expansion is proposed. Based on this expansion, new modified spherical Bessel-type functions are generated. In addition, we suggest an analytic expression
to describe the electron-electron interaction term.
II The Alternative Multipole Expansion
The correlation term can be written as $$\frac{1}{r_{12}} = \frac{1}{\sqrt{r_<^2 + r_>^2 - 2 r_< r_> \cos\theta_{12}}}, \qquad (4)$$
where $r_<$ ($r_>$) corresponds to the lesser (greater) electronic radial distance between the two electrons. In Legendre polynomials, equation (4) is conventionally expressed as Bethe and Salpeter (1957) $$\frac{1}{r_{12}} = \sum_{\ell=0}^{\infty} \frac{r_<^{\ell}}{r_>^{\ell+1}} P_{\ell}(\cos\theta_{12}), \qquad (5)$$ where $P_{\ell}$ are the Legendre polynomials of order $\ell$ and $\theta_{12}$ is the relative angle between the electron position vectors.
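As a numerical aside, the conventional expansion (5) is straightforward to verify term by term against the closed form (4); the following minimal Python sketch (with arbitrarily chosen radii and relative angle) does so:

```python
import numpy as np
from scipy.special import eval_legendre

# Verify the Legendre expansion of 1/r12 at an arbitrary two-electron geometry.
r1, r2, theta12 = 1.0, 2.5, 0.8
r_lt, r_gt = min(r1, r2), max(r1, r2)
exact = 1.0 / np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(theta12))

series = sum((r_lt**l / r_gt**(l + 1)) * eval_legendre(l, np.cos(theta12))
             for l in range(60))
print(series, exact)   # agree to machine precision, since r_lt/r_gt < 1
```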
In the alternative framework, the correlated term
in equation (4) is expressed in a binomial expansion, similar to the Gegenbauer polynomial expansion Gbur (2011); Abramowitz and Stegun (1965), with the relevant functions defined accordingly. Ideally, this is the point of departure from equation (5), where the expansion of the correlated term is done as a summation over a different index. The next step involves re-writing the expansion
with the coefficients written as functions of the Legendre polynomials, whose symmetry relations are of practical significance in the simplification of integrals in spherical co-ordinates. The coefficients have an intrinsic connection between their index and that of the Legendre polynomials, with one form for even orders and another for odd orders. The exact recursive pattern for the coefficients is subject to further investigation. Below, we present the pattern
corresponding to the orders considered but generalized for all values. Substituting the variables into equation (7) and simplifying leads to
The correlation interaction in equation (4) can be expressed as a multipole summation series
where are the spherical harmonics and
are the corresponding modified spherical Bessel-type functions. If one considers that , and using the trigonometric relations and , the modified spherical Bessel-type functions simplify to
The properties of the modified spherical Bessel-type functions presented here need to be investigated further. Intuitively, we think that they belong to the family of the hyperspherical functions, which usually have some recurrence relations. Equation (12) integrates the two electron co-ordinates as a correlated pair with and where is the distance between the two interacting electrons, equivalent to the hypotenuse of a right-angled triangle formed by the orthogonal vectors and .
The first four orders of the modified spherical Bessel-type functions, each with the first four terms of the expansion, are:
If one considers only the first term of each modified spherical Bessel-type function, then the correlation term can be expressed as
Our analytical expression in equation (3) is obtained from an intuitive consideration of this alternative multipole expansion series. The simplification using trigonometry in equation (12) implies that the two interacting electrons are mutually orthogonal to each other, as expected from the principles of quantum mechanics. This geometry simplifies the correlation interaction further to
which needs to be disentangled further. The proposed alternative multipole expansion, or any other method, can be used to approximate this coupled interaction while employing the fact that the vectors are orthogonal to each other in order to simplify the problem. Equation (15) is exactly similar to the hyperradius definition introduced by Macek Macek (1968) in hyperspherical method. As opposed to the hyperspherical method in which the Hamiltonian of the two-electron system is expressed in terms of the hyperradius and the hyperangles Macek (1968), in this work we introduce separability of the Hamiltonian leading to an independent particle approximation solution to the Schrödinger equation but with the correlation effects fully embedded into the single electron Hamiltonian.
III Helium-like System Pseudopotential
Using the alternative multipole expansion, we developed the non-relativistic helium-like system pseudopotential
for the independent particle Hamiltonian, where the first term is the interaction between the active electron and the nuclear charge , and the second is the central screening potential resulting from the other electron, given by equation (15). The factor is based on the assumption that the correlation energy is shared equally between the two correlated electrons. This assumption should be accurate if the two electrons have identical quantum states (or identical principal quantum numbers). We have considered the two electrons to be indistinguishable, correlated, and likely to exchange their relative positions.
Minimising the potential function in equation (16), by differentiating it with respect to either of the radial co-ordinates and equating the derivative to zero, yields the relation
which introduces separability of the correlated term. We have used equation (17) as the screening potential in equation (16) to solve the time independent Schrödinger equation using an independent particle model
where the two-electron wavefunction has been expanded in terms of Slater-type orbitals, and the indices define the set of quantum numbers corresponding to any particular state. The first term of equation (18) emanates from the direct integral, where no electron exchange is involved, while the second term is the exchange integral, which is non-vanishing only if . The interaction Hamiltonian
is defined for each independent electron with the index taking integer values. The effective potential is a summation
of some of the several terms of interaction drawn from equation () of Bethe and Salpeter Bethe and Salpeter (1957). Here we have explicitly mentioned and simplified further only the interactions that have been included in this work. The first is the non-relativistic potential term
evaluated using equations (16) and (17), and it incorporates the electron correlation term. The spin-spin interaction correction term can be simplified as
having used equation (17), and where is the reciprocal of the fine structure constant (). Considering the current definition of the electron correlation term, the first term of this spin-spin interaction as defined in equation (39.14) of Bethe and Salpeter Bethe and Salpeter (1957) vanishes because of the boundary conditions of the wavefunction and the Dirac delta condition. The approximation in equation (22) is based on the classical argument that the spin product takes a fixed value instead of the quantum mechanically prescribed value for the singlet states. This is equivalent to considering only a third of the singlet spin-spin interaction term value, because the spins are assumed to be aligned parallel or antiparallel to one particular direction.
The term
is a characteristic of the Dirac theory with the potential function already defined in equation (16). The classical relativistic correction
applies to the interaction between electrons; it vanishes if . This term reduces to
if it is separated for each of the individual electron co-ordinates. The finite mass correction term
has been obtained from reference Bransden and Joachain (1990), with denoting the Hamiltonian of the system without the finite mass correction and the electron-to-nucleus mass ratio. The scalar function in
is a fitting function optimized to offer the additional correction for the ionic systems but vanishes for the helium atom. The adjustable parameter yields good quantitative agreement with experimental results for the groundstate energies of ionic systems investigated.
We have used the Hamiltonian as defined in equation (19) and diagonalized it in a B-spline spectral basis set having a box radius of au, B-splines of order , and a non-linear knot sequence. As already stated, the goal was to test the efficiency of the method proposed in this work. With the analytical expression for the electron correlation term, it was also found desirable to include some corrections to the Schrödinger equation for two-electron systems that could be evaluated without further complexity. The inclusion of the correction terms also shows the relative importance of the additional interactions as compared to the non-relativistic terms.
The non-relativistic eigenvalue for the groundstate energy of helium resulting from this method is in good agreement with the experimental value, as shown in table 1. Furthermore, the discrepancy between the experimental ground state potential and the obtained theoretical non-relativistic value is properly accounted for by including some of the correction terms like spin-spin coupling, the classical relativistic correction, the characteristic Dirac theory term, and the finite mass correction term. We can therefore consider the theoretical value from our calculations to be the correct non-relativistic threshold groundstate energy for the helium atom. The very accurate groundstate energy as calculated by the Hylleraas method Drake (1999), from this hypothesis, includes all the corrections beyond the non-relativistic energy. This explanation may be justified based on the fact that the accurate value obtained using the Hylleraas method is very close to the experimentally obtained values of Bergeson et al. Bergeson et al. (1998) and Eikema et al. Eikema et al. (1997). The experimental values are expected to incorporate all orders of correction beyond the non-relativistic Hamiltonian to the groundstate energy value, including all QED and finite mass corrections. The method adopted in this work, if ascertained to be valid, can be a great numerical feat emanating from the use of perturbative methods to account for the most significant terms responsible for the groundstate energy of the helium atom without using any adjustable parameters.
We have also determined the excitation energies of the and autoionizing states from this method to be eV and eV respectively, against the known experimental values of eV and eV Rudd (1964). Although the present method seems to be almost exact for the groundstate eigenvalue of helium, the discrepancy between the theoretical and the experimental values of the and singlet autoionizing states shows that the corrections included may still not be sufficient for an accurate description of these states.
State Exp.t
-2.9103 -2.8996 -2.8968 -2.9040 -2.9036 -2.9037
-0.7276 -0.7263 -0.7259 -0.7268 -0.7267 -0.7787
-0.7276 -0.7276 -0.7276 -0.7276 -0.7275 -0.6169
Table 1: Some numerically calculated eigenvalues using the present model potential versus the reference experimental values for the helium groundstate Bergeson et al. (1998); Eikema et al. (1997) and autoionizing levels Rudd (1964). The first denotes the theoretical non-relativistic Hamiltonian and the second the effective Hamiltonian including the correction terms already defined in equations (21)-(26).
We extended the method to other two-electron systems with other nuclear charges. Table 2 shows the groundstate energies for the two-electron systems corresponding to the present non-relativistic model and the extra corrections outlined. The additional data is obtained when an additional confinement is introduced by the fitting function defined in equation (27).
-0.2739 -0.2737 -0.2736 -0.2738 -0.2737 -0.5285 -0.528
-2.9103 -2.8996 -2.8968 -2.9040 -2.9036 -2.9036 -2.9037
-8.7482 -8.6756 -8.6555 -8.6966 -8.6959 -7.3794 -7.28
-18.000 -17.746 -17.675 -17.803 -17.802 -13.913 -13.66
-30.776 -30.137 -29.963 -30.257 -30.255 -22.340 -22.03
-47.148 -45.825 -45.474 -46.041 -46.040 -32.413 -32.41
Table 2: Similar to table 1 but for groundstate eigenvalues of helium-like systems. All the columns, except the additional column, take a zero value for the fitting function defined in equation (27). The exact values have been extracted from ref. Bransden and Joachain (1990).
From table 2, one can observe that there is a systematic deviation of the present results from the exact experimental values of the groundstate energies of the ionic systems despite its success with the helium atom. However, if the present model is applied with the additional correction introduced by the fitting function defined in equation (27), quite a good agreement with the expected results is achieved. This seems to suggest that there is an additional potential present in the ionic species due to the net charge in the system, but absent in the neutral atom.
IV Conclusion
We have developed an alternative multipole expansion of the electron-electron correlation term which suggests that the two interacting electrons are mutually perpendicular to each other. This simplifies the interaction term, making the Schrödinger equation separable for each of the two-electron co-ordinates. We use this separability to obtain a non-relativistic threshold energy of the helium atom in its groundstate. We also show perturbatively that the experimental ground state energy value includes additional higher order corrections to the calculated non-relativistic energy.
The classical relativistic corrections and the spin-spin coupling offer the most dominant corrections to the non-relativistic limit. Furthermore, the present method predicts a systematic deviation of the calculated non-relativistic groundstate energies of the two-electron ions relative to the experimental values, despite its success with the helium atom. A slight modification to the derived electron correlation term is intuitively introduced to account for this discrepancy. If the present method is justified, the discrepancy in the ionic helium-like systems suggests that there are additional interactions, due to the charge surplus in the system, not accounted for by the known corrections to the two-electron problem.
Despite the success of the proposed method with the groundstate energy of the helium atom, the large deviations for the helium-like ions as well as the autoionizing levels warrant further investigation. One can also see the possibility of improving this method further as a solution to the many-body problem.
V Acknowledgement
We are grateful to NACOSTI and DAAD for funding this project, and to AG Moderne Optik of Humboldt Universität zu Berlin for providing the computational resources used in this work.
1. E. A. Hylleraas, Zeitschrift für Physik 54, 347 (1929).
2. G. W. F. Drake, Physica Scripta 1999, 83 (1999).
3. I. F. Barna and J. M. Rost, Eur. Phys. J. D 27, 287 (2003).
4. R. Hasbani, E. Cormier, and H. Bachau, Journal of Physics B: Atomic, Molecular and Optical Physics 33, 2101 (2000).
5. A. Scrinzi and B. Piraux, Phys. Rev. A 58, 1310 (1998).
6. C. L. Pekeris, Phys. Rev. 115, 1216 (1959).
7. J. S. Parker, L. R. Moore, E. S. Smyth, and K. T. Taylor, Journal of Physics B: Atomic, Molecular and Optical Physics 33, 1057 (2000).
8. J. S. Parker, E. S. Smyth, and K. T. Taylor, Journal of Physics B: Atomic, Molecular and Optical Physics 31, L571 (1998).
9. T. N. Chang and U. Fano, Phys. Rev. A 13, 263 (1976).
10. A. Szabo and N. S. Ostlund, Modern Quantum Chemistry (Dover Publications, New York, 1996).
11. W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
12. J. Linderberg, Physica Scripta 21, 373 (1980).
13. H. A. Bethe and E. E. Salpeter, Quantum Mechanics of one- and two-electron atoms (Springer-Verlag, Berlin, 1957).
14. G. J. Gbur, Mathematical Methods for Optical Physics and Engineering (Cambridge University Press, Cambridge, 2011).
15. M. Abramowitz and I. A. Stegun, Handbook of mathematical functions: with formulas, graphs, and mathematical tables (Dover Publications, Inc, New York, 1965).
16. J. Macek, Journal of Physics B: Atomic and Molecular Physics 1, 831 (1968).
17. B. H. Bransden and C. J. Joachain, Physics of atoms and molecules (Longman Scientific & Technical, Essex, 1990).
18. S. D. Bergeson, A. Balakrishnan, K. G. H. Baldwin, T. B. Lucatorto, J. P. Marangos, T. J. McIlrath, T. R. O’Brian, S. L. Rolston, C. J. Sansonetti, J. Wen, et al., Phys. Rev. Lett. 80, 3475 (1998).
19. K. S. E. Eikema, W. Ubachs, W. Vassen, and W. Hogervorst, Phys. Rev. A 55, 1866 (1997).
20. M. E. Rudd, Phys. Rev. Lett. 13, 503 (1964).
|
11b576898b86ec9f |
Schrödinger's Cat
Erwin Schrödinger's intention for his infamous cat-killing box was to discredit certain non-intuitive implications of quantum mechanics, of which his wave mechanics was the second formulation. Schrödinger's wave mechanics is more continuous mathematically, and apparently more deterministic, than Werner Heisenberg's matrix mechanics.
Schrödinger did not like Niels Bohr's idea of "quantum jumps" between Bohr's "stationary states" - the different "energy levels" in an atom. Bohr's "quantum postulate" said that the jumps between discrete states emitted (or absorbed) energy in the amount hν = E2 - E1.
Bohr did not accept Albert Einstein's 1905 hypothesis that the radiation was a spatially localized quantum of energy hν. Until well into the 1920's, Bohr (and Max Planck, the inventor of the quantum hypothesis himself) believed radiation was a continuous wave. This was the question of wave-particle duality, which Einstein saw as early as 1909.
It was Einstein who originated the suggestion that the superposition of Schrödinger's wave functions implied that two different physical states could exist at the same time. This was a serious interpretational error that plagues the foundation of quantum physics to this day.
This error is found frequently in discussions of so-called "entangled" states (see the Einstein-Podolsky-Rosen experiment).
Entanglement occurs only for atomic level phenomena and over limited distances that preserve the coherence of two-particle wave functions by isolating the systems (and their eigenfunctions) from interactions with the environment.
We never actually "see" or measure any system (whether a microscopic electron or a macroscopic cat) in two distinct states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. And these probability predictions are borne out by the statistics of large numbers of identical experiments.
The Pauli Exclusion Principle says (correctly) that two identical indistinguishable (fermion) particles cannot be in the same place at the same time. Entanglement is often interpreted (incorrectly) as saying that a single particle can be in two places at the same time. Dirac's Principle of Superposition does not say that a particle is in two states at the same time, only that there is a non-zero probability of finding it in either state should it be measured.
Max Born described the somewhat paradoxical result:
Einstein wrote to Schrödinger with the idea that the decay of a radioactive nucleus could be arranged to set off a large explosion. Since the moment of decay is unknown, Einstein argued that the superposition of decayed and undecayed nuclear states implies the superposition of an explosion and no explosion. It does not. In both the microscopic and macroscopic cases, quantum mechanics simply estimates the probability amplitudes for the two cases.
Many years later, Richard Feynman made Einstein's suggestion into a nuclear explosion! (What is it about some scientists?)
Einstein and Schrödinger did not like the fundamental randomness implied by quantum mechanics. They wanted to restore determinism to physics. Indeed Schrödinger's wave equation predicts a perfectly deterministic time evolution of the wave function. But what is evolving deterministically is only abstract probabilities. And these probabilities are confirmed only in the statistics of large numbers of identically prepared experiments. Randomness enters only when a measurement is made and the wave function "collapses" into one of the possible states of the system.
Schrödinger devised a variation in which the random radioactive decay would kill a cat. Observers could not know what happened until the box is opened.
The details of the tasteless experiment include:
• a Geiger counter which produces an avalanche of electrons when an alpha particle passes through it
• a bit of radioactive material with a half-life such that it is likely to emit an alpha particle in the direction of the Geiger counter during a time T
• an electrical circuit, energized by the avalanche of electrons, which drops a hammer
• a flask of a deadly hydrocyanic acid gas, smashed open by the hammer.
The gas will kill the cat, but the exact time of death is unpredictable and random because of the irreducible quantum indeterminacy in the time of decay (and the direction of the decay particle, which might miss the Geiger counter!).
This thought experiment is widely misunderstood. It was meant (by both Einstein and Schrödinger) to suggest that quantum mechanics describes the simultaneous (and obviously contradictory) existence of a live and dead cat. Here is the famous diagram with a cat both dead and alive.
What's wrong with this picture?
Quantum mechanics claims only that the time evolution of the Schrödinger wave functions for the probability amplitudes of nuclear decay accurately predict the proportion of nuclear decays that will occur in a given time interval.
(Classical) probabilities (no interference between terms) simply predict the number of live and dead cats that will be observed in a large number of identical experiments.
Quantum "probability amplitudes" do allow interference between the possible states of a quantum object, but not between macroscopic objects like live and dead cats
More specifically, quantum mechanics provides us with the accurate prediction that if this experiment is repeated many times (the SPCA would disapprove), half of the experiments will result in dead cats.
Note that this is a problem in epistemology. What knowledge is it that quantum physics provides?
If we open the box at the time T when there is a 50% probability of an alpha particle emission, the most a physicist can know is that there is a 50% chance that the radioactive decay will have occurred and the cat will be observed as dead or dying.
If the box were opened earlier, say at T/2, there is only about a 29% chance (1 − (1/2)^(1/2)) that the cat has died. Schrödinger's superposition of live and dead cats would look like this.
If the box were opened later, say at 2T, there is only a 25% chance that the cat is still alive. Quantum mechanics is giving us only statistical information - knowledge about probabilities.
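The arithmetic behind these percentages is plain exponential decay: with half-life T, the survival probability at time t is (1/2)^(t/T). A minimal Python check (nothing assumed beyond the half-life):

```python
# Survival probability of the nucleus under exponential decay with half-life T.
def p_alive(t_over_T):
    return 0.5 ** t_over_T

for t in (0.5, 1.0, 2.0):
    print(f"t = {t}T:  P(alive) = {p_alive(t):.3f}   P(dead) = {1 - p_alive(t):.3f}")
# t = 0.5T: alive 0.707 (dead ~ 29.3%); t = T: alive 0.5; t = 2T: alive 0.25.
```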
Schrödinger is simply wrong that the mixture of nuclear wave functions that accurately describes decay can be magnified to the macroscopic world to describe a similar mixture of live cat and dead cat wave functions and the simultaneous existence of live and dead cats.
The kind of coherent superposition of states needed to describe an atomic system as in a linear combination of states (see Paul Dirac's explanation of superposition using three polarizers) does not describe macroscopic systems.
Instead of a linear combination of pure quantum states, with quantum interference between the states, i.e.,
|Cat> = (1/√2) |Live> + (1/√2) |Dead>,
quantum mechanics tells us only that there is 50% chance of finding the cat in either the live or dead state, i.e.,
Cats = (1/2) Live + (1/2) Dead.
Just as in the quantum case, this probability prediction is confirmed by the statistics of repeated identical experiments, but no interference between these states is seen.
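The difference between adding amplitudes and adding probabilities can be made concrete in a few lines of Python (a minimal sketch; the state labels and the phase angle are illustrative only):

import numpy as np

# Quantum case: amplitudes 1/sqrt(2) for |Live> and |Dead>
amp = np.array([1.0, 1.0]) / np.sqrt(2)
probs_quantum = np.abs(amp) ** 2              # [0.5, 0.5]

# Coherent interference: project onto (|Live> + e^{i*phi}|Dead>)/sqrt(2)
phi = np.pi
proj = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
p_coherent = np.abs(proj.conj() @ amp) ** 2   # 0 for phi = pi, 1 for phi = 0

# Classical case: probabilities are added directly; there is no phase to vary
probs_classical = np.array([0.5, 0.5])

print(probs_quantum, p_coherent, probs_classical)

Varying the phase phi sweeps the coherent result between 0 and 1, which is interference; nothing analogous exists for the classical mixture, exactly as the text asserts for cats.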
What do exist simultaneously in the macroscopic world are genuine alternative possibilities for future events. There is the real possibility of a live or dead cat in any particular experiment. Which one is found is irreducibly random, unpredictable, and a matter of pure chance.
Genuine alternative possibilities are what bothered physicists like Einstein, Schrödinger, and Max Planck, who wanted a return to deterministic physics. It also bothers determinist and compatibilist philosophers who have what William James calls an "antipathy to chance." Ironically, it was Einstein himself, in 1916, who discovered the existence of irreducible chance, in the elementary interactions of matter and radiation.
Until the information comes into existence, the future is indeterministic. Once information is macroscopically encoded, the past is determined.
How does information physics resolve the paradox?
As soon as the alpha particle sets off the avalanche of electrons in the Geiger counter (an irreversible event with a significant entropy increase), new information is created in the world.
For example, a simple pen-chart recorder attached to the Geiger counter could record the time of decay, which a human observer could read at any later time. Notice that, as usual in information creation, the energy expended by a recorder increases the entropy more than the increased information decreases it, thus satisfying the second law of thermodynamics.
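The entropy bookkeeping here can be illustrated with Landauer's bound, a standard thermodynamic result (the figures below are an order-of-magnitude sketch, not a model of any particular recorder): recording or erasing one bit costs at least kT ln 2 of dissipated energy.

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_room = 300.0       # room temperature, K

landauer_J = k_B * T_room * math.log(2)  # minimum dissipation per recorded bit
print(f"Landauer limit at 300 K: {landauer_J:.2e} J per bit")  # ~2.87e-21 J

Any real pen-chart recorder dissipates many orders of magnitude more energy than this per bit, so the entropy increase easily dominates the information gained, in accordance with the second law.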
Even without a mechanical recorder, the cat's death sets in motion biological processes that constitute an equivalent, if gruesome, recording. When a dead cat is the result, a sophisticated autopsy can provide an approximate time of death, because the cat's body is acting as an event recorder. There never is a superposition (in the sense of the simultaneous existence) of live and dead cats.
The paradox points clearly to the Information Philosophy solution to the problem of measurement. Human observers are not required to make measurements. In this case, the cat is the observer.
In most physics measurements, the new information is captured by apparatus well before any physicist has a chance to read any dials or pointers that indicate what happened. Indeed, in today's high-energy particle interaction experiments, the data may be captured but not fully analyzed until many days or even months of computer processing establishes what was observed. In this case, the experimental apparatus is the observer.
And, in general, the universe is its own observer, able to record (and sometimes preserve) the information created.
The basic assumption made in Schrödinger's cat thought experiment is that the deterministic Schrödinger equation describing a microscopic superposition of decayed and non-decayed radioactive nuclei evolves deterministically into a macroscopic superposition of live and dead cats.
But since the essence of a "measurement" is an interaction with another system (quantum or classical) that creates information to be seen (later) by an observer, the interaction between the nucleus and the cat is more than enough to collapse the wave function. Calculating the probabilities for that collapse allows us to estimate the probabilities of live and dead cats. These are probabilities, not probability amplitudes. They do not interfere with one another.
After the interaction, they are not in a superposition of states. We always have either a live cat or a dead cat, just as we always observe a complete photon after a polarization measurement and not a superposition of photon states, as P. A. M. Dirac explains so simply and clearly.
Quantum mechanics similarly gives us only the probability of finding live cats (or dead cats) in a large number of identically prepared experiments (pace the SPCA).
Superposition and Indeterminacy
There is no justification for assuming an intermediate (and absurd) condition of simultaneous live and dead cats. The thing that is "intermediate" is the probability, not the outcome.
Decoherence and the Lack of Macroscopic Superpositions
Despite the claims of decoherence theorists, microscopic superpositions of quantum states do not allow us to "see" a system in two different states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. Thus it is no surprise that we do not see macroscopic "superpositions of live and dead cats" at the same time. What do exist at any given time are the probabilities of the two states (in the macroscopic world) and the probability amplitudes of the two states (which can coherently interfere with one another) in the microscopic world.
Decoherence theorists claim that they explain the "mysterious" non-appearance of macroscopic superpositions of states. But quantum mechanics does not predict such states, despite the popular idea of macroscopic superposition of live and dead cats.
|
77145228b14f48bf | Workshop on Mathematical Modeling and Computation of Nonlinear Problems
2018-1-12 18:07:18
Synopsis and Organizers
Nonlinear problems are an important and interesting topic in many research fields. For example, nonlinear optics models can more accurately describe light propagation at very high intensities, such as in lasers; the nonlinear Schrödinger equation is widely studied in condensed matter physics, the semiconductor industry, and nanotechnology; in kinetic theory, the Boltzmann equation and the moment closure systems are both nonlinear models. The main purpose of this workshop is to bring together people working on nonlinear modeling and computation to exchange ideas, communicate the latest research results, and develop further collaborations.
Topics include but are not limited to the following areas.
• Nonlinear optics and the nonlinear Schrödinger equation
• Kinetic theory and fluid dynamics
• Numerical methods for nonlinear equations
Weizhu Bao, National University of Singapore
Shi Jin, University of Wisconsin
Yongyong Cai, Beijing Computational Science Research Center
Zhongyi Huang, Tsinghua University
Hao Wu, Tsinghua University |
8ee1febb0712d95d | Quantum Mechanics: Fact and Theory, Physics and Philosophy
From this thou mayest conjecture of what sort
The ceaseless tossing of primordial seeds
Amid the mightier void- at least so far
As small affair can for a vaster serve,
And by example put thee on the spoor
Of knowledge. For this reason too ’tis fit
Thou turn thy mind the more unto these bodies
Which here are witnessed tumbling in the light:
Namely, because such tumblings are a sign
That motions also of the primal stuff
Secret and viewless lurk beneath, behind.
For thou wilt mark here many a speck, impelled
By viewless blows, to change its little course,
And beaten backwards to return again,
Hither and thither in all directions round.
Lo, all their shifting movement is of old,
From the primeval atoms; for the same
Primordial seeds of things first move of self,
And then those bodies built of unions small
And nearest, as it were, unto the powers
Of the primeval atoms, are stirred up
By impulse of those atoms’ unseen blows,
And these thereafter goad the next in size:
Thus motion ascends from the primevals on,
And stage by stage emerges to our sense,
Until those objects also move which we
Can mark in sunbeams, though it not appears
What blows do urge them.
—Lucretius, from De rerum natura, circa 50 BC
In the century since its discovery, quantum mechanics* has been employed as evidence for a most extraordinary array of conclusions. Aside from inspiring at least a dozen competing interpretations amongst contemporary physicists, quantum mechanics has also permeated the contemporary Zeitgeist and incited innumerable revolutions in popular culture. Although nearly a century has elapsed since its formulation, the implications of quantum mechanics remain uncertain. Must we accept such uncertainty as an essential facet of the post-modern era, or might we win through to a comprehension that is not subject to Jeff Bridges’s notorious enunciation of the post-truth condition, “that’s just your opinion, man”? Any eventual understanding will depend on sound logic, and an hallmark of the latter is that the conclusion ought not to precede the reasoning. Let us, therefore, explore the question and accept whatever conclusions may result.
Quantum mechanics was developed to describe the behaviour of minute particles, which were discovered to obey laws that were at odds with any explanation that classical physics could offer. Specifically, Max Planck hypothesised in 1900 that under certain ideal conditions,† a body did not emit thermal energy in a continuous stream, but rather in discrete packets. The latter were later to be called “quanta.” Albert Einstein soon related Planck’s notion of quanta to light, thereby to explain the photoelectric effect. The light quantum was later named the “photon.” Quantum mechanics emerged in its recognisable form in 1925 when two separate mathematical systems were formulated to calculate the behaviour of photons (and electrons, after they were later discovered to obey similar laws). Werner Heisenberg, Max Born, and Pascual Jordan developed the equations for matrix mechanics, which described the quantum leaps of electrons between the energy levels of atoms. At nearly the same time, Erwin Schrödinger formulated the equations for wave mechanics to describe the wave-like behaviour of particles in a given system, which equations he published in 1926. The Schrödinger wave equations are, without a doubt, amongst the most extraordinary equations in the history of physics. They allow for the calculation, from specified initial conditions, of the wave-function for subatomic particles in that system. The wave-function represents the probability-distribution of all possible states that a particle may be found in at any moment.
Given the accuracy of the predictions that the Schrödinger wave equations provide, as well as those of matrix mechanics, it was natural for physicists to conclude that they had hit upon the truth of the matter. “Truth,” in this case, has a very definite meaning: it refers exclusively to the ability to construct a mathematical model to predict the outcomes of a given experiment. It does not include the conceptual understanding that would explain those outcomes. Truth refers to accuracy, not to understanding. But understanding being a natural impulse of the human spirit, physicists immediately set about an attempt to elucidate their findings with the conceptual element which observational data alone does not provide.
By 1927, “the Copenhagen interpretation” had begun to take form. This was an heroic attempt, piloted by Bohr and his lieutenant Heisenberg, to comprehend the apparent mysteries of quantum phenomena. In its essence, the Copenhagen interpretation acknowledged the probabilistic nature of the equations which describe physical interactions at the quantum scale. It took this as evidence that reality itself at this scale, which is the elementary basis of all other scales, was indeterminate until an actual observation. Heisenberg’s “uncertainty principle,” Bohr’s principle of “complementarity,” and “the observer effect” are key notions that the Copenhagen interpretation postulated in an attempt to conceptualise such unexpected results as superposition, the wave-function collapse, and the interference pattern in the double slit experiment. Albert Einstein famously rejected the anti-realist, anti-determinist assertion of the Copenhagen interpretation with his affirmation that “God does not play with dice,” suspecting instead that the experiments contained hidden variables that might later come to light. Bohr notoriously responded, “Einstein, stop telling God what to do.” Bohr offers an alternative perspective to Einstein when he writes that “physics is not about how the world is, it is about what we can say about the world,” thereby seeming to imply the actual existence of a real world while only questioning our knowledge of it. This makes for an interesting pluralism of viewpoints: Bohr contradicts Einstein’s deterministic realism and, at the same time, his own anti-realism. Simply put, the Copenhagen interpretation both affirms and denies the actuality of the physical world. Unless we are willing to deny the reality and intelligibility of the world, to reject the fundamental principle of logic is not a viable solution to the quantum paradoxes.
Given the stakes of this question, it is not altogether surprising that some physicists have simply refused to consider it. The physicist David Mermin captured this sentiment in the most expressive manner with the phrase “shut up and calculate!” Given the predictive success of the equations, this way of thinking would see no reason to inquire into their meaning. Still, such an exclusively instrumentalist notion of science will strike many as distinctly unsatisfying. Aristotle captured the natural curiosity of the human being in the opening sentence of the Metaphysics when he wrote that “All men, by nature, desire to understand.” Without a doubt, to reduce physics to equations and data-collection will never satisfy this desire. Thus, in respect to quantum mechanics, we must conclude that an understanding still awaits us. A survey amongst physicists attending a conference on the foundations of quantum mechanics in 2013 highlighted this lack of understanding. Roughly 40 percent favoured the Copenhagen interpretation and the remainder were distributed amongst a plethora of alternative interpretations. That the Copenhagen interpretation does not seem consistent with itself again emphasises that we have yet to understand the implications of quantum mechanics.
It is important to note that the lack of understanding that the survey above revealed is not a question of experimental results, but of an inability to conceptualise them. In other words, what we lack is not “knowledge that,” but “knowledge why.” Indeed, Schrödinger’s wave equations leave little to be desired in their ability to predict the behaviour of particles. What physicists have failed to agree upon, however, is a coherent account for why the particles behave in the way that they indeed do. An epistemological distinction that Aristotle presents in Posterior Analytics may help to illuminate the situation. The Stagirite famously contrasted scientific knowledge with sophistic knowledge when he wrote that the former can explain the causes by which a thing is while the latter can only say that a thing is. Evidently, quantum mechanics has not met the criterion of scientific knowledge that Aristotle set forth. Although some may question the relevance of an Ancient Greek philosopher to questions of contemporary particle physics, no one should question Aristotle’s capacity to think in clear concepts. Once the data has been collected and the equations formulated, thinking in clear concepts is just the capacity that is necessary if we are ever to understand the nature of quantum mechanics. Indeed, what we lack today is just what mere calculation and experiment can not provide. In the most general sense, the question around quantum mechanics hinges on a conception of what happens between measurements. Such a conception would provide the necessary account of actual observations, without which the latter can only appear enigmatic or arbitrary. Obviously in this case it begs the question to say that the particles behave in the manner that they do because of the equations, since the issue at hand is precisely why they behave in the way that the mathematical model indeed predicts they will. A model is derived from an actual phenomenon and thus it makes no sense to explain the phenomenon with the model that was derived from it.
Given the situation as we have characterised it above, we may wonder whether physics, by the very nature of the discipline today, can provide the account of quantum phenomena that we seek. It may be that modern physics only concerns “knowledge that,” and not “knowledge why.” The data do not provide their own explanation any more than a text reads itself. Historically, physics benefitted from a license to draw on other disciplines of human inquiry to form a coherent world-conception. René Descartes famously compared Philosophy to a tree: “The roots are metaphysics, the trunk is physics, and the branches emerging from the trunk are all the other sciences.” By “metaphysics,” Descartes means first principles which cannot themselves be empirically verified but which provide a basis for empirical verification. Being, space, time, identity, non-contradiction, causality and God are examples of such first principles. It should be clear that, though not every one of these principles is necessary for science, it is just as true that science would be inconceivable without some of them. With the scientific revolution in the seventeenth century, physics gradually began to distance itself from its metaphysical roots. Nevertheless, a tacit metaphysical inheritance continued to ground new advances in science. Thus, physicists were able to draw their premises, more or less consciously, from religious and philosophical traditions of the past to ground their conclusions. Isaac Newton, for instance, invoked the monotheistic notion of a supreme being to lend lawfulness and sensibility to the cosmos. Thus in the General Scholium, he famously declares:
This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent Being…This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called “Lord God” παντοκρατωρ [pantokratōr], or “Universal Ruler”…The Supreme God is a Being eternal, infinite, [and] absolutely perfect.
Gradually, however, the success of the sciences encouraged physicists to assert their discipline’s independence of its metaphysical roots. The centuries following the scientific revolution thus saw increasing attempts to account for experimental results without appealing to anything outside of physics. By 1796, the physicist Pierre-Simon Laplace could reply to Napoleon Bonaparte, who had asked what role the Creator played in the system described in Laplace’s Exposition du système du monde, with the assertive remark: “Je n’avais pas besoin de cette hypothèse-là.” (“I had no need of that hypothesis.”)
The late physicist Stephen Hawking offers an insight into the contemporary perspective in his 2010 book The Grand Design, in which he definitively rejected the notion that any appeal to principles outside of physics was necessary to explain the origin of the physical world. “Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist,” he wrote, and concluded, “It is not necessary to invoke God to light the blue touch paper and set the universe going.” Hawking clarified his statement in a 2014 interview (https://www.elmundo.es/ciencia/2014/09/21/541dbc12ca474104078b4577.html): “Before we understand science, it is natural to believe that God created the universe. But now science offers a more convincing explanation.” The explanation to which Hawking refers is of course “spontaneous creation,” which strikes one as a feasible explanation until one recognises that it is no explanation at all. Everything must initially strike us as spontaneous until we have understood it. In other words, spontaneous creation cannot itself be a reason, or an explanation, because that is precisely the thing we seek a reason, or an explanation, for. This is a simple fact about the nature of scientific knowledge, and a return to the subject of quantum mechanics will render it immediately apparent. The quest for a coherent interpretation of quantum mechanics is just the quest to explain why the phenomena that the Schrödinger equations and matrix mechanics describe behave in the manner that they in fact do.
As we indicated above, Bohr and Heisenberg attempted to do exactly this with the Copenhagen interpretation. In order to explain such surprising phenomena as the wave-function collapse and the interference pattern in the double slit experiment, the Copenhagen interpretation posits that particles as such have no definite location until an actual observation. Instead, they subsist in superposition, which is described as being “smeared out” across space along the parameters of the wave-function, until the act of observation collapses the wave-function and the particle assumes a definite position. The apparent influence of observation on physical reality has been called the “observer effect.” By now in our exploration, we have learned we cannot expect to achieve an understanding of quantum mechanics without thinking about it. Let us therefore think about whether “the observer effect” actually means what it suggests. What is the observer affecting? The standard answer is that the observer collapses the wave-function, or “forces” the particle to assume a definite location. To reiterate from above, the wave-function is a mathematical calculation that describes, from given initial conditions, the behaviour of particles at any time during an experiment. As the wave-function indicates, subatomic particles like photons and electrons follow a wave-like pattern. If a particle is observed, however, the continuous wave-function instantly collapses and gives way to a discrete particle with definite location. Let us here distinguish between the wave-function as a description of potential outcomes of an eventual measurement, and an actual observation that yields an actual measurement. The source of confusion can now begin to reveal itself, since potentiality and actuality are not substantially commensurate. The wave-function is not a physical wave, it is a wave of possibility. “The observer effect,” in its application to interpretations of quantum mechanics, is an expression of the mistake of regarding a probability wave (a wave in abstract, non-physical Hilbert space) in the same manner as the observed particle in actual physical space.
From this realisation, it should follow as self-evident why the “mere” act of observing a particle collapses the wave-function. Again, the wave-function is a statistical description of possible outcomes. Thus it is a description of the experimenter’s knowledge of the entire system. Obviously, an actual observation affects this knowledge by fixing the possible outcomes to a specific one. From initial conditions, a weather forecast might predict a 40% chance of rain in a given location, but that is not the same as it actually raining there. To actually observe rain instantaneously resolves the probability of precipitation from .4 to 1, but no one recognises “the observer effect” in this phenomenon, nor supposes that before the observation, the weather was in a state of superprecipitation.
No longer does the Copenhagen interpretation’s assertion of ontological nebulosity appear an altogether satisfactory conclusion to draw. Let us inquire further, however, for there may be other reasons for it. Another riddle of quantum mechanics that lent credence to the notion of ontological indefiniteness was the discovery that the position of an electron could not be measured at the same time as its momentum. Heisenberg postulated his “uncertainty principle,” and Bohr the principle of “complementarity,” in response to this fact. Because the uncertainty principle is not a function of the measuring device, but of the equations themselves, the future appeared to offer no promise of its resolution through technological advances in the equipment. This appeared to some to support the conclusion that “reality” is fundamentally uncertain. Bohr, for instance, when Erwin Schrödinger attempted to challenge this ontological assertion of uncertainty with his famous cat-paradox, maintained that the cat was actually in a superposition of life-death until the box was opened. As we discovered in our investigation of the observer effect, however, no observational data supports such a conclusion as Bohr felt compelled to draw. Indeed one cannot help but conclude that Bohr’s reasoning was driven by motives ulterior to the scientific nature of the situation. Simply stated, momentum implies a changing, which is to say a non-definite, position. The former implies the latter by the very meaning of those respective concepts. This conceptual complementarity is expressed mathematically in the fact that momentum is proportional to the time-derivative of position. In other words, momentum is calculated from the rate at which the position changes: if the position is fixed, then its time-derivative cannot be calculated and neither can its momentum. Accurate knowledge of an electron’s position precludes accurate knowledge of its momentum, and the converse. In other words, certainty about one aspect condemns one to uncertainty about the other. To extrapolate this epistemological fact into a metaphysical principle strikes one as distinctly unwarranted by the current evidence.
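The reciprocal relation between the two spreads is a general property of Fourier-conjugate quantities, and can be verified numerically. The following numpy sketch (an illustration with ħ set to 1; the grid sizes are arbitrary) shows that a Gaussian wave packet keeps the product of its position and momentum spreads at 1/2, however narrow or wide it is made:

import numpy as np

x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]

def spreads(sigma):
    # Gaussian packet of width sigma, normalised on the grid
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    dx_spread = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    # momentum-space wave function via FFT (hbar = 1, so p = k)
    k = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx)) * 2 * np.pi
    dk = k[1] - k[0]
    phi = np.fft.fftshift(np.fft.fft(psi))
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)
    dp_spread = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * dk)
    return dx_spread, dp_spread

for sigma in (0.5, 1.0, 2.0):
    dxs, dps = spreads(sigma)
    print(f"sigma={sigma}: dx={dxs:.3f}, dp={dps:.3f}, dx*dp={dxs*dps:.3f}")
# dx*dp stays at ~0.500 (= hbar/2): sharpening one spread widens the other.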
What does seem safe to extrapolate from the current evidence is that the disagreement over interpretations of the quantum experiments will never be resolved by the methods of modern physics. For anyone who considers the nature of the disagreement and the history of modern physics, the fact that physicists have reached no consensus in an hundred years will not come as a surprise. Indeed, the only semblance of progress comes about when physicists smuggle in metaphysical principles that are sufficiently naturalised in the contemporary Zeitgeist so as to be unnoticed. Science deserves more than back-door philosophy, however, which is all that it will receive as long as physicists insist on sustaining the illusion that their discipline can provide its own foundations. The Copenhagen interpretation distinctly undermines its own foundations with the assertion that a particle subsists in a superposition of mutually exclusive states. Such an assertion flouts the principle of non-contradiction upon which the very equations that were its source depend. Nobody would use wool to deny sheep, and nobody should make inferences that contradict the methods from which those inferences were drawn. Specifically, an interpretation that rejects the principles of the mathematics that underlie the experiments which the interpretation is meant to explain is an interpretation that has sawed off the branch on which it had meant to perch.
Mercifully, Heisenberg himself offered a path towards resolution of the quantum muddle, revealing that he could draw from the roots that support his discipline when he was not being bullied into adopting Bohr’s idiosyncratic ontological notions. In his 1958 work Physics and Philosophy, Heisenberg characterises the nature of the situation in a clear manner:
Thus we encounter the name of Aristotle for the third time in this exploration. Heisenberg did not pursue this notion further, and perhaps it was not his role. His contributions to human knowledge are sufficient to warrant our admiration regardless of whether he fully elaborated their philosophical implications. It may be that Heisenberg left the clue to resolve this riddle, however, with his mention of Aristotle in the quote above. Aside from the characterisations of knowledge and innate wonder that we made use of earlier in this exploration, could it be that Aristotle also offers us a conceptual framework for understanding the natural world? This seems to be just what Heisenberg suggested. In the specific context of this inquiry, we can make use of Aristotle’s reciprocal notions of dynamis (δύναμις) & energeia (ἐνέργεια). Latin translators during the Middle Ages rendered these terms as “potentia” and “actualitas,” respectively. Indeed this is the phrase that was familiar to Heisenberg. Each of these four words has at least one cognate in English: “dynamite,” “energy,” “potential,” “actual.” Dynamis denotes a specific power while energeia refers to the active working of that power, its “being-at-work.”** The concepts of potential and kinetic energy exemplify, but do not exhaust, the meaning of dynamis and energeia. Potential-kinetic energy is a species of which dynamis-energeia is the genus.
It is consummately revealing, in light of our exploration till now, to recognise that modern physics has entirely collapsed Aristotle’s distinction between dynamis and energeia, or potential and actual.†† Thus “energy” is defined irrespective of its actuality. Specifically, in modern physics, “energy” is understood as a conserved quantity of the capacity to do work, expressed in a relation of mass × distance²/time² (MD²/T²) or the like. E=MC² is an example of this relation since the universal constant is meant to represent the speed of causality and, as a velocity, is thus measured in units of distance divided by time. Obviously a system of thought that fails to distinguish between potential and actual cases is not a system of thought that can offer any definite insight into the concrete nature of its object, only probable knowledge. The fruition of actual knowledge from probabilistic knowledge could only be the result of an actual observation of an actual case. Anyone who supposed that probability could serve the same function as actuality has simply misunderstood the nature of both concepts.
Nevertheless, misunderstandings of just this nature haunt the domain of science today, as we have attempted to illustrate in respect to the field of quantum mechanics with the present investigation. The physicist Richard Feynman summed up the general situation today in an elegant manner when he stated, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” One cannot help but conclude that attempts to interpret the findings of quantum mechanics suffer from a distinct lack of philosophical foundations. The ability to conceptually distinguish between potential and actual appears to offer promise for the future of this field, since to say “a particle is potentially in state α and potentially in state ~α at the same time” does not flout the principle of non-contradiction, a pillar of logic that sustains the very edifice of quantum mathematics. Such an understanding would provide at once a more nuanced and a more rigorous conception of reality than our present notions seem to provide. It should be obvious that the matrix and wave equations describe the potential behaviour of particles in an experiment, which is related to, but not identical with, a description of those particles in their actuality. It is my hope that this exploration may contribute in some small way to our understanding of physis.
Sandro Botticelli, The Birth of Venus (c. 1484–86)
Thanks to all of the fathers like Aristotle, Einstein, Bohr, Heisenberg, Schrödinger, etc…and also thanks to many others
* I have chosen to treat the phrase “quantum mechanics” as though it were a singular noun, which it is not, since this seems to have become a convention.
† A black body is an hypothetical ideal physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence and radiates it at the highest possible efficiency and also isotropically.
**Aristotelian Scholar Joe Sachs coined this neologism to capture, in English, the native Greek speaker’s experience of that term.
†† Similarly, physics is a kinematic, not a dynamic, study. In other words, it studies only mathematical motion abstracted from the force of that motion. Along similar lines, David Bohm notes the following, though not without displaying some of the same logical inconsistencies as in other theories:
Now, there’s one other thing that modern quantum mechanics doesn’t handle. Oddly enough, physics at present has no contact with the notion of actuality. You see, classical physics has at least some notion of actuality in saying that actuality consists of a whole collection of particles that are moving and interacting in a certain way. Now, in quantum physics, there is no concept of actuality whatsoever, because quantum physics maintains that its equations don’t describe anything actual, they merely describe the probability of what an observer could see if he had an instrument of a certain kind, and this instrument is therefore supposed to be necessary for the actuality of the phenomenon. But the instrument, in turn, is supposed to be made of similar particles, obeying the same laws, which would, in turn, require another instrument to give them actuality. That would go on an infinite regress. Wigner has proposed to end the regress by saying it is the consciousness of the actual observer that gives actuality to everything.
2 Comments
1. Thank you for your clarity, Leyf. I am in full agreement with your statement:
“Science deserves more than back-door philosophy, which is all that it will receive as long as physicists insist on sustaining the illusion that their discipline can provide its own foundations.”
Heisenberg’s reference to “potentia” goes a long way toward sorting out the complete and utter philosophical mess that is contemporary quantum theory. This is the path Whitehead elaborated upon in his philosophy of organism.
Other philosophically inclined physicists who would appear to be on the right track here include Lee Smolin. He and Roberto Unger recently published “A Singular Universe and the Reality of Time.” It is not a very readable book, but buried in it are some important arguments attempting to replace mathematics with natural philosophy at the foundation of physics. By putting mathematics at the foundation, 20th century physicists have lost touch with the point of science, which is to provide a coherent account of the natural world of our experience. Mathematics is about pure possibilities and the abstract relations among possibilities. Sometimes it has application for modeling the patterned activities of Nature. Math can thus serve a powerful role in the pursuit of knowledge of Nature, but when it becomes the master, our requests for understanding are met with “shut up and calculate!”
|
3432adfd528a4342 | Digit Geek
Why is antimatter so rare?
Antimatter is incredibly expensive to create, very hard to store, and cannot come in contact with regular matter without disappearing in a flash of gamma rays and neutrinos.
From the everyday to the extraordinary
Isaac Newton and Gottfried Wilhelm Leibniz gave rise to the field of classical mechanics, which described how bodies behaved when various kinds of forces were applied to them. While classical mechanics was good enough to understand the phenomena that most humans experienced on a daily basis for over two hundred years, it fell short when investigating the motions of planets, stars and galaxies, or the subatomic realm. Light has been particularly problematic for science to understand. One of the issues is that waves of light travel through the vacuum of space, but waves technically should be unable to propagate without a medium. In the seventeenth and eighteenth centuries, scientists such as Robert Boyle, Christiaan Huygens and Isaac Newton all explained the phenomenon by suggesting that there was a pervasive medium throughout space that allowed light to travel. This medium, invoked just to allow light to travel in vacuum, was dubbed the “luminiferous ether”.
Einstein’s Theory of Relativity, proposed early in the 20th century, is made up of two related components. These are special relativity, which was proposed earlier, and general relativity, which also takes gravity into account. The theory of special relativity showed that time and space cannot be considered as separate entities, and are instead part of the spacetime continuum. The theory also removed the need for a luminiferous ether for the propagation of light.
While the understanding of how objects interact with each other when different forces are applied to them was good enough for, say, predicting the motion of planets, there were several problems when it came to studying motion at a subatomic level. The investigations of how particles and energies behave in the subatomic realm by researchers such as Max Planck, Niels Bohr, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, Satyendra Nath Bose, and Albert Einstein gave rise to the field of quantum mechanics. “Quantum” here just refers to very small scales. The Heisenberg Uncertainty Principle showed that you could precisely measure either the velocity or the position of a particle, say a photon, but never both. The Schrödinger equation allowed for the study of quantum mechanics using probabilities, as all the properties of particles could not be measured with accuracy.
The theory of relativity explained phenomena on a macro level, and quantum mechanics did the same on a micro scale. Both had weird and wonderful implications that were proven through experiments through the 20th century. Einstein’s special theory of relativity indicated that time is not the same for everyone, that the speed of light is the same no matter how fast the observers are moving, and that there was a limit to how fast anything could possibly travel. Quantum mechanics implied that particles could jump through barriers they should not be able to pass, that properties of particles gain a value at the time of measurement from a range of probabilities, and the existence of several simultaneous, not parallel, universes. However, special relativity had not been described in the context of quantum mechanics till Paul Dirac came along, leading to one of the weirdest and most wonderful implications of all – antimatter.
Dirac’s Equation
Within an atom, the proton has a positive charge while the electron has a negative charge. A shortcoming of quantum mechanics at that time was that it could not explain the motions of subatomic particles at high speeds. Dirac wanted to explain the motion of electrons travelling at speeds close to the speed of light. The equation he came up with to do so was sufficient for his purpose, but had far reaching consequences. Dirac’s equation was consistent with both the theory of relativity and quantum mechanics, and is in some ways similar to the Schrödinger equation. Dirac’s equation had an inherent “problem”. Just as quadratic equations have two solutions, or roots, Dirac’s equation also had two solutions. The equation implied that the electron had an antiparticle, with a positive charge instead of a negative charge, now known as the positron.
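The two roots come straight from the relativistic energy-momentum relation E² = (pc)² + (mc²)², which the Dirac equation is built to respect. A quick numerical sketch for an electron (the test momentum is an arbitrary illustrative value):

import math

c = 2.99792458e8      # speed of light, m/s
m_e = 9.1093837e-31   # electron mass, kg
p = 1e-22             # test momentum, kg*m/s (illustrative)

E = math.sqrt((p * c)**2 + (m_e * c**2)**2)
print(f"E = +{E:.3e} J or -{E:.3e} J")
# The negative root, instead of being discarded as unphysical,
# was reinterpreted by Dirac: it corresponds to the positron.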
Carl David Anderson, while studying cosmic rays in a magnetic environment, found traces of a particle that was moving in the opposite direction of the electron, but was too light to be a proton. The new particle was called the positron, and Anderson won the 1936 Nobel Prize in Physics for the discovery. After the discovery, the results of previous studies showed that the positron had been experimentally observed before, but such observations were not followed up, or the particle was assumed to be a proton.
The Bevatron. Image: Berkeley Lab.
Particle accelerators
The idea that every particle had a counterpart with an opposite charge was known as charge symmetry. The Bevatron was a device designed specifically to test the validity of charge symmetry. It was a particle accelerator that would use electromagnets to propel protons to speeds close to that of light, and then make them collide against stationary protons or neutrons. In 1954, the Bevatron became operational at the Berkeley Lab, operated by the University of California.
Just about a year later, the first antiprotons were observed at the Bevatron. A year after that, the first antineutrons were observed. There now was sufficient evidence for the existence of antiparticles for all three components of the atom. Observations of the antiproton, antineutron and the positron proved that the notion of charge symmetry was indeed true. If atoms are the building blocks of matter, then antiatoms are the building blocks of antimatter. Now that all the particles that make up an atom were known to have antiparticles, the question was whether there could be an antiatom.
Two particle accelerators, the proton synchrotron at CERN and the alternating gradient synchrotron at the Brookhaven National Laboratory, both detected the first antinuclei in 1965. Deuterium is a stable isotope of hydrogen with one neutron and one proton; the antinucleus detected was antideuterium. In 1978, researchers at CERN produced antiprotons from the proton synchrotron, and kept them going round in circles in a machine dubbed the Initial Cooling Experiment, or ICE, for a period of 85 hours. It was the first time that anyone, anywhere in the world, had stored antiprotons. In 1995, thirty years after the first antinuclei were observed, scientists managed to create the first antiatom in a lab environment. To create an antiatom, it was necessary to have low energy antiparticles, which required specialised equipment. The antiparticles created so far were “hot”, with energy levels too high to allow for the creation of antiatoms.
Hydrogen is the simplest atom in the periodic table, and is made up of a proton and an electron. It is the most abundant element in the universe, and comprises nearly 75 percent of the baryonic mass in the universe. Baryonic refers to ordinary matter, in other words, the stuff that is not antimatter. The first antiatom created in a lab environment was antihydrogen. This was done through an instrument at CERN known as the Low Energy Antiproton Ring. Currently, CERN uses the Antiproton Decelerator to produce antimatter that is of a low enough energy level to allow scientists to study it. In 2003, small quantities of antihelium were produced at Brookhaven National Laboratory. This remains the most complex antiatom created by humans.
An antihydrogen annihilation event in the ATHENA experiment. The pink silicon microchips indicate four pairs of quarks and antiquarks, the yellow cubes show captured energy and the red line denotes the gamma rays produced by the annihilation. Credit: CERN
Antimatter traps
In 2011, CERN managed to store antimatter for a period of sixteen minutes, long enough to start experimenting on the substance, and unlocking its mysteries.
By now, you should have a clear idea of why antimatter is so rare. It takes an incredible amount of energy to create antimatter, and to store it. Even after the laborious process, only a few antiatoms are created. A feasibility study by NASA to explore the use of antimatter for fuelling spacecraft pegged the price of creating antimatter at $62.5 trillion a gram, a figure that estimates by CERN agree with. It takes 25 trillion kWh of electricity to make a single gram of antimatter. CERN has produced less than 10 nanograms of antimatter so far, and has the capability to create about one billionth of a gram every year. At that rate, CERN would take 1 billion years to produce one gram of antimatter. Antimatter cannot be stored in a normal container, as matter and antimatter mutually annihilate each other on contact, usually producing gamma rays and neutrinos in the process. Researchers do not even know at first whether they have created antimatter, and can only confirm the process of creation after detecting the annihilation event.
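These figures are mutually consistent, as a few lines of arithmetic confirm (a sketch using only the numbers quoted in this article):

grams_per_year = 1e-9     # CERN's stated production capability, g/yr
cost_per_gram = 62.5e12   # USD per gram, NASA feasibility study
kwh_per_gram = 25e12      # electricity per gram, kWh

print(f"Years to make 1 g: {1.0 / grams_per_year:.0e}")       # 1e+09 years
print(f"Cost per microgram: ${cost_per_gram * 1e-6:,.0f}")    # $62,500,000
print(f"Energy per nanogram: {kwh_per_gram * 1e-9:,.0f} kWh") # 25,000 kWh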
Antiparticles are stored by suspension in vacuum through electric fields, or, if the antimatter has no charge, through superconducting magnets. A Penning trap at CERN stored a single antiproton for 57 days, which is the world record for antiparticle storage. Researchers are developing magnetic bottles and optical traps using lasers to store antimatter.
An antimatter trap. Image: CERN.
Antimatter in nature
Conditions similar to those of particle accelerators are created in nature as well, including in pulsars and the magnetic fields of celestial bodies. A supermassive black hole accelerates particles in a jet stream that interacts with the gas clouds of two colliding galaxies, Abell 3411 and Abell 3412, to create a natural particle accelerator. Naturally occurring antiparticles have been detected in the Van Allen Belts around Earth. If there are entire galaxies made up of antimatter, they would be indistinguishable from regular galaxies through our telescopes. If such galaxies exist, they would be from the early days of the big bang, near our horizon of visibility.
The radiation from unstable atomic nuclei also produces small quantities of antimatter in some cases. Human beings produce antimatter by eating, drinking and breathing. This is because of the potassium-40 in the body; a person weighing 80 kg produces about 180 positrons every hour.
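That positron figure can be roughly reproduced from standard potassium-40 data. The constants below are textbook values rather than numbers from this article, so treat the result as an order-of-magnitude sketch:

import math

potassium_g = 140.0             # typical total potassium in an ~80 kg body, g
k40_abundance = 1.17e-4         # natural isotopic fraction of K-40
half_life_s = 1.25e9 * 3.156e7  # K-40 half-life (1.25 billion years) in seconds
beta_plus_branch = 1e-5         # ~0.001% of K-40 decays emit a positron

atoms_k40 = potassium_g * k40_abundance / 40.0 * 6.022e23
activity_bq = atoms_k40 * math.log(2) / half_life_s   # decays per second
print(f"{activity_bq * beta_plus_branch * 3600:.0f} positrons/hour")  # ~160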
The big bang actually created equal amounts of matter and antimatter. Soon after though, most of the antimatter disappeared, leaving behind the small surplus of matter that we can observe. Theoretically, there should be the same amount of matter and antimatter in the universe. Why this is not the case remains a mystery.
Aditya Madanapalle
An avid reader of the magazine, who ended up working at Digit after studying journalism, game design and ancient runes. When not egging on arguments in the Digit forum, can be found playing with LEGO sets meant for 9 to 14-year-olds. |
4d5b0ca98c778f37 | Friday, September 26, 2008
Dark black holes, dark flow, and how to avoid heat death?
Lubos made interesting comments about the calculation of black hole entropy in his blog. I have absolutely nothing to say about this branch of science as far as technicalities are concerned. The formulas for black hole entropy however inspire new visions about black holes if one accepts the hierarchy of Planck constants and the notion of relative darkness, in the sense that particles at different pages of the book-like structure, whose pages are labelled by the values of Planck constant, are dark relative to each other. I glue below a slightly edited comment from Kea's blog.
1. Black hole entropy and dark black holes
Lubos made explicit in his posting the 1/hbar proportionality of the formulas for black hole entropy. This proportionality reflects the basic thermodynamical implication of quantization: the phase space of an N-dimensional system decomposes into cells of volume hbar^N, and entropy is proportional to the phase space volume using this volume as the unit. If hbar becomes gigantic, as it would in the case of dark gravitation (hbar = GM_1M_2/v_0, with v_0/c ∼ 2^(-11) for inner planetary Bohr orbits), this means that black hole entropy is extremely small. Black is dark;-) as I realized a few years ago, and it would be interesting to consider the consequences.
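To see the size of the effect, here is a minimal Python sketch using the standard Bekenstein-Hawking formula S/k_B = 4*pi*G*M^2/(hbar*c) (the dark value of hbar below is purely illustrative):

import math

G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8  # SI units
M_sun = 1.989e30                             # solar mass, kg

def bh_entropy_over_k(M, hbar_eff):
    # Bekenstein-Hawking entropy in units of k_B, with an effective hbar
    return 4 * math.pi * G * M**2 / (hbar_eff * c)

print(f"S/k_B, ordinary hbar: {bh_entropy_over_k(M_sun, hbar):.2e}")            # ~1e77
print(f"S/k_B, hbar -> 1e40 * hbar: {bh_entropy_over_k(M_sun, 1e40 * hbar):.2e}")
# The 1/hbar proportionality makes a 'dark' black hole almost entropy-free.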
2. Hierarchy of Planck lengths
It deserves to be noticed that the rough order of magnitude estimate for the gravitational Planck constant of the Sun can be written as hbar_gr = 4xGM^2. This gives for the Planck length the expression
L_P = (G*hbar_gr)^(1/2) = x^(1/2)*2GM .
For x = 1 the Planck length would be just the Schwarzschild radius. This makes sense since these two lengths play rather similar roles. Quite generally, one would have a hierarchy of Planck lengths.
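A symbolic check of this identity, in units with c = 1 (a sketch; sympy is used only for the algebra):

import sympy as sp

G, M, x = sp.symbols("G M x", positive=True)
hbar_gr = 4 * x * G * M**2        # gravitational Planck constant, as above
L_P = sp.sqrt(G * hbar_gr)        # generalized Planck length
print(sp.simplify(L_P))           # 2*sqrt(x)*G*M, i.e. x^(1/2) * 2GM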
3. Dark flow
The second comment is related to the earlier posting of Lubos about the observed dark flow, in length scales larger than the horizon size, towards an attractor outside the horizon. The presence of the attractor outside the visible universe conforms with the notion of many-sheeted space-time, which also predicts a many-sheeted cosmology.
Many-sheeted cosmology means a hierarchy of space-time sheets obeying their own Robertson-Walker type cosmologies: those with varying p-adic length scale and those labelled by various values of Planck constant at the pages of the book-like structure obtained by gluing together singular coverings and factor spaces of the 8-D imbedding space (roughly). Particles at different pages are dark relative to each other in the sense that there are no local interaction vertices: classical interactions and those by exchanges of say photons are possible. Each sheet in the many-sheeted cosmology has a different horizon size.
The attractor would correspond to a different value of Planck constant and have a larger horizon size than our sheet. Dark energy would be dark matter, and the phase transitions increasing Planck constant would induce phases of accelerated expansion. In an average sense these periods would give ordinary cosmology without accelerated expansion.
4. How to avoid heat death?
The third comment relates to the dark flow and the implications of the hierarchy of Planck constants for the future prospects of intelligent life. Heat death is believed by standard physicists to be waiting for all forms of life. We would live in the silliest possible Universe. I cannot believe this. I am ready to admit that some of our theories about the Universe are really silly, but the entire Universe? No!
The hierarchy of Planck constants would allow life to avoid heat death. For instance, if the rate for the reduction of temperature is proportional to 1/hbar (as looks natural), then there are always infinitely many hierarchy levels for which the temperature remains above any given value, since the temperature at these pages is reduced so slowly.
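As an illustration only (the exponential cooling law below is an assumption suggested by the 1/hbar rate, with hbar(k) = 2^k hbar_0), for any fixed time and temperature threshold one can find the hierarchy level above which the temperature has not yet dropped below the threshold:

import math

T0, tau = 1.0, 1.0   # initial temperature and base cooling time, arbitrary units

def temperature(t, k):
    # assumed cooling law: rate ~ 1/hbar, with hbar(k) = 2**k * hbar_0
    return T0 * math.exp(-t / (2**k * tau))

t, threshold = 100.0, 0.9
k = 0
while temperature(t, k) < threshold:
    k += 1
print(f"Levels k >= {k} are still above T = {threshold} at t = {t}")  # k >= 10

Every level above this one cools even more slowly, so infinitely many levels stay above the threshold.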
Life can escape to the pages of the Big Book labelled by larger values of Planck constant without breaking the second law, since the scaling of the size of the system by hbar increases the phase space volume and keeps entropy constant. Evolution by quantum leaps increasing hbar, which increase the time scale of planned action and long term memory, is another manner of saying this.
The observed dark flow might be seen as a direct support for this more optimistic view about Life and the Universe and Everything;-).
Tuesday, September 23, 2008
Flyby anomaly as a relativistic transverse Doppler effect?
Half a year ago I discussed a model for the flyby anomaly based on the hypothesis that a dark matter ring around the orbit of Earth causes the effect. The model reproduced the formula deduced for the change of the velocity of the spacecraft at a qualitative level, and contained a single free parameter: essentially the linear density of the dark matter at the flux tube.
From Lubos I learned about a new twist in the story of the flyby anomaly. On September 12, 2007, Jean-Paul Mbelek proposed an explanation of the flyby anomaly as a relativistic transverse Doppler effect. The model predicts also the functional dependence of the magnitude of the effect on the kinematic parameters, and the prediction is consistent with the empirical findings in the example considered. Therefore the story of the flyby anomaly might be finished, and dark matter at the orbit of Earth could bring in only an additional effect. It is probably too much to hope that this kind of effect would be large enough to detect, if present.
For background see the chapter TGD and Astrophysics.
Monday, September 22, 2008
Tritium beta decay anomaly and variations in the rates of radioactive processes
1. Fake 3He option
ΔM = M(3He) − M(3He_f)
in the approximation m_ν = 0.
2. Fake 3H option
Monday, September 15, 2008
Zero energy ontology, self hierarchy, and the notion of time
In the previous posting I discussed the most recent view about zero energy ontology and the p-adicization program. One manner to test the internal consistency of this framework is to formulate the basic notions and problems of the TGD inspired quantum theory of consciousness and quantum biology in terms of zero energy ontology. I have discussed these topics already earlier, but the more detailed understanding of the role of causal diamonds (CDs) brings many new aspects to the discussion.
In consciousness theory the basic challenges are to understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should also explain why human sensory experience is about a rather narrow time interval of about .1 seconds and why memories are about the interior of a much larger CD with a time scale of the order of a lifetime. One should also have a vision about how the evolution of consciousness takes place: how quantum leaps leading to an expansion of consciousness occur.
In the following my intention is to demonstrate that TGD inspired theory of consciousness and quantum TGD proper indeed seem to be in tune and that this process of comparison helps considerably in the attempt to develop the TGD based ontology at the level of details.
1. Causal diamonds as correlates for selves
Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating into a self, and self hierarchy with sub-selves experienced as mental images, are the basic notions of the TGD inspired quantum theory of consciousness. In the most ambitious program self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps.
It is natural to interpret CD:s as correlates of selves. CDs can be interpreted in two manners: as subsets of the generalized imbedding space or as sectors of the world of classical worlds (WCW). Accordingly, selves correspond to CD:s of the generalized imbedding space or sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Sub-selves correspond to sub-CD:s geometrically. The contents of consciousness of self is about the interior of the corresponding CD at the level of imbedding space. For sub-selves the wave function for the position of tip of CD brings in the delocalization of sub-WCW.
The fractal hierarchy of CDs within CDs defines the counterpart for the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T_0 suggests an interpretation for the fact that we experience octaves as equivalent in music experience.
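A trivial sketch of this factor-of-two law (arbitrary units; the frequency is taken as 1/T):

T0 = 1.0  # base time scale, arbitrary units
for k in range(5):
    print(f"k={k}: T = {2**k * T0}, frequency ratio to k=0 = 1/{2**k}")
# Each step in k halves the frequency: exactly one musical octave down.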
2. Why is sensory experience about such a short time interval?
CD picture implies automatically the 4-D character of conscious experience, and memories form part of conscious experience even at the elementary particle level: in fact, the secondary p-adic time scale of the electron is T = .1 seconds, defining a fundamental time scale in living matter. The problem is to understand why the sensory experience is about a short interval of geometric time rather than about the entire personal CD with temporal size of the order of a lifetime. The obvious explanation would be that sensory input corresponds to sub-selves (mental images) which correspond to CD:s with T(127) ≈ .1 s (electrons or their Cooper pairs) at the upper light-like boundary of the CD assignable to the self. This requires a strong asymmetry between the upper and lower light-like boundaries of CD:s.
1. The only reasonable manner to explain the situation seems to be that the addition of CD:s within CD:s in the state construction must always glue them to the upper light-like boundary of the CD along a light-like radial ray from the tip of the past directed light-cone. This conforms with the classical picture according to which classical sensory data arrives from the geometric past with a velocity which is at most the light velocity.
2. One must also explain the rare but real occurrence of phase conjugate signals understandable as negative energy signals propagating towards geometric past. The conditions making possible negative energy signals are achieved when the sub-CD is glued to both the past and future directed light-cones at the space-like edge of CD along light-like rays emerging from the edge. This exceptional case gives negative energy signals traveling to the geometric past. The above mentioned basic control mechanism of biology would represent a particular instance of this situation. Negative energy signals as a basic mechanism of intentional action would explain why living matter seems to be so special.
3. Geometric memories would correspond to the lower boundaries of CDs and would not in general be sharp because only the sub-CDs glued to both the upper and lower light-cone boundaries would be present. A temporal sequence of mental images, say the sequence of digits of a phone number, could correspond to a sequence of sub-CDs glued to the upper light-cone boundary.
4. Sharing of mental images corresponds to a fusion of sub-selves/mental images to a single sub-self by quantum entanglement: the space-time correlate for this could be flux tubes connecting the space-time sheets associated with the sub-selves, represented also by space-time sheets inside their CDs. It could be that these "episodal" memories correspond to CDs at the upper light-cone boundary of the CD.
On the basis of these arguments it seems that the basic conceptual framework of the TGD inspired theory of consciousness can be realized in zero energy ontology. Interesting questions relate to just how dynamical selves are.
1. Is a self doomed to live inside the same sub-WCW eternally as a lonely god? This question has already been answered: there are interactions between the sub-CDs of a given CD, and one can think of selves as quantum superpositions of states in CDs, with the wave function having as its argument the tips of the CD, or rather only the second tip, since T is assumed to be quantized.
2. Is there a largest CD in the personal CD hierarchy of a self in an absolute sense? Or is the largest CD present only in the sense that the contribution to the contents of consciousness coming from very large CDs is negligible? Long time scales T correspond to low frequencies, and thermal noise might indeed mask these contributions very effectively. Here however the hierarchy of Planck constants and the generalization of the imbedding space would come to the rescue by allowing dark EEG photons to have energies above the thermal energy.
3. Can selves evolve in the sense that the size of the CD increases in quantum leaps so that the corresponding time scale T = 2^k T_0 of memory and planned action increases? Geometrically this kind of leap would mean that the CD becomes a sub-CD of a larger CD either at the level of conscious experience or in an absolute sense. This leap can occur in two senses: as an increase of the largest p-adic time scale in the personal hierarchy of space-time sheets or as an increase of the largest value of the Planck constant in the personal dark matter hierarchy. At the level of the individual this would mean the emergence of increasingly low frequencies in the generalized EEG and of levels of the dark matter hierarchy with a large value of Planck constant.
4. In a 2-D illustration the leap leading to a higher level of the self hierarchy would simply mean the continuation of the CD to the right or left in the 2-D visualization of the CD. Since the preferred M2 is contained in the tangent space of space-time surfaces, and since the preferred M2 plays a key role in the dark matter hierarchy too, one must ask whether the 2-D illustration might have some deeper truth in it.
3. New view about arrow of time
Perhaps the most fundamental problem related to the notion of time concerns the relationship between experienced time and geometric time. The two notions are definitely different: think only of the irreversibility of experienced time versus the reversibility of geometric time, and of the absence of the future in experienced time. Also the deterministic character of dynamics in geometric time is in conflict with the notion of free will supported by direct experience.
In the standard materialistic ontology experienced time and geometric time are identified. In the naivest picture the flow of time is interpreted in terms of the motion of a 3-D time=constant surface of space-time towards the geometric future, without any explanation for why this kind of motion would occur. This identification is plagued by several difficulties. In special relativity the difficulties relate to the impossibility of defining the notion of simultaneity in a unique manner, and the only possible manner to save this notion seems to be the replacement of the time=constant 3-surface with the past directed light-cone assignable to the world-line of the observer. In general relativity additional difficulties are caused by general coordinate invariance unless one generalizes the picture of special relativity: problems are however caused by the fact that past light-cones make sense only locally. In quantum physics, quantum measurement theory leads to a paradoxical situation since the observed localization of the state function reduction to a finite space-time volume is in conflict with the determinism of the Schrödinger equation.
1. Selves correspond to CDs and their own sub-WCWs. These sub-WCWs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work. Neither can the experience about the flow of time correspond to quantum leaps increasing the size of the largest CD contributing to the conscious experience of the self.
2. The only plausible interpretation is based on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If quantum jump corresponds to a shift of quantum superposition of space-time sheets towards geometric past in the first approximation (as quantum classical correspondence suggests), one can indeed understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a kind of temporal variant of train illusion. Space-time as 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.
Why should this shifting always take place in the direction of the geometric past of the imbedding space? What seems clear is that the asymmetric construction of zero energy states should correlate with the preferred direction. If the question is about probabilities, the basic question would be why the probabilities for shifts in the direction of the geometric past are higher. In the following some alternative attempts to answer this question are discussed.
1. Cognition and time relate to each other very closely and the required fusion of real physics with various p-adic physics of cognition and intentionality could also have something to do with the asymmetry. Indeed, in the p-adic sectors the transcendental values of p-adic light-cone proper time coordinate correspond to literally infinite values of the real valued light-cone proper time, and one can say that most points of p-adic space-time sheets serving as correlates of thoughts and intentions reside always in the infinite geometric future in the real sense. Therefore cognition and intentionality would break the symmetry between positive and negative energies and geometric past and future, and the breaking of arrow of geometric time could be seen as being induced by intentional action and also due to the basic aspects of cognitive experience.
2. Zero energy ontology also suggests a possible reason for the asymmetry. Standard quantum mechanics encourages the identification of the space of negative energy states as the dual for the space of positive energy states. There are two kinds of duals. The Hilbert space dual is identified as the space of continuous linear functionals from the Hilbert space to the coefficient field and is isometrically anti-isomorphic with the Hilbert space. This justifies the bra-ket notation. In the case of a vector space the relevant notion is the algebraic dual. The algebraic dual can be identified as an infinite direct product of the coefficient field identified as a 1-dimensional vector space. The direct product is defined as the set of functions from an infinite index set I to the disjoint union of an infinite number of copies of the coefficient field indexed by I. An infinite-dimensional vector space corresponds to the infinite direct sum consisting of functions which are non-vanishing for a finite number of indices only. Hence the vector space dual in the infinite-dimensional case contains many more states than the vector space and does not have a countable basis.
If negative energy states correspond to a subspace of the vector space dual containing the Hilbert space dual, the number of negative energy states is larger than the number of positive energy states. This asymmetry could correspond to a better measurement resolution at the upper light-cone boundary, so that the state space at the lower light-cone boundary would be included, via an inclusion of HFFs, in that associated with the upper light-cone boundary. Geometrically this would mean the possibility to glue to the upper light-cone boundary CDs which can be smaller than those associated with the lower one.
3. The most convincing candidate for an answer comes from consciousness theory. One must also understand why the contents of sensory experience are concentrated around a narrow time interval whereas the time scales of memories and anticipation are much longer. The proposed mechanism is that the resolution of conscious experience is higher at the upper boundary of the CD. Since zero energy states correspond to light-like 3-surfaces, this could be a result of self-organization rather than a fundamental physical law.
1. The key assumption is that CDs have CDs inside CDs and that the vertices of generalized Feynman diagrams are contained within sub-CDs. It is not assumed that the sub-CDs are glued to the upper boundary of the CD, since the arrow of time results from self-organization when the distribution of sub-CDs concentrates around the upper boundary of the CD. A category theoretical formulation for generalized Feynman diagrammatics based on this picture has been developed.
2. CDs define the perceptive field for self. Selves are curious about the space-time sheets outside their perceptive field in the geometric future (relative notion) of the imbedding space and perform quantum jumps tending to shift the superposition of the space-time sheets to the direction of geometric past (past defined as the direction of shift!). This creates the illusion that there is a time=snapshot front of consciousness moving to geometric future in fixed background space-time as an analog of train illusion.
3. The fact that news comes from the upper boundary of the CD implies that the self concentrates its attention on this region and improves the resolution of sensory experience and quantum measurement there. The sub-CDs generated in this manner correspond to mental images with contents about this region. As a consequence, the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.
4. This mechanism in principle allows the arrow of geometric time to vary and depend on the p-adic length scale and the level of the dark matter hierarchy. The occurrence of phase transitions forcing the arrow of geometric time to be the same everywhere is however plausible, for the reason that the lower and upper boundaries of a given CD must possess the same arrow of geometric time.
For details see the chapter TGD as a Generalized Number Theory I: p-Adicization Program.
Sunday, September 14, 2008
The most recent vision about zero energy ontology and p-adicization
The generalization of the number concept obtained by fusing reals and p-adics along rationals and common algebraics is the basic philosophy behind p-adicization. This however requires that it is possible to speak about rational points of the imbedding space. The basic objection against the notion of rational points common to the real and the various p-adic variants of the imbedding space is the necessity to fix some special coordinates, in turn implying the loss of manifest general coordinate invariance. The isometries of the imbedding space could save the situation provided one can identify some special coordinate system in which the isometry group reduces to its discrete subgroup. The loss of the full isometry group could be compensated by assuming that WCW is a union over sub-WCWs obtained by applying isometries on a basic sub-WCW with a discrete subgroup of isometries.
The combination of zero energy ontology, realized in terms of a hierarchy of causal diamonds, with the hierarchy of Planck constants providing a description of dark matter and leading to a generalization of the notion of imbedding space, suggests that it is possible to realize this dream. The article TGD: What Might be the First Principles? provides a brief summary of the recent state of quantum TGD, helping to understand the big picture behind the following considerations.
1. Zero energy ontology briefly
1. The basic construct in the zero energy ontology is the space CD×CP2, where the causal diamond CD is defined as an intersection of future and past directed light-cones with a time-like separation between their tips, regarded as points of the underlying universal Minkowski space M4. In zero energy ontology physical states correspond to pairs of positive and negative energy states located at the boundaries of the future and past directed light-cones of a particular CD. CDs form a fractal hierarchy and one can glue smaller CDs within a larger CD along the upper light-cone boundary along a radial light-like ray: this construction recipe makes it possible to understand the asymmetry between positive and negative energies, why the arrow of experienced time corresponds to the arrow of geometric time, and also why the contents of sensory experience are located in so narrow an interval of geometric time. One can imagine evolution to occur as quantum leaps in which the size of the largest CD in the hierarchy of personal CDs increases in such a manner that it becomes a sub-CD of a larger CD. The p-adic length scale hypothesis follows if the values of the temporal distance T between the tips of the CD come in powers 2^n. All conserved quantum numbers for zero energy states have vanishing net values. The interpretation of zero energy states in the framework of positive energy ontology is as physical events, say scattering events, with the positive and negative energy parts of the state interpreted as the initial and final states of the event.
2. In the realization of the hierarchy of Planck constants CD×CP2 is replaced with a Cartesian product of book-like structures formed by almost-copies of CDs and CP2s defined by singular coverings and factor spaces of CD and CP2, with singularities corresponding to the intersection M2∩CD and the homologically trivial geodesic sphere S2 of CP2 for which the induced Kähler form vanishes. The coverings and factor spaces of CDs are glued together along the common M2∩CD. The coverings and factor spaces of CP2 are glued together along the common homologically non-trivial geodesic sphere S2. The choice of a preferred M2, as a subspace of the tangent space of X4 at all its points with the interpretation as the space of non-physical polarizations, brings M2 into the theory also in a different manner. S2 in turn defines a subspace of the much larger space of vacuum extremals as surfaces inside M4×S2.
3. Configuration space (the world of classical worlds, WCW) decomposes into a union of sub-WCWs corresponding to different choices of M2 and S2 and also to different choices of the quantization axes of spin, energy, and color isospin and hyper-charge for each choice of this kind. This means a breaking down of the isometries to a subgroup. This can be compensated by the fact that the union can be taken over the different choices of this subgroup.
4. p-Adicization requires a further breakdown to discrete subgroups of the resulting sub-groups of the isometry groups but again a union over sub-WCW:s corresponding to different choices of the discrete subgroup can be assumed. Discretization relates also naturally to the notion of number theoretic braid.
Consider now the critical questions.
1. Very naively one could think that center of mass wave functions in the union of sectors could give rise to representations of the Poincare group. This does not conform with zero energy ontology, where energy-momentum should be assignable to, say, the positive energy part of the state, and where these degrees of freedom are expected to be pure gauge degrees of freedom. If zero energy ontology makes sense, then the states in the union over the various copies corresponding to different choices of M2 and S2 would give rise to wave functions having no dynamical meaning. This would bring in nothing new, so that one could fix the gauge by choosing a preferred M2 and S2 without losing anything. This picture is favored by the interpretation of M2 as the space of longitudinal polarizations.
2. The crucial question is whether it is really possible to speak about zero energy states for a given sector defined by the generalized imbedding space with fixed M2 and S2. Classically this is possible and conserved quantities are well defined. In the quantal situation the presence of the light-cone boundaries breaks full Poincare invariance although the infinitesimal version of this invariance is preserved. Note that the basic dynamical objects are the 3-D light-like "legs" of the generalized Feynman diagrams.
2. Definition of energy in zero energy ontology
Can one then define the notion of energy for positive and negative energy parts of the state? There are two alternative approaches depending on whether one allows or does not allow wave-functions for the positions of tips of light-cones.
Consider first the naive option in which four-momenta are assigned to the wave functions associated with the tips of CDs.
1. The condition that the tips are at a time-like distance does not allow a separation into a product but only the following kind of wave function
Ψ = exp(ip·m) Θ(m^2) Θ(m^0) × Φ(p) , m = m_+ − m_- .
Here m_+ and m_- denote the positions of the tips of the two light-cones, Θ denotes the step function, and Φ denotes the configuration space spinor field in the internal degrees of freedom of the 3-surface. One can also introduce the decomposition into particles by introducing sub-CDs glued to the upper light-cone boundary of the CD.
2. The first criticism is that one has only a local eigenstate of the 4-momentum operators p± = (h/2π)∇/i, valid everywhere except at the boundaries and at the tips of the CD, with exact translational invariance broken by the two step functions, which have a natural classical interpretation. The second criticism is that the quantization of the temporal distance between the tips to T = 2^k T_0 is in conflict with translational invariance and reduces it to a discrete scaling invariance.
The less naive approach relies on the superconformal structures of quantum TGD, assumes a fixed value of T, and therefore allows the crucial quantization condition T = 2^k T_0.
1. Since the light-like 3-surfaces assignable to the incoming and outgoing legs of the generalized Feynman diagrams are the basic objects, one can hope to have enough translational invariance to define the notion of energy. If translations are restricted to time-like translations acting in the direction of the future (past), then one has a local translation invariance of the dynamics for the classical field equations inside δM4± as a kind of semigroup. Also the M4 translations leading to the interior of X4 from the light-like 2-surfaces act as translations. Classically these restrictions correspond to non-tachyonic momenta defining the allowed directions of translations realizable as particle motions. These two kinds of translations have been assigned to super-canonical conformal symmetries at δM4±×CP2 and super Kac-Moody type conformal symmetries at light-like 3-surfaces. The Equivalence Principle in the TGD framework states that these two conformal symmetries define a structure completely analogous to a coset representation of conformal algebras, so that the four-momenta associated with the two representations are identical.
2. The condition selecting the preferred extremals of Kähler action is induced by a global selection of M2 as a plane belonging to the tangent space of X4 at all its points. The M4 translations of X4 as a whole in general respect the form of this condition in the interior. Furthermore, if the M4 translations are restricted to M2, also the condition itself, rather than only its general form, is respected. This observation, the earlier experience with the p-adic mass calculations, and also the treatment of quarks and gluons in QCD encourage one to consider the possibility that translational invariance should be restricted to M2 translations, so that mass squared, longitudinal momentum, and transversal mass squared would be well defined quantum numbers. This would be enough to realize zero energy ontology. Encouragingly, M2 appears also in the generalization of the causal diamond to a book-like structure forced by the realization of the hierarchy of Planck constants at the level of the imbedding space.
3. That the cm degrees of freedom for the CD would be gauge-like degrees of freedom sounds strange. The paradoxical feeling disappears as one realizes that this is not the case for sub-CDs, which indeed can have non-trivial correlation functions, with either the upper or the lower tip of the CD playing a role analogous to that of an argument of an n-point function in the QFT description. One can also say that the largest CD in the hierarchy defines the infrared cutoff.
3. p-Adic variants of the imbedding space
Consider now the construction of p-adic variants of the imbedding space.
1. Rational values of p-adic coordinates are non-negative, so that the light-cone proper time a4,+ = √(t^2−z^2−x^2−y^2) is the unique Lorentz invariant choice for the p-adic time coordinate near the lower tip of the CD. For the upper tip the identification would be a4,− = √((t−T)^2−z^2−x^2−y^2). In the p-adic context the simultaneous existence of both square roots would pose additional conditions on T. For 2-adic numbers T = 2^n T_0, n ≥ 0 (or more generally T = Σ_{k ≥ n_0} b_k 2^k) would allow these conditions to be satisfied, and this would be one additional reason for T = 2^n T_0, implying the p-adic length scale hypothesis. The remaining coordinates of the CD are naturally hyperbolic cosines and sines of the hyperbolic angle η±,4 and the cosines and sines of the spherical coordinates θ and φ.
2. The existence of the preferred plane M2 of non-physical polarizations suggests that the 2-D light-cone proper times a2,+ = √(t^2−z^2) and a2,− = √((t−T)^2−z^2) can also be considered. The remaining coordinates would naturally be η±,2 and the cylindrical coordinates (ρ, φ).
3. The transcendental values of a4 and a2 are literally infinite as real numbers and could be visualized as points in the infinitely distant geometric future, so that the arrow of time might be said to emerge number theoretically. For the M2 option the p-adic transcendental values of ρ are infinite as real numbers, so that also spatial infinity could be said to emerge p-adically.
4. The selection of the preferred quantization axes of energy and angular momentum, unique apart from a Lorentz transformation of M2, would have a purely number theoretic meaning in both cases. One must allow a union over sub-WCWs labeled by points of SO(1,1). This suggests a deep connection between number theory, quantum theory, quantum measurement theory, and even a quantum theory of mathematical consciousness.
5. In the case of CP2 there are three real coordinate patches involved. The compactness of CP2 allows one to use cosines and sines of the preferred angle variable for a given coordinate patch.
ξ1= tan(u)× cos(Θ/2)× exp(i(Ψ+Φ)/2) ,
ξ2= tan(u)× sin(Θ/2)× exp(i(Ψ-Φ)/2).
The ranges of the variables u, Θ, Φ, Ψ are [0, π/2], [0, π], [0, 4π], [0, 2π] respectively. Note that u naturally has only positive values in the allowed range. S2 corresponds to the values Φ = Ψ = 0 of the angle coordinates.
6. The rational values of the (hyperbolic) cosine and sine correspond to Pythagorean triangles having sides of integer length and thus satisfying m^2 = n^2 + r^2 (m^2 = n^2 − r^2 in the hyperbolic case). These conditions are equivalent and allow the well-known explicit solution (a sketch of it is given after this list). One can construct a p-adic completion for the set of Pythagorean triangles by allowing p-adic integers which are infinite as real integers as solutions of the conditions m^2 = r^2 ± s^2. These angles correspond to genuinely p-adic directions having no real counterpart. Hence one obtains a p-adic continuum also in the angle degrees of freedom. Algebraic extensions of the p-adic numbers bringing in cosines and sines of the angles π/n lead to a hierarchy of increasingly refined algebraic extensions of the generalized imbedding space. Since the different sectors of WCW directly correspond to correlates of selves, this means a direct correlation with the evolution of mathematical consciousness. Trigonometric identities allow one to construct points which in the real context correspond to sums and differences of angles.
7. Negative rational values of the cosines and sines correspond as p-adic integers to infinite real numbers, and it seems that one must use several coordinate patches obtained as copies of the octant (x ≥ 0, y ≥ 0, z ≥ 0). An analogous picture applies in CP2 degrees of freedom.
8. The expression of the metric tensor and spinor connection of the imbedding space in the proposed coordinates makes sense as p-adic numbers in the algebraic extension considered. The induction of the metric, spinor connection, and curvature makes sense provided that the gradients of coordinates with respect to the internal coordinates of the space-time surface belong to the extension. The most natural choice of the space-time coordinates is as a subset of imbedding space coordinates in a given coordinate patch. If the remaining imbedding space coordinates can be chosen to be rational functions of these preferred coordinates, with coefficients in the algebraic extension of p-adic numbers considered, for the preferred extremals of Kähler action, then also the gradients satisfy this condition. This is a highly non-trivial condition on the extremals, and if it works it might fix completely the space of exact solutions of field equations. Since space-time surfaces are also conjectured to be hyper-quaternionic, this condition might relate to the simultaneous hyper-quaternionicity and Kähler extremal property. Note also that this picture would provide a partial explanation for the decomposition of the imbedding space into sectors, dictated also by quantum measurement theory and the hierarchy of Planck constants.
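Since item 6 of the list above leans on the well-known explicit solution of m^2 = n^2 + r^2, the following minimal sketch generates it via the classical Euclid parametrization; the function name and the bound are illustrative choices only.

```python
from math import gcd

def pythagorean_triples(limit):
    """Primitive triples (m, n, r) with m^2 = n^2 + r^2 and m <= limit,
    from Euclid's parametrization m = p^2 + q^2, n = p^2 - q^2, r = 2*p*q."""
    triples = []
    p = 2
    while p * p + 1 <= limit:
        for q in range(1, p):
            if (p - q) % 2 == 1 and gcd(p, q) == 1:   # primitivity conditions
                m, n, r = p * p + q * q, p * p - q * q, 2 * p * q
                if m <= limit:
                    triples.append((m, n, r))
        p += 1
    return triples

print(pythagorean_triples(30))
# [(5, 3, 4), (13, 5, 12), (17, 15, 8), (25, 7, 24), (29, 21, 20)]
# Each triple gives a rational point (cos, sin) = (n/m, r/m) on the unit circle.
```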
4. p-Adic variants for the sectors of WCW
One can also wonder about the most general definition of the p-adic variants of the sectors of the world of classical worlds.
1. The restriction to surfaces expressible in terms of rational functions with coefficients which are rational numbers or belong to an algebraic extension of rationals means that the world of classical worlds can be regarded as a discrete set, and there would be no difference between the real and p-adic worlds of classical worlds: a rather unexpected conclusion.
2. One can of course ask whether one should perform a completion also for the WCWs. In the real context this would mean a completion of the rational-number-valued coefficients of a rational function to arbitrary real coefficients, and perhaps also the allowance of Taylor and Laurent series as limits of rational functions. In the p-adic case the integers defining the rationals could be allowed to become p-adic transcendentals, infinite as real numbers. Also now Laurent series could be considered.
3. In this picture there would be a close analogy between the structure of the generalized imbedding space and WCW. Different WCWs could be said to intersect in the space formed by rational functions with coefficients in an algebraic extension of rationals, just as the real and p-adic variants of the imbedding space intersect along rational points. In the spirit of algebraic completion one might hope that the expressions for the various physical quantities, say the value of Kähler action, Kähler function, or at least the exponent of Kähler function (at least for the maxima of Kähler function), could be defined by analytic continuation of their values from this sub-WCW to the various number fields. The matrix elements for p-adic-to-real phase transitions of zero energy states interpreted as intentional actions could be calculated in the intersection of the real and p-adic WCWs by interpreting everything as real.
Wednesday, September 03, 2008
Dark nuclear strings as analogs of DNA-, RNA- and amino-acid sequences and baryonic realization of genetic code
In the earlier posting I considered the possibility that the evolution of the genome might not be random but be controlled by the magnetic body, and that various DNA sequences might be tested in the virtual world made possible by the virtual counterparts of bio-molecules realized in terms of the homeopathic mechanism as it is understood in the TGD framework. The minimal option is that virtual DNA sequences have flux tube connections to the lipids of the cell membrane so that their quality as hardware of tqc can be tested, but that there is no virtual variant of the transcription and translation machinery. One can however ask whether also virtual amino-acids could be present and whether this could provide deeper insights into the genetic code.
1. Water molecule clusters are not the only candidates for the representatives of linear molecules. An alternative candidate for the virtual variants of linear bio-molecules is dark nuclei consisting of strings of scaled-up dark variants of neutral baryons bound together by color bonds having the size scale of an atom, which I have introduced in the model of cold fusion and plasma electrolysis, both taking place in a water environment. Colored flux tubes defining braidings would generalize this picture by allowing transversal color magnetic flux tube connections between these strings.
2. Baryons consist of 3 quarks just as DNA codons consist of three nucleotides. Hence an attractive idea is that codons correspond to baryons obtained as open strings with quarks connected by two color flux tubes. The minimal option is that the flux tubes are neutral. One can also argue that the minimization of Coulomb energy allows only neutral dark baryons. The question is whether the neutral dark baryons constructed as strings of 3 quarks using neutral color flux tubes could realize the 64 codons, and whether the 20 amino-acids could be identified as equivalence classes of some equivalence relation between the 64 fundamental codons in a natural manner.
The following model indeed reproduces the genetic code directly from a model of dark neutral baryons as strings of 3 quarks connected by color flux tubes.
1. Dark nuclear baryons are considered as a fundamental realization of DNA codons and constructed as open strings of 3 dark quarks connected by two colored neutral flux tubes. DNA sequences would in turn correspond to sequences of dark baryons. It is assumed that the net charge of the dark baryons vanishes so that Coulomb repulsion is minimized.
2. One can classify the states of the open 3-quark string by the total charges and spins associated with the 3 quarks and with the two color bonds. The total em charges of the quarks vary in the range ZB ∈ {2,1,0,−1} and the total color bond charges in the range Zb ∈ {2,1,0,−1,−2}. Only neutral states are allowed. The total quark spin projection varies in the range JB = 3/2, 1/2, −1/2, −3/2 and the total flux tube spin projection in the range Jb = 2, 1, 0, −1, −2. If one takes, for a given total charge assumed to be vanishing, one representative from each class (JB, Jb), one obtains 4×5 = 20 states, which is the number of amino-acids. Thus the genetic code might be realized at the level of baryons by mapping the neutral states with a given spin projection to a single representative state with the same spin projection.
3. The states of dark baryons in quark degrees of freedom can be constructed as representations of the rotation group and the strong isospin group. The tensor product 2⊗2⊗2 is involved in both cases. Physically it is known that only the representations with isospin 3/2 and spin 3/2 (Δ resonance) and isospin 1/2 and spin 1/2 (proton and neutron) are realized. The spin-statistics problem forced the introduction of quark color (this means that one cannot construct the codons as sequences of 3 nucleons!).
4. The second nucleon spin doublet has the wrong parity. Using only 4⊕2 for the rotation group would give the degeneracies (1,2,2,1). One however requires the representations 4⊕2⊕2 rather than only 4⊕2 to get 8 states with a given charge. One should transform the wrong parity doublet to a positive parity doublet somehow. Since the open string geometry breaks the rotational symmetry to a subgroup of rotations acting along the direction of the string, the attractive possibility is to add a stringy excitation with angular momentum projection L = −1 to the wrong parity doublet so that the parity comes out correctly. This would give the degeneracies (1,2,3,2).
5. In flux tube degrees of freedom the situation is analogous to the construction of mesons from quarks and antiquarks, and one obtains the pion with spin 0 and the ρ meson with spin 1. States of zero charge correspond to the tensor product 2⊗2 = 3⊕1 for the rotation group. Drop the singlet and take only the analog of the neutral ρ meson. The tensor product 3⊗3 = 5⊕3⊕1 gives 8+1 states, and leaving only the spin 2 and spin 1 states gives 8 states. The degeneracies of states with a given spin projection for 5⊕3 are (1,2,2,2,1). The genetic code means a projection of the states of 5⊕3 to those of 5 with the same spin projection.
6. The genetic code maps the states of (4⊕2⊕2)⊗(5⊕3) to the states of 4×5. The most natural map takes the states with a given spin to the state with the same spin so that the code is unique. This would give the degeneracies D(k) as products of the numbers DB ∈ {1,2,3,2} and Db ∈ {1,2,2,2,1}. The numbers N(k) of amino-acids coded by D(k) codons would be
[N(1), N(2), N(3), N(4), N(6)] = [2, 7, 2, 6, 3] .
The correct numbers for the vertebrate nuclear code are (N(1), N(2), N(3), N(4), N(6)) = (2,9,1,5,3). Some kind of symmetry breaking must take place and should relate to the emergence of stopping codons. If one codon in the second 3-plet becomes a stopping codon, the 3-plet becomes a doublet. If 2 codons in a 4-plet become stopping codons, it also becomes a doublet, and one obtains the correct result (2,9,1,5,3)! (The counting is spelled out in the sketch below.)
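The counting behind the predicted numbers uses nothing but the degeneracies stated in items 5 and 6, as the following minimal sketch verifies (the variable names are illustrative).

```python
from collections import Counter

D_B = [1, 2, 3, 2]       # quark-sector degeneracies per spin projection (4+2+2 with the stringy fix)
D_b = [1, 2, 2, 2, 1]    # flux-tube-sector degeneracies per spin projection (5+3)

# D(k): each of the 4x5 = 20 amino-acids is coded by D_B * D_b codons
counts = Counter(dB * db for dB in D_B for db in D_b)
print(dict(sorted(counts.items())))            # {1: 2, 2: 7, 3: 2, 4: 6, 6: 3}
print(sum(counts.values()))                    # 20 amino-acids
print(sum(k * n for k, n in counts.items()))   # 64 codons
```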
The conclusion is that the genetic code can be understood as a map of stringy baryonic states induced by the projection of all states with the same spin projection to a representative state with the same spin projection. The genetic code would be realized at the level of dark nuclear physics, perhaps also at the level of ordinary nuclear physics, and the biochemical representation would be only one particular higher-level representation of the code.
For details see the chapters Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms" and The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".
20aaf9027832dcfc |
6.5: s-orbitals are Spherically Symmetric
The hydrogen atom wavefunctions, \(\psi (r, \theta , \varphi )\), are called atomic orbitals. An atomic orbital is a function that describes one electron in an atom. The wavefunction with \(n = 1\), \(l = 0\) is called the 1s orbital, and an electron that is described by this function is said to be “in” the 1s orbital, i.e. to have a 1s orbital state. The constraints on \(n\), \(l\), and \(m_l\) that are imposed during the solution of the hydrogen atom Schrödinger equation explain why there is a single 1s orbital, why there are three 2p orbitals, five 3d orbitals, etc. We will see when we consider multi-electron atoms that these constraints explain the features of the Periodic Table. In other words, the Periodic Table is a manifestation of the Schrödinger model and the physical constraints imposed to obtain the solutions to the Schrödinger equation for the hydrogen atom.
Visualizing the variation of an electronic wavefunction with r, \(\theta\), and \(\varphi\) is important because the absolute square of the wavefunction depicts the charge distribution (electron probability density) in an atom or molecule. The charge distribution is central to chemistry because it is related to chemical reactivity. For example, an electron deficient part of one molecule is attracted to an electron rich region of another molecule, and such interactions play a major role in chemical interactions ranging from substitution and addition reactions to protein folding and the interaction of substrates with enzymes.
We can obtain an energy and one or more wavefunctions for every value of \(n\), the principal quantum number, by solving Schrödinger's equation for the hydrogen atom. A knowledge of the wavefunctions, or probability amplitudes \(\psi_n\), allows us to calculate the probability distributions for the electron in any given quantum level. When n = 1, the wave function and the derived probability function are independent of direction and depend only on the distance r between the electron and the nucleus. In Figure \(\PageIndex{1}\), we plot both \(\psi_1\) and \(P_1\) versus \(r\), showing the variation in these functions as the electron is moved further and further from the nucleus in any one direction. (These and all succeeding graphs are plotted in terms of the atomic unit of length, \(a_0 = 0.529 \times 10^{-8}\, cm\).)
Figure \(\PageIndex{1}\): The wave function and probability distribution as functions of \(r\) for the \(n = 1\) level of the H atom. The functions and the radius r are in atomic units in this and succeeding figures.
Two interpretations can again be given to the \(P_1\) curve. An experiment designed to detect the position of the electron with an uncertainty much less than the diameter of the atom itself (using light of short wavelength) will, if repeated a large number of times, result in Figure \(\PageIndex{1}\) for \(P_1\). That is, the electron will be detected close to the nucleus most frequently and the probability of observing it at some distance from the nucleus will decrease rapidly with increasing \(r\). The atom will be ionized in making each of these observations because the energy of the photons with a wavelength much less than \(10^{-8}\; cm\) will be greater than \(K\), the amount of energy required to ionize the hydrogen atom. If light with a wavelength comparable to the diameter of the atom is employed in the experiment, then the electron will not be excited but our knowledge of its position will be correspondingly less precise. In these experiments, in which the electron's energy is not changed, the electron will appear to be "smeared out" and we may interpret \(P_1\) as giving the fraction of the total electronic charge to be found in every small volume element of space. (Recall that the addition of the value of \(P_n\) for every small volume element over all space adds up to unity, i.e., one electron and one electronic charge.)
Visualizing wavefunctions and charge distributions is challenging because it requires examining the behavior of a function of three variables in three-dimensional space. This visualization is made easier by considering the radial and angular parts separately, but plotting the radial and angular parts separately does not reveal the shape of an orbital very well. The shape can be revealed better in a probability density plot. To make such a three-dimensional plot, divide space up into small volume elements, calculate \(\psi ^* \psi \) at the center of each volume element, and then shade, stipple or color that volume element in proportion to the magnitude of \(\psi ^* \psi \).
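As a concrete illustration of this recipe, here is a minimal sketch in Python (atomic units with \(a_0 = 1\); the grid size and the commented-out plotting call are illustrative choices, not part of the text) that evaluates \(\psi ^* \psi\) for the 1s orbital at the center of each volume element in a plane containing the nucleus.

```python
import numpy as np

a0 = 1.0  # Bohr radius in atomic units

def psi_1s(r):
    """Normalized hydrogen 1s wavefunction (atomic units)."""
    return np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

# divide a plane through the nucleus into small volume elements
x, z = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
r = np.sqrt(x**2 + z**2)
density = psi_1s(r)**2        # psi* psi at the centre of each element

# shade each element in proportion to |psi|^2, e.g. with matplotlib:
# import matplotlib.pyplot as plt
# plt.imshow(density, extent=[-5, 5, -5, 5], origin="lower"); plt.show()
```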
We could also represent the distribution of negative charge in the hydrogen atom in the manner used previously for the electron confined to move on a plane (Figure \(\PageIndex{2}\)), by displaying the charge density in a plane by means of a contour map. Imagine a plane through the atom including the nucleus. The density is calculated at every point in this plane. All points having the same value for the electron density in this plane are joined by a contour line (Figure \(\PageIndex{2}\)). Since the electron density depends only on r, the distance from the nucleus, and not on the direction in space, the contours will be circular. A contour map is useful as it indicates the "shape" of the density distribution.
Figure \(\PageIndex{2}\): (a) A contour map of the electron density distribution in a plane containing the nucleus for the \(n = 1\) level of the H atom. The distance between adjacent contours is 1 au. The numbers on the left-hand side on each contour give the electron density in au. The numbers on the right-hand side give the fraction of the total electronic charge which lies within a sphere of that radius. Thus 99% of the single electronic charge of the H atom lies within a sphere of radius 4 au (or diameter = \(4.2 \times 10^{-8}\; cm\)). (b) This is a profile of the contour map along a line through the nucleus. It is, of course, the same as that given previously in Figure \(\PageIndex{1}\) for \(P_1\), but now plotted from the nucleus in both directions.
When the electron is in a definite energy level we shall refer to the \(P_n\) distributions as electron density distributions, since they describe the manner in which the total electronic charge is distributed in space. The electron density is expressed in terms of the number of electronic charges per unit volume of space, \(e^-/V\). The volume V is usually expressed in atomic units of length cubed, and one atomic unit of electron density is then \(e^-/a_0^3\). To give an idea of the order of magnitude of an atomic density unit, 1 au of charge density \(e^-/a_0^3\) = 6.7 electronic charges per cubic Ångstrom. That is, a cube with an edge length of \(0.52917 \times 10^{-8}\; cm\), if uniformly filled with an electronic charge density of 1 au, would contain 6.7 electronic charges.
For every value of the energy \(E_n\) for the hydrogen atom, there is a degeneracy equal to \(n^2\). Therefore, for n = 1, there is but one atomic orbital and one electron density distribution. However, for n = 2, there are four different atomic orbitals and four different electron density distributions, all of which possess the same value for the energy, \(E_2\). Thus for all values of the principal quantum number n there are \(n^2\) different ways in which the electronic charge may be distributed in three-dimensional space and still possess the same value for the energy. For every value of the principal quantum number, one of the possible atomic orbitals is independent of direction and gives a spherical electron density distribution which can be represented by circular contours as has been exemplified above for the case of n = 1. The other atomic orbitals for a given value of n exhibit a directional dependence and predict density distributions which are not spherical but are concentrated in planes or along certain axes. The angular dependence of the atomic orbitals for the hydrogen atom and the shapes of the contours of the corresponding electron density distributions are intimately connected with the angular momentum possessed by the electron.
Methods for separately examining the radial portions of atomic orbitals provide useful information about the distribution of charge density within the orbitals. Graphs of the radial functions, \(R(r)\), for the 1s and 2s orbitals are plotted in Figure \(\PageIndex{3}\). The 1s function in Figure \(\PageIndex{3; left}\) starts with a high positive value at the nucleus and exponentially decays to essentially zero after 5 Bohr radii. The high value at the nucleus may be surprising, but as we shall see later, the probability of finding an electron at the nucleus is vanishingly small.
Figure \(\PageIndex{3}\): Radial function, \(R(r)\), for the 1s and 2s orbitals. For an interactive graph click here.
Next notice how the radial function for the 2s orbital, Figure \(\PageIndex{3; right}\), goes to zero and becomes negative. This behavior reveals the presence of a radial node in the function. A radial node occurs when the radial function equals zero other than at \(r = 0\) or \(r = ∞\). Nodes and limiting behaviors of atomic orbital functions are both useful in identifying which orbital is being described by which wavefunction. For example, all of the s functions have non-zero wavefunction values at \(r = 0\).
Exercise \(\PageIndex{1}\)
Examine the mathematical forms of the radial wavefunctions. What feature in the functions causes some of them to go to zero at the origin while the s functions do not go to zero at the origin?
Exercise \(\PageIndex{2}\)
What mathematical feature of each of the radial functions controls the number of radial nodes?
Exercise \(\PageIndex{3}\): Radial Nodes
At what value of \(r\) does the 2s radial node occur?
Exercise \(\PageIndex{4}\)
Make a table that provides the energy, number of radial nodes, and the number of angular nodes and total number of nodes for each function with \(n = 1\), \(n=2\), and \(n=3\). Identify the relationship between the energy and the number of nodes. Identify the relationship between the number of radial nodes and the number of angular nodes.
Radial probability densities for the 1s and 2s atomic orbitals are plotted in Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\): Radial densities (\(R (r) ^* R(r)\)) for the 1s and 2s orbitals.
Radial Distribution Functions
Rather than considering the amount of electronic charge in one particular small element of space, we may determine the total amount of charge lying within a thin spherical shell of space. Since the distribution is independent of direction, consider adding up all the charge density which lies within a volume of space bounded by an inner sphere of radius \(r\) and an outer concentric sphere with a radius only infinitesimally greater, say \(r + \Delta r\). The area of the inner sphere is \(4\pi r^2\) and the thickness of the shell is \(\Delta r\). Thus the volume of the shell is \(4\pi r^2 \Delta r\) and the product of this volume and the charge density \(P_1(r)\), which is the charge or number of electrons per unit volume, is therefore the total amount of electronic charge lying between the spheres of radius \(r\) and \(r + \Delta r\). The product \(4\pi r^2 P_n\) is given a special name, the radial distribution function.
Volume Element for a Shell in Spherical Coordinates
The reader may wonder why the volume of the shell is not taken as:
\[ \dfrac{4}{3} \pi \left[ (r + \Delta r)^3 -r^3 \right]\]
the difference in volume between two concentric spheres. When this expression for the volume is expanded, we obtain
\[\dfrac{4}{3} \pi \left(3r^2 \Delta r + 3r \Delta r^2 + \Delta r^3\right)\]
and for very small values of \(\Delta r\) the \(3r \Delta r^2\) and \(\Delta r^3\) terms are negligible in comparison with \(3r^2\Delta r\). Thus for small values of \(\Delta r\), the two expressions for the volume of the shell approach one another in value and when \(\Delta r\) represents an infinitesimal small increment in \(r\) they are identical.
The volume element of a box in spherical coordinates. Image used with permission (CC BY; OpenStax).
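A quick numerical check of this limiting argument may be helpful; the radius and the values of \(\Delta r\) below are arbitrary illustrative choices.

```python
from math import pi

r = 1.0  # shell radius, arbitrary illustrative value
for dr in (1e-1, 1e-3, 1e-5):
    exact = 4 / 3 * pi * ((r + dr)**3 - r**3)   # exact shell volume
    approx = 4 * pi * r**2 * dr                 # thin-shell approximation
    print(f"dr={dr:g}: exact/approx = {exact / approx:.6f}")
# the ratio tends to 1 as dr -> 0, as claimed above
```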
The radial distribution function is plotted in Figure \(\PageIndex{5}\) for the ground state of the hydrogen atom.
Figure \(\PageIndex{5}\): The radial distribution function for an H atom. The value of this function at some value of r when multiplied by \(\delta r\) gives the number of electronic charges within the thin shell of space lying between spheres of radius \(r\) and \(r + \delta r\).
The curve passes through zero at \(r = 0\) since the surface area of a sphere of zero radius is zero. As the radius of the sphere is increased, the volume of space defined by \(4 \pi r^2 \Delta r\) increases. However, as shown in Figure \(\PageIndex{4}\), the absolute value of the electron density at a given point decreases with \(r\) and the resulting curve must pass through a maximum. This maximum occurs at \(r_{max} = a_0\). Thus more of the electronic charge is present at a distance \(a_0\) out from the nucleus than at any other value of \(r\). Since the curve is unsymmetrical, the average value of \(r\), denoted by \(\bar{r}\), is not equal to \(r_{max}\). The average value of \(r\) is indicated on the figure by a dashed line. A "picture" of the electron density distribution for the electron in the \(n = 1\) level of the hydrogen atom would be a spherical ball of charge, dense around the nucleus and becoming increasingly diffuse as the value of \(r\) is increased.
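These statements are easy to cross-check numerically. A minimal sketch in atomic units (\(a_0 = 1\)); the use of scipy's quad routine here is an illustrative choice:

```python
import numpy as np
from scipy.integrate import quad

a0 = 1.0  # Bohr radius in atomic units

def P(r):
    """Radial distribution function 4*pi*r^2 |psi_1s|^2 (atomic units)."""
    return 4 * np.pi * r**2 * np.exp(-2 * r / a0) / (np.pi * a0**3)

# location of the maximum of the radial distribution function
r = np.linspace(0, 10, 100001)
print(r[np.argmax(P(r))])               # 1.0, i.e. r_max = a0

# the average value of r (the dashed line in the figure)
rbar, _ = quad(lambda r: r * P(r), 0, np.inf)
print(rbar)                             # 1.5, i.e. rbar = 3*a0/2 > r_max
```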
The radial distribution function gives the probability density for an electron to be found anywhere on the surface of a sphere located a distance \(r\) from the proton. Since the area of a spherical surface is \(4 \pi r^2\), the radial distribution function is given by \(4 \pi r^2 R(r) ^* R(r)\).
Radial distribution functions are shown in Figure \(\PageIndex{6}\). At small values of \(r\), the radial distribution function is low because the small surface area for small radii modulates the high value of the radial probability density function near the nucleus. As we increase \(r\), the surface area associated with a given value of \(r\) increases, and the \(r^2\) term causes the radial distribution function to increase even though the radial probability density is beginning to decrease. At large values of \(r\), the exponential decay of the radial function outweighs the increase caused by the \(r^2\) term and the radial distribution function decreases.
Figure \(\PageIndex{6}\): The radial distribution function (\(4 \pi r^2 R (r) ^* R(r)\)) for the 1s and 2s orbitals. Compare to the radial functions in Figure \(\PageIndex{3}\) or the radial densities in Figure \(\PageIndex{4}\). For an interactive graph click here.
Example \(\PageIndex{1}\):
Calculate the probability that a 1s hydrogen electron is found within a distance \(2a_0\) of the nucleus.
Recall the wavefunction of the hydrogen 1s orbital, which is
\[ψ_{100}= \dfrac{1}{\sqrt{π}} \left(\dfrac{1}{a_0}\right)^{3/2} e^{-\rho} \nonumber\]
with \(\rho=\dfrac{r}{a_0} \).
The probability of finding the electron within \(2a_0\) distance from the nucleus will be:
\[prob= \underbrace{\int_{0}^{\pi} \sin \theta \, d\theta}_{over\, \theta} \, \overbrace{ \dfrac{1}{\pi a_0^3} \int_{0}^{2a_0} r^2 e^{-2r/a_0} dr}^{over\, r} \, \underbrace{ \int_{0}^{2\pi} d\phi }_{over\, \phi } \nonumber\]
Since \(\int_0^{\pi} \sin \theta d\theta=2\) and \( \int_0^{2\pi} d\phi=2\pi\), we have
\[ \begin{align*} prob &= 2 \times 2\pi \times \dfrac{1}{\pi a_0^3} \int_0^{2a_0} r^2 e^{-2r/a_0}\, dr \\[4pt] &= \dfrac{4}{a_0^3}\left[ -\dfrac{a_0}{2}\, r^2 e^{-2r/a_0} \Big|_0^{2a_0} + a_0 \int_0^{2a_0} r\, e^{-2r/a_0}\, dr\right] \\[4pt] &= -8e^{-4} + \dfrac{4}{a_0^2}\left[ -\dfrac{a_0}{2}\, r\, e^{-2r/a_0} \Big|_0^{2a_0} + \dfrac{a_0}{2} \int_0^{2a_0} e^{-2r/a_0}\, dr\right] \\[4pt] &= -8e^{-4} - 4e^{-4} - e^{-2r/a_0} \Big|_0^{2a_0} \\[4pt] &= -12 e^{-4} - (e^{-4} - 1) = 1 - 13e^{-4} = 0.762 \end{align*}\]
There is a 76.2% probability that the electron will be within \(2a_0\) of the nucleus in the 1s eigenstate.
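The closed-form result is easy to verify numerically; a minimal sketch in atomic units (scipy assumed available):

```python
import numpy as np
from scipy.integrate import quad

a0 = 1.0  # atomic units
prob, _ = quad(lambda r: 4 / a0**3 * r**2 * np.exp(-2 * r / a0), 0, 2 * a0)
print(prob)                   # 0.7618967...
print(1 - 13 * np.exp(-4))    # same closed-form value
```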
This completes the description of the most stable state of the hydrogen atom, the state for which \(n = 1\). Before proceeding with a discussion of the excited states of the hydrogen atom we must introduce a new term. When the energy of the electron is increased to another of the allowed values, corresponding to a new value for \(n\), \(\psi_n\) and \(P_n\) change as well. The wavefunctions \(\psi_n\) for the hydrogen atom are given a special name, atomic orbitals, because they play such an important role in all of our future discussions of the electronic structure of atoms. In general the word orbital is the name given to a wavefunction which determines the motion of a single electron. If the one-electron wave function is for an atomic system, it is called an atomic orbital.
Do not confuse the word orbital with the classical word and notion of an orbit. First, an orbit implies the knowledge of a definite trajectory or path for a particle through space which in itself is not possible for an electron. Secondly, an orbital, like the wave function, has no physical reality but is a mathematical function which when squared gives the physically measurable electron density distribution. |
9ab98a66538d6b90 |
The Secular Man’s Suicide Pact
I recently visited a place with a much larger Moslem population than my home town. That set me to thinking about things.
Leftist progress on diversity looks to be moving right along. I’m guessing they have a schedule for undoing the dispersion of Babel. We’ll see how that works out.
Trembling as I am to utter the following blasphemy against one of the Secular Man’s highest articles of faith, it appears to me that segregation is pretty much a natural thing for most people. If we don’t segregate by race, we do it by sex, religion, income, the type of work you do, your IQ, or even your status as a manager or worker bee. I’ve noted that the Ivy League humanities majors don’t pal around with the NASCAR set a whole lot, not that either side of this travesty of segregation is complaining about it, because each side is equally convinced that the other side is peopled with bigots and idiots. In fact, NASCAR people won’t even hang around NHRA. Oh, well.
So people prefer to be around folks who are like themselves. But somehow, acknowledging this obvious fact makes me a blasphemer in the eyes of the Secular Man. Like Winston said in 1984, “theyll shoot me i don’t care theyll shoot me in the back of the neck i dont care”.
There are exceptions to the general trend toward segregation, sometimes benign, sometimes not. There are cases where we let students come over here and send ours over there. No harm there. If you want a kid from France in your home for the school year, more power to you, sez I.
And there are Christian societies who send doctors, farmers, engineers and whatnot. They have an ulterior motive, of course, but it’s an open secret. They’ll treat lepers and show folks how to have safe drinking water in exchange for a chance to explain about Jesus and the cross. There is definitely no harm in that. And — speaking only of my own premillennial views — Christianity decidedly does not teach us to take over the world by force. If there is to be any force, Jesus will impose it in person when He arrives. Post-mill folks, I’ll let you speak for yourselves on this.
Other cases are not quite so benign. Caesar desegregated the Italians and the Gauls, but not to help the latter. Likewise the Assyrians in Israel, the Babylonians in Judah, and so on.
In general, I’d say anybody who thinks it’s the destiny of his group to take over the world by force is a threat. In modern times, the Communists and Fascists fit this category and were proud of it. There was a time when Americans understood this and reacted against it. The old adage about being better dead than red was a way of acknowledging the open threat posed by Communism while saying that we intended to push back with whatever force it took.
And to get back where I started from, Moslems fit this category. Islam intends to take over the world, by persuasion where it can, by force where it must.
In the vast majority of cases, your Moslem co-worker is no threat to you or anybody else. He’s just a guy trying to get by, raise his kids to be good Moslems, and keep his wife from feeling like a conspicuous fool wearing her burqa.
The problem arises when there are enough Moslems to form a society that runs along Islamic lines. Because Islam expects to own Earth and everyone on it. The Secular Man, wearing his feelings on his “coexist” bumper sticker, is just not prepared to deal with the reality of an Islam that will not rest until everyone bows to Mecca. The Secular Man’s refusal to see a threat where there clearly is one looks a lot like a suicide pact.
Will the real oligarchy please stand up
The Washington Times published an article quoting an Ivy League study saying we live in an oligarchy rather than a republic. Elites and special interests buy influence and get their way, the article says, describing a government of plutocrats more than oligarchs. The Catholic News Service quotes the same study to the same effect, complaining against big corporations doing a lot of evil influence buying. The gist of it is that the rich get their way, harming the rest of us.
But the biggest source of influence over the way the government governs is the government itself.
And here’s how it works. Something like 148 million Americans are receiving some form of stipend from the government. Government is essentially buying their votes with money confiscated from the 90 million private sector workers who pay for it all. So maybe it’s true that we have an oligarchy or plutocracy settling in upon us, but the “evil” corporations are by no means the dominant players in this field. The money laundering from the government dwarfs all other forms of paid influence, whether it’s from the Koch brothers on the right or Warren Buffet and George Soros on the left.
It’s a victory for Big Irony that many of the people complaining the loudest about the influence of Big Corporations are actually part of Big Government and seem completely blind to their role in the most pervasive and ruinous corruption scheme of all.
So why are you still a Christian?
Given the general drift of western civilization, there are some on our side who are feeling like Christianity is, well, in retreat. If you could wind your time turner back 40 years, nobody living at that time would have said that America would be in the process of honoring sodomy with its own special rite of marriage. Nobody would have predicted rates of divorce and illegitimacy where they are today. Jokes about mass confusion on Fathers’ Day used to be directed at other people, not us. Now we are the joke.
Struggling families can’t find a reason to stick it out. People give up and give over to their pet sins. People born, raised, and married in the church suddenly go secular, not out of any sense of offense or hostility, but because they just don’t care any more.
So why are you still plodding along with a crowd that seems to be losing so badly? Maybe it’s because you’re part of the gray-haired set that still does all that Churchianity stuff (including Wednesday night). Maybe it’s because America isn’t the only place in the world, and in some other places like China, Christianity is growing like mad. Maybe you stick around because you’re one of those fortunate folks still plugged into a dynamite church, and you really enjoy it.
Here’s a reason for you to ponder: You should keep the faith because Jesus is alive. The only real reason to get into Christianity in the first place, if I could say it like that, is because it is true. And the central truth of Christianity is that He rose from the dead. If He rose from the dead, there is every reason in the world to continue faithful regardless of what the rest of society does.
If Jesus didn’t rise from the dead, there never was a reason to be a Christian. In the early days of Christianity, Paul said that if Christ has not risen, then “we of all men are most miserable.” Why suffer for a dead god? What sense does that make? We don’t suffer any serious persecution in America, so let’s apply the thought more accurately to us: why deny yourself the pleasures of a hedonistic life to honor the memory of a dead guy?
But if Jesus is alive, that changes everything. That would mean that He has power over death. It would mean He really is the Son of God. It would mean His church is destined to become the central focus of history. It would mean that following Jesus matters, not just to your kids or your personal sense of stick-to-itiveness, or the moral tidiness of your little corner of the world, but it matters on the biggest and most cosmic scope imaginable.
And if Jesus is alive, it would mean that death is not the end of life that we all thought it was, but just a pause before we transition into something far greater He has prepared for us. It would mean that our ultimate conclusions about life stand upon a living hope. And hope is the thing that makes us keep on keeping on.
So I’m still a Christian because the tomb was emptied when Christ came out of it alive.
Ukraine and Obama’s complicated failure
Of course you’ve heard by now that Mr. Romney and Mrs. Palin both warned that Mr. Obama’s policy toward Russia would lead to the ongoing crisis in Ukraine. America should have been doing everything possible to help Ukraine establish a sturdy, free economy, a justice system free of graft and corruption, and a credible military. If we were attempting any of these things, it never made any news I could see.
But now Obama’s failures are starting to earn compound interest. Obama, who years ago helped bring about the decline of America’s manned space program — not that he did this by himself; he had plenty of help — now has another problem on his hands in the matter of Ukraine.
He can’t afford to anger the Russians too much because we depend on them to get our astronauts and supplies to/from the space station. We’re in one of those moments when you realize that great nations have to remain strong in every area. America’s space program has been the envy of the world for 60 years, that is, until the last shuttle flight. Now we have no way to get people and cargo to the space station.
So Mr. Obama will be low key in his reactions to the Russian dealings in Ukraine. It would be too embarrassing to have the Russians tell America to kiss off next time we need one of our astronauts to bum a ride in a Soyuz spacecraft.
And I feel I need to add something here. I am not ashamed of America, but I am ashamed of our self-imposed weakness and immorality after years of secular, Socialist-leaning misrule. The looming problems confronting our astronauts are the kinds of weird, unique gotchas that crop up when foreign policy is dominated by the wishful, utopian thinking of liberal academics instead of a hard-headed determination to deal with facts as they are. Russia is a powerful nation that sees itself as our rival. Mr. Putin is a smart, tough ex-KGB agent and a fierce nationalist. He plays to win and won’t hesitate to spill blood to achieve his goals. Only blind folly could have failed to see that and act accordingly. Putting our space program into a state of dependency on the Russian program is beyond naïve.
So remember that next time a liberal politician tells you America is disliked around the world, embarks on a worldwide apologize-for-America tour, and offers a former KGB agent one of those ludicrous red reset buttons.
Why I think there’s a God
Louise Antony gave her reasons for thinking there’s no God, and I dealt with those here. But what are the reasons for thinking there is one? In most Christian literature, these boil down to five.
Why There’s Something Instead of Nothing
The physical universe tells us it had a beginning. The sun isn’t merely shining; it is burning up. The world isn’t merely turning; it is spinning down to a stop. Natural processes everywhere are in a state of decay and decline. The available energy in the universe is a consumable resource. There’s an end point when the mainspring of the whole cosmos will stop ticking. Therefore, it had to have been wound up at some point in the past. So the physical universe isn’t eternal.
So something else must have been here before there was a universe. Whatever that was, it must have been eternal and must have had the capacity to bring the universe into being.
Why There’s Order Instead of Chaos
When you drive through the South and see a 1000-acre tract of pine trees planted in rows, equally spaced along the row and all the same age, you don’t have to ask if somebody did that. When I look at the far more complex arrangements of DNA, it’s obvious that a mighty intelligence made this. DNA contains the coding needed to duplicate itself. But a process capable of creating a DNA molecule from scratch simply does not exist in nature. Nothing even remotely approaching this degree of sophistication has ever been observed, not in nature, nor even in man’s most advanced laboratories.
So something eternal and powerful was there before the universe existed. And it had the capacity to bring the universe into being, wind up the spring, and then release the energy through myriads of the most intricately designed mechanisms. Such a being is intelligent beyond all the reckoning of man.
Why Things are Right and Wrong
People have a moral component to their nature. Ms. Antony shows this when she asks that we all work for peace. Nice thought, though I wish she’d explain why, on atheistic principles, peace is better than war. After all, isn’t evolution driven by conflict and winnowing away the unfit so that only the strongest and smartest survive to breed again? Here’s a case where evolutionists are better than their principles. They generally wish the world were better — and “better” is defined in moral terms.
Furthermore, there is, for lack of a better term, a genuine reality underlying morals. We aren’t merely displeased when brutes kidnap little girls and sell them into sexual slavery. No, this is really and truly evil, and wrong. And it’s not just that we feel happy about a man who would redeem little slaves out of their bondage. No, such a deed is really and truly good and right.
The fact that morality cannot be derived from nature is not an argument from gaps in our knowledge. Rather, it’s plain to see that there is no arrangement of particles and forces that can ever account for a moral right and wrong because morality involves not just an assessment of facts, but an assertion of authority. Morality is the claim, coming from outside your own head, that you ought or ought not do something. And “ought” inherently arrives in the form of a command. Morality sees what is wrong and authoritatively forbids it. Morality sees what is right and authoritatively commands it.
The origin of morality, then, is very much like the origin of the physical universe. It’s here; it’s real, and it defies natural, material explanation. It demands a source that is outside of this world, transcendent, and that was capable of implanting it in the human heart when man was first formed.
So — just building the argument — something eternal brought the universe into being, something that was powerful enough to do it, intelligent enough to design it, and this Being possessed a moral code which it then hard wired into the hearts of men.
Why We Sense the Transcendent
It’s an interesting question as to why, on naturalist/materialist principles, people should have ever evolved to be capable of wondering about what could be outside this physical dimension. Where’s the survival value in such a massive and stressful distraction? Or to take the question a level deeper, how do matter and energy interact in such a way as to produce conscious beings who ponder things higher than matter and energy?
Ms. Antony herself experiences the draw of the transcendent but drops it too soon. The real question is what a sense of transcendence is leading you to. Being a Christian, it’s obviously my opinion that God created this in us to lead us to Him. Paul told the Athenians that we “feel after Him” (Acts 17:27), clearly expecting that even pagan men would have been open-minded enough to investigate an intuition shared by virtually all people.
We Christians find our sense of transcendence filled, satisfied, yet heightened and completed by knowing our God through His Son, Jesus. People from other religions testify of their version of the same sense of transcendence. It’s not my purpose to address those experiences, only to say that whether we’re making out shapes in a fog or seeing in the full light of day, something is there, and we all sense it to some degree. And although the argument is not dispositive, I can’t frame a better explanation for a sense of transcendence than to propose that God has indeed set “eternity in our hearts” (Eccl 3:11) as a way to both prompt us to seek Him and as a way to experience Him once He is found.
The life of Jesus Christ
The chief way God chose to reveal Himself to man was through Jesus. The officers sent to arrest Him said, “Nobody ever spoke like this man.” We exhaust all the superlatives when we consider Him. His teachings set the standard for goodness even among those who reject Him. He led such a life that those who sought His ruin could accuse Him only by lying. Without money, without armies, without political connections, without allies, without any access to the levers of power, having died young, Jesus did more to change the world for good than all who ever came before or after.
And He rose from the dead. Yes, His followers reported many other miracles He did, turning water to wine, walking on water, feeding multitudes out of a sack lunch.
But the miracle of His resurrection was the story they were all, to a man, willing to be tortured and die for the privilege of telling, not because they had anything to gain by it, but because they undeniably believed it to be true. If there is a God such as I have described, and if God became a man, I would expect Him to be a man like Jesus.
So that’s it. It’s why I think God exists and has revealed Himself to us through His Son, Jesus.
Answers for an atheist
The New York Times published an interview with atheist Louise Antony who confidently affirms that there is no God. Read the linked article if you like, but her arguments against God boil down to just a handful of things.
First, Antony says, “I deny that there are beings or phenomena outside the scope of natural law.” This, of course, is no argument at all. It’s just assuming the conclusion. Presupposing materialism merely evades the debate about whether God exists. The Christian idea of God is that He is transcendent, meaning that He is “above” or “beyond” or “outside” the universe. Looking for God by material methods is like prospecting for diamonds with a metal detector. Wrong tool.
In her second argument, Antony says religious people can’t all agree on what God is, what He is like, or whether there are more gods than one. This is all true, and all irrelevant. For the sake of argument, let’s assume that all religious people are hopelessly muddled on the nature of God. Does this mean they’re all equally deceived on the existence of God? Not at all. Even in a total fog, people can know something is out there without knowing any details about it.
Antony then says she cannot reconcile the existence of evil with the existence of God. Beg pardon, but what is this “evil” she speaks of? The existence of categories like “good” and “evil” assumes a Supreme Authority who establishes what’s good and what’s not. And consider again Antony’s statement, “I deny that there are beings or phenomena outside the scope of natural law.” Yet the very categories of good and evil are outside of natural law. You cannot derive morality from Newton’s laws or the Schrödinger equation. That requires a transcendent source.
On the other hand, if good and evil are not real categories, if they’re just cultural norms or her own private intuitions, then her objection vanishes. Her argument amounts to, “I’m displeased (or we are); therefore, there is no god,” which is absurd.
But Ms. Antony is left to ponder the motions of her own heart. Why is she outraged by rape or brutality? Who cares, and why should anyone care, if orphans starve, tyrants strut, armed gangs pillage and plunder, girls are bought and sold, and all the rest of human misery is played out before our eyes? If Ms. Antony knows anything at all, she knows there’s Something Big moving out there in the fog.
And following that, Ms. Antony should be the first to accept religious experiences. After all, she’s had a big one. She’s felt the wrong of this fallen, sinful world and felt the need to put it all back right. That didn’t evolve from a big cloud of hydrogen gas. God has set eternity in our hearts, and that’s what it sounds like when people pay attention to it, even a little bit.
Kim Jong O
So now the FCC wants to install government minders in newsrooms across the country to make sure “underserved minorities” get the news they need. I guess we’ll show Kim Jong Un how it’s done. Even Mr. Obama’s lickspittle media has an eyebrow aloft. But don’t worry, lefties — if you like your freedom of the press, you can keep it!
Another foretaste of things to come
It’s no secret that Christian values are being slowly but inexorably dispossessed in America. Wedding cake bakers who refuse service to homosexual couples get sued over it, and lose. They’re told that once you open your business up to serve the public, then you have to serve whatever comes through the door.
But now a bar owner in California says he’ll refuse service to state legislators who vote for anti-gay legislation. Actually, he went a bit farther and said he’d deny them entry to his bar.
I’m thinking his valiant pro-gay stand isn’t likely to cost him a lot of money. How many Christians are clamoring to enter a gay bar in California?
Still, the principle being established here should tell every Christian that it’s past time to gird up the old loins. Christian bakers are fair game for discrimination suits if they transgress against the Secular Man’s homodoxy on the grounds that public businesses have to accept whatever the public accepts.
To borrow from Spurgeon, I’ll adventure to prophesy that anti-Christian bar owners will be immune from suits on the same grounds. Yet — lest we all forget — Californians voted against homosexual marriage, even going so far as to forbid it in their state constitution. So it’s clear that the actual public in California accepts anti-gay legislators just fine. But you can be certain that the bar owner, should he get sued for discrimination, will get a pass.
Christians should be waking up to the fact that we’re in a fight. And to paraphrase Mordecai to Esther, don’t think this won’t ever touch you.
When politics go bad
King Elah of Israel, son of Baasha, was a drunkard. His servant Zimri murdered him while he was drunk. Short moral of story: A drunken king can’t be trusted to know who the enemy is.
Zimri took over and reigned for about a week. An army commander named Omri heard the king was dead and came after Zimri. Zimri neither fought nor fled, but went into his own house and burned it down upon himself. Moral: It’s easier to take over than it is to actually keep order, and once order is lost, you don’t have a lot of options.
Omri was a wicked king and plunged Israel deeper into ruinous idolatry. Moral: A guy who just wants to be in charge is about the last man you want in power.
Omri’s son Ahab eventually became king. The Bible describes Ahab as worse than all who came before him. He married Jezebel who was even worse than he was. Moral: Getting rid of drunks, killers, and tyrants doesn’t mean things are about to get better. The son might make you wish for his daddy back. And beware the tyrant’s wife.
During Ahab’s reign the prophet Elijah called for a drought that lasted for years. Moral: When the right leadership arrives, the fight isn’t over; it’s just starting, and you may dislike his methods.
At Mount Carmel, God spoke by fire from heaven. Israel, convinced, repented. They acknowledged that the Lord is God, not Baal, and executed the idolatrous priests. Then the rain came. Moral: Fixing a country starts with fixing hearts.
Baghdad Bob and ObamaCare
Kathleen Sebelius, the Baghdad Bob of ObamaCare, says job losses due to the idiotic tax scheme are a “popular myth.”
Bad timing for her announcement, though, coming right after the administration has been cheesecake grinning and doing happy hands over what a great American blessing it is to escape “job lock” by getting fired.
We should all savor this rare moment of unanimity in the political world with both Republicans and Democrats saying that ObamaCare is a failure. The GOP says it’s a failure because (among other things) it makes people lose their jobs. The Democrats say it’s a failure because ObamaCare job losses are only mythical, leaving millions of hapless citizens still locked in a job.
Is this a great country or what?
Secular Man’s smoking habits
Has anyone else noticed how smoking tobacco has been getting less legal while smoking marijuana has been getting more legal?
And isn’t it just the funniest thing that so many plain old potheads are claiming it’s for medicinal purposes?
Connecticut — nekkid and hoping you won’t notice
One of the great insights of the American Revolution is that a government’s authority derives from the consent of the governed.
The State of Connecticut passed a law saying everyone in the state must register so-called “assault” weapons and high-capacity ammo magazines. Comes now the report that tens of thousands of citizens in Connecticut — perhaps millions — have declined to obey. Registration schemes are plainly the first move in a game of confiscation. Many who intend not to surrender their arms are declining to register them.
This may turn out to be a very, very big deal. Failure to register a weapon in Connecticut is a class D felony. A class D felony is punishable by up to five years in prison. Despite that, gun owners in Connecticut collectively jutted their jaws and said, “Hell, no.”
How big is the problem? Connecticut estimates there are about 370,000 so-called “assault” weapons in Connecticut. Fewer than 50,000 have been registered. They estimate there are 2.4 million high-capacity ammo magazines in the state. About 38,000 have been registered. Theoretically, Connecticut now has well over two million new felons.
You can be sure Connecticut pols see the problem just like I do. If a huge swath of the population responds with sullen defiance, the government no longer has the consent of the governed. How is it a legitimate government any more? And how do you recover that once it’s lost?
I see three options. 1) Connecticut can openly and humbly restore its legitimacy by repealing the law. 2) Officials can reduce enforcement to some low level that ruins a few people’s lives while leaving most violators untouched yet still under state threat. 3) The state can hire more SWAT teams, build way more prisons and start the crackdown.
Option 2 is most likely because the gun law was designed not to solve a problem but to make liberals feel good about themselves. Neither practicing humility nor engorging the prisons would serve that purpose, although criminalizing a bunch of rightwingers would. And if a few of them get busted, well, that’s the price one pays.
One problem: Reducing enforcement to a level that prevents serious conflict is claiming victory while hoisting a white flag. It’s like one of those dreams where you show up at work buck nekkid and nobody notices.
As Drudge says, “Developing…”
From creation clearly seen
The recent creation/evolution debate between Ken Ham and Bill Nye was pretty good. It was not excellent. The rules of the debate didn’t require the contestants to engage one another to any great extent, so the back-and-forth that challenges reasoning didn’t happen.
One of the things Mr. Ham said that begged for discussion was his remark that just doing science presupposes God and creation. Christians schooled in apologetics promptly said rah-rah, but the argument was left as a mere assertion. Nye declined to ask for an explanation, and Ham never offered one.
Why should there be any such thing as natural law? Why should nature be orderly and predictable? Why should gravitation behave according to a rule so precise that you can measure its effects and write a mathematical equation that tells you exactly what’s going to happen? A Christian would argue from the creation account that God intended His universe to function in an orderly way. Creatures bring forth “after their kind,” it says ten times. The motions of the earth, sun, moon, and stars provide day, night, signs, and seasons. There is order in this, and Paul tells us that the invisible things of God are clearly seen, being understood by what God made. (Rom 1:20)
But the deeper question for Mr. Nye would have been this: What is it about your thought process that leads you to look for orderliness in the first place, and why does your mind naturally recognize it and latch onto it?
Based on Nye’s frequent and brave admissions about what he doesn’t know, I can only surmise that he’d admit again that he has no idea why the Big Bang resulted in law and order rather than sheer chaos, and he’d likely admit that he has no idea why his mind should be structured to look for order. Or he might just say it evolved this way, which is the same thing.
But the Christian can say that if we take the Word of God as our starting point, the first thing we learn is that God is, that He made the universe, and that He did it in an orderly manner. Further, God immediately set about revealing Himself to man with that revelation being set in a framework of reason and logic. The imago dei means our heads are hard-wired to look for order, to recognize it at once, and to latch onto it when it’s found.
For science to exist at all, all these Christian teachings about creation and human nature have to be assumed as prerequisites. They must be presupposed.
The questions for Mr. Nye and everyone who investigates science from a naturalistic viewpoint are these: How does the Big Bang account for the fact that the resulting cosmos functions according to fixed laws? And second, how did the mind of man come to look for such things? Christianity has an answer for these questions. Naturalism can’t do any better than offer a shrug and say that’s just the way things are — which is the opposite of true science.
Conservatives who can’t connect the dots
A few months ago while the electioneering was in full-throated roar, a “conservative” writer lamented that liberal voters seem unable to connect the dots. The writer quoted a low-info voter who expressed unconcern about a property tax hike because, said the voter, “I rent an apartment, so property taxes don’t affect me.” How do you connect intelligently with people this thick?
And then today, I was listening to talk radio “conservative” Mark Larsen explaining to a caller that he’d have no problem with the Boy Scouts changing their stance on homosexuality to go with the PeeCee flow and start accepting it. The caller wondered why the institution must change to accommodate the individual rather than the other way around, noting that the Boy Scouts have always required young men to be morally straight.
“What is morality?” wondered the blind Mr. Larsen aloud. After all, Christian denominations have differed over this or that detail. And whatever would we say to the Metropolitan churches who are openly homosexual? (Tacit premise in the question: Until you get everything perfect, you’re not allowed to say they’re wrong.)
This is a conservative, low-info talk show host who cannot connect the dots. Well, actually, Larsen says he’s libertarian, but he’s still dense on this topic and unable to connect dots, and here’s why.
Morality of any and every sort is an assertion of authority. The moment you say “ought” or “ought not,” somebody else demands, “Says who?” Morality requires an anchor. The Author, the Anchor, is God. And even though the church admittedly has quibbles a-plenty, we’re all together in relaying to you His judgment that sodomy isn’t okay.
Mr. Larsen, apparently unwilling to consider a reliable message from a capable though fallible messenger, has no anchor. How else can you even ask such a question as, “What is morality?”
And once you pull up the anchor, everything tied to it will drift away. The current debate over homosexuality didn’t spring upon America like a bolt from the sky. It started way back when Americans grew discontented with the God who insists we should keep our word. Not long thence, easy-breezy divorce became socially acceptable. A few years later, pornography began to proliferate. And then came the sexual revolution with its promiscuity, the shack-ups, the meteoric rise in illegitimacy, the loss of shame as the entertainer class breeds without commitment.
First thing you know, many major cities had whole sections of their towns devoted to sodomy, and before you can adjust to that, they’ve got us voting on whether homosexuals have a right to marry one another.
And at that point, people like Mr. Larsen cannot render a reason as to what could possibly be bad about that.
Prediction: Sometime soon our society will be debating polygamy, pedophilia, bestiality, and necrophilia, and those who (for whatever reason) disapprove of such things but who have no anchor will find themselves as tongue-tied as the hapless Mr. Larsen was. Who’s to say what’s wrong, after all?
Without God as the anchor for morals, you will have no morals. He made the world where it can’t be any other way. And yes, He did that on purpose. Morals, like the rights stated in the Declaration of Independence, are derived. And just as God created us equal and endowed us with rights, so He also created us with the social, civic, and religious obligations we refer to as morality.
When you pull up the anchor, you don’t just lose your morals. You’ll start losing your rights, too. Same anchor, same God. Say good-bye to life, liberty, and the pursuit of happiness. Godless men cannot comprehend, let alone respect, the Bill of Rights. They have no clue where such things came from, no idea of what makes them special, and no sense of a higher Authority to whom all earthly authorities must give account. You can no more have rights without morality than you can have a stream without water. Both flow from the same spring, the Eternal God.
God deliver us from leaders who do not know their Maker, or even that they are made.
Lance and Oprah
The embarrassing spectacle of Lance Armstrong confessing to Oprah has failed to capture the popular imagination. For one thing, Lance is not a sympathetic character. Americans are not prone to soaring eloquence, so people call him a jerk. British writer Geoffrey Wheatcroft said of him: “Mr. Armstrong has ‘a voice like ice cubes,’ as one French journalist puts it, and I have to admit that he reminds me of what Daniel O’Connell said about Sir Robert Peel: He has a smile like moonlight playing on a gravestone.”
Another thing is that Lance’s confession came too late. And it was lame. And it was tacky. But it fits the pattern now so familiar in no-fault America in which a famous person commits a sin, gets caught, lies about it till the lie becomes ridiculous, then finally stages a theatrical confession. The staging is usually in proportion to the fame and ego of the perpetrator. Thus, Lance. Scroll through the mental list of publicly groveling miscreants from Lance back through Anthony Weiner, Bill Clinton, South Carolina governor Mark Sanford, gay/doping preacher Ted Haggard, and a host of others.
The spiritual man can see what this is all about. Adam remains banished from Eden. The occasional rite of public humiliation is just a couple of the exiles passing by the gate and wishing for a way back in. But the gate is shut. The cherub with the flaming sword still bars the road to paradise.
A final thing about Lance’s confession is that we can all see it does no good. The public, momentarily curious, watches the ritual confessions and is vaguely aware of the hopelessness of it all. To confess seems required. A wrong was done. To admit it is demanded. We all feel the pressure of the demand. Some of us help exert it. At the same time, it’s inadequate. It’s watering a dead tree, and all the same to the tree whether it’s water or tears.
The Secular Man, two-dimensional being that he is, confesses to himself and to his peers. Who else is there? To the carnal mind, what is paradise but the pleasure he felt before his sin was found out? A degrading confession seems to be how you shake up the Etch-A-Sketch and redraw the picture.
The confession has to feature humiliation and suffering. Part of the suffering involves the rest of us smirking at the poor dumb schmuck locked in the pillory. But even when we humiliate ourselves as Lance did, the sin remains. And even if you suffer to the point of death, you’re just dead and guilty. Whether you’re confessing to Oprah or CNN, it’s still just praying to a god that cannot save. (Isa 45:20)
The riddle is solved at the cross. It is Christ’s humiliation, not ours, and His suffering and death, that brings remission of sins. It is our confession to Him, not to Oprah nor to a public filled with critics and voyeurs, that brings peace. |
a3005ccf27d39f76 | Centre for Theoretical and Mathematical Physics
QCD theory at high energies and nonzero temperatures
Quantum chromodynamics (QCD) is a core ingredient of the Standard Model of Particle Physics, with its relatively short list of elementary constituents of matter and force carriers, all wrapped in a short, elegant mathematical framework.
QCD, also referred to as the theory of the strong force, describes the force that binds quarks into their known bound states, the objects that constitute all directly observable matter, predominantly protons and neutrons. Its force carrier, the gluon, is the preeminent channel for particle production in modern high energy collider experiments such as the LHC at CERN.
QCD is a complicated theory with many emergent features, faces that surface in ways that depend strongly on the experimental circumstances: unlike gravity or electromagnetism, the force becomes weaker at short distances (or if probed at high resolutions), a phenomenon known as asymptotic freedom. This is naturally complemented by another feature not shared by the other forces in the standard model: confinement of quarks (and gluons). One can never observe an isolated quark (or gluon), since the force between any two (or more) of them does not fall off with distance. Instead, the energy required to pull a quark away from a bound state keeps increasing with the distance to the remnants and will eventually reach the threshold for the production of a new quark-antiquark pair, resulting in a new pair of bound states, not an isolated quark and a “remnant”. The only way to see a macroscopic state of quasi-free quarks and gluons is a situation in which, in a large enough volume, the temperature and density are high enough to sustain free movement over volumes larger than the proton size, i.e. by driving the system beyond a phase transition into a phase known as the Quark Gluon Plasma (QGP). If we go far enough back in the evolution of the universe, to about a microsecond after the big bang, this is, in fact, the state of matter permeating the whole universe.
Figure: Left: History and size of the universe from the big bang to today. Several phase transitions occur before matter and light decouple and the universe becomes observable by astronomical means after the cosmic microwave background is created. Right: Phase transitions in the early universe can be triggered by high temperatures and densities. Modern collider experiments explore these conditions for the first time since the big bang. (The universe followed a trajectory from top to bottom, very near the temperature axis, through the region probed by the LHC.) (Sources: left: Particle Data Group; right: BNL, STAR collaboration)
Modern collider experiments like the LHC at CERN or RHIC at BNL are able to recreate this state of matter (each with slightly different parameters, see Fig. 4) and tell us about a state of matter dominating an era of cosmic evolution hidden behind the cosmic microwave background (the earliest cosmological feature observable by astronomical means). To recreate this state experimentally, RHIC and the LHC create temperatures in excess of 100,000 times the temperature at the center of the sun: these truly are the hottest spots in our universe today, no matter how short lived and how tiny. The nature of these experiments is by necessity complicated: both the initial state and the final state must consist of ordinary matter, bound states of quarks and gluons known as hadrons. Only a short intermediate time is governed by a quark gluon plasma in a highly dynamic, expanding environment that is at best in local thermal equilibrium.
Figure: Time evolution in a heavy ion experiment, from left to right: 1) two heavy ions collide head-on, strongly influenced by relativistic effects (Lorentz contraction); 2) a high energy density is created in an irregular volume; 3) a QGP is formed for a very short time, cools and expands; 4) hadrons (bound states of quarks) are formed that further interact; 5) the final state particles are separated far enough that further interaction ceases (freeze out); and 6) they reach the detector. (Graphics by Paul Sorensen.)
Moreover, to deposit enough energy in a large enough volume, the initial states are not the relatively simple protons, but instead the largest manageable ions at our disposal, typically gold or lead. The high energies involved lead to an immensely crowded final state, which poses enormous experimental challenges.
Theory faces a long list of complementary challenges:
• The initial states are highly modified by the high energies involved in the collisions: this is a feature of QCD as a quantum field theory – the properties probed in an experiment depend strongly on the experimental conditions, a general phenomenon for which particle-wave duality is but one example. The tools to describe these initial states are provided by the Jalilian-Marian Iancu McLerran Weigert Leonidov Kovner equation (JIMWLK for short) and play a vital role in the subsequent stages of the collision.
• It is known that the system reaches (local) thermal equilibrium and can be described by hydrodynamics. The time scales for this equilibration are so short that simple single-particle interactions cannot explain this observation. One of the main experimental observations thus remains without a theoretical explanation. There are different attempts to think about this, some aiming to employ resummation techniques for QCD at weak coupling, others utilizing strong coupling techniques based on string theory, both based on JIMWLK initial states. The QCD approach relies on a proper treatment of collective effects and instabilities, while the strong coupling approach invokes the string-motivated AdS/CFT correspondence to step beyond a naive treatment of equilibration, encouraged by experimental results that place the QGP viscosity near the lower bound present in AdS/CFT models.
• The thermal properties of heavy ion collisions and the precise value of the phase transition temperature (the critical temperature of QCD) remain an important issue, and numerical tools to analyze experimental results remain at a premium; THERMUS, a computer code developed at UCT, is in heavy use for this purpose.
• As the system expands, cools down and hadronizes, it crosses the critical temperature once more, now in a somewhat more gentle manner. Here a good understanding of the QCD phase transition becomes essential. Ab initio QCD simulations (not done at UCT) provide a good quantitative understanding, but limited conceptual insight. A set of techniques known as QCD finite temperature sum rules can provide insight here and can be linked with effective field theory techniques to offer a further angle of attack.
To quote from Wikipedia: “Cosmology is the science of the origin and development of the universe. Modern cosmology is dominated by the Big Bang theory, which brings together observational astronomy and particle physics”. In this panorama, particle physics is most directly concerned with the steps before the creation of the microwave background, steps that set the stage for all further development. Direct observation is limited to the time after that, and thus biases attention toward those later times, with questions like large scale structure formation, gravitational properties of galaxies and signals travelling long distances through space-time. These seemingly purely astronomical questions raise important issues that again challenge particle and string theorists with the existence of dark matter and dark energy:
• Dark matter: Evidence comes from velocity profiles of galaxies, gravitational lensing observations and traces in the microwave background.
• Dark energy: Observations of remote supernovae prove that the expansion of the universe is accelerating, which can only be explained by an effect that counteracts gravitational attraction – dark energy.
Both phenomena have no satisfactory explanation in either particle physics or string theory at this point, although strong efforts are underway, with dark matter candidates often looked for in supersymmetric extensions of the standard model of particle physics, and dark energy candidates like chameleon particles being actively searched for in particle physics experiments. Cosmology itself is progressing into a phase of precision cosmology, in which relativistic corrections to lensing effects start to become important and will influence the precision details of what is known as the standard model of cosmology. New experiments like the SKA will harvest unprecedented amounts of new data that will challenge our view of the universe once more.
Mathematical Physics: Nonlinearity with PT-symmetry
Our current activities focus on nonlinear systems with parity-time (PT) symmetry. We study properties of finite-dimensional PT-symmetric models (canonical formulations, integrability, self-trapping, PT-symmetry breaking and blow-up phenomena). We are also interested in localized structures (solitons, breathers and vortices) supported by infinite-dimensional PT-symmetric systems. This mathematical research is driven by the current wave of enthusiasm in photonics, where unidirectional light propagation in PT-symmetric materials raises hopes for the design of invisible cloaks and similar unprecedented optical behavior.
Nonlinear Dynamics: Solitons and Patterns
One line of our research concerns localized patterns on the surface of a ferrofluid. Another area includes two-dimensional solitons in photonic crystals. We are traditionally active in the studies of the directly and parametrically driven damped nonlinear Schrödinger equation; our work here is centred on travelling and oscillating solitons and their complexes. Currently, the most active applications of these studies are to ultrashort pulse synthesis in optical microresonators. In the frequency domain the single-soliton states correspond to low-noise optical frequency combs with smooth spectral envelopes, critical for broadband spectroscopy, telecommunications, astronomy and low-noise microwave generation.
String theory
String theory is the leading program to combine gravity and quantum mechanics into one consistent and unified framework. Over the past two decades we have discovered enormous riches, from both the physics and mathematics perspectives, within the framework of string theory. As members of the CTMP within the Laboratory for Quantum Gravity and Strings (QGaSLab) we are interested in understanding the structure of string theory, discovering more about the properties of its basic, and not so basic, ingredients, and using it as a tool to understand the world around us. The holographic principle, a direct consequence of the mathematical properties of string theory, allows us to study otherwise intractable problems which are extremely important in a number of experiments within a particle physics context as well as table-top experiments within a condensed matter setting. Mysteries regarding the nature of high temperature superconductors, how quarks are confined within protons and neutrons, the nature of the quark-gluon plasma, and much more besides have been studied by international teams over the past two decades. Another aspect of our work focuses on investigating the properties of space and time using string-theoretic and information-theoretic mathematical methods. The topic of emergent space-time is increasingly seen as one of the most promising frontiers of string theory, and we are learning much about how space and time can be encoded in theories where the space we see around us is not a fundamental ingredient from the beginning but emerges from the dynamics of the theory. We have collaborations with well over 50 internationally renowned researchers in dozens of institutes around the world and we continue to attract world-class visitors. |
d52990e5549f6199 | Using classical density functional theory to unravel the complex fluid structure of guest species adsorbed in nanoporous materials
19MODEV08 / Model and software development
Supervisor(s): L. Vanduyfhuys, V. Van Speybroeck / Mentor(s): L. Vanduyfhuys
Nanoporous materials have attracted much attention in the past decades due to their impressive potential for applications in the field of natural gas storage, detection and separation of gases, or even as effective drug delivery systems. An example of such nanoporous materials is given by the so-called metal-organic frameworks (MOFs), which are hybrid materials consisting of inorganic bricks connected to each other through organic linkers. Due to their porous nature and favorable interactions with guest species, they have been proven to exhibit extraordinary adsorption properties. Several modeling techniques exist to investigate these adsorption properties. On the one hand, one could consider direct molecular simulation by performing Monte Carlo simulations in the grand canonical ensemble (GCMC). While these simulations are very accurate, they tend to be computationally expensive, which limits their applicability. On the other hand, one can express the Helmholtz free energy F in the canonical ensemble by means of a parameterized expression in the mean-field approximation.
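The parameterized expression itself was an image in the original and did not survive extraction. A plausible reconstruction, assuming (as the next sentence suggests) a van der Waals fluid confined to the pore volume and shifted by a mean adsorption energy per particle, is

$$ F(N, V_p, T) \approx F_{\mathrm{vdW}}(N, V_p, T) + N\,\Delta U. $$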
Herein, Fvdw represents the van der Waals equation of state and ΔU the mean adsorption energy. The density is approximated to be homogeneous within the pore volume Vp, and the total number of particles adsorbed N can be estimated by means of a Legendre transformation to the grand canonical ensemble:
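The equation for N was likewise lost in extraction; the standard grand-canonical relations it presumably expressed are

$$ \Omega(\mu, V_p, T) = \min_{N}\left[ F(N, V_p, T) - \mu N \right], \qquad N(\mu, V_p, T) = -\frac{\partial \Omega}{\partial \mu}. $$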
Such models have proven capable of describing the adsorption isotherm of various complex metal-organic frameworks at temperatures above the critical temperature. Unfortunately, these models are not applicable below the critical temperature, due to the approximation of a homogeneous density. Furthermore, they require input such as the van der Waals parameters a and b, as well as the pore volume Vp and mean-field adsorption energy ΔU, which are not straightforward to estimate from molecular simulation. It is in this context that classical density functional theory provides a powerful alternative.
Density functional theory (DFT) was originally developed as a quantum mechanical approach to determine the electronic structure of many-body molecular systems at 0 K. In contrast to wave-function-based methods, which try to solve the Schrödinger equation directly, DFT describes the system in terms of the electronic density. A crucial quantity in this framework is the energy functional E[ρ], which represents the ground state energy of the system at 0 K for a given trial electronic density ρ with a fixed total of N electrons. By finding the density that minimizes this energy, one retrieves the true ground state density. Inspired by this modeling technique, physicists developed its classical analogue, classical density functional theory (cDFT), for the calculation of the density profile ρ(r) of classical particles subject to an external potential v(r) and an interaction potential w(r). It differs from its quantum mechanical big brother in three aspects. First, the particles do not obey the quantum statistics of fermions, i.e. the Fermi-Dirac distribution, but rather classical Maxwell-Boltzmann statistics. Second, we are interested in computing the density at finite temperature T > 0 K. Third, the density is not constrained to contain a total number of N particles; instead the chemical potential μ is controlled. As a result, the density profile is obtained by minimizing the grand potential Ω, i.e. the thermodynamic potential in the μVT ensemble, which is expressed as a functional of a trial density ρ and minimized to find the true equilibrium density:
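The functional itself did not survive extraction; in standard cDFT notation the reconstruction reads

$$ \Omega[\rho] = F[\rho] + \int \mathrm{d}\mathbf{r}\, \rho(\mathbf{r}) \left( v(\mathbf{r}) - \mu \right), \qquad \left.\frac{\delta \Omega[\rho]}{\delta \rho(\mathbf{r})}\right|_{\rho = \rho_{\mathrm{eq}}} = 0, $$

where $F[\rho]$ is the intrinsic Helmholtz free energy functional (an ideal-gas part plus an excess part encoding the interaction potential $w$).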
In this thesis, the student will familiarize him/herself with classical density functional theory and implement an algorithm for the solution of the cDFT equations in a Python package (a minimal sketch of such a solver is given after the list below). In the first stage, the implementation will be tested on two applications: (1) a gas/liquid without an external potential at super/subcritical temperature and (2) a gas at supercritical temperature in the presence of a gravitational field. While the first test system will serve as a model for the phase transition in the bulk fluid as well as for the bulk pair correlation distribution, the second will serve as a simplified model for the atmosphere of Earth. In the second stage, the implementation will be applied to describe the adsorption of guest species, such as methane and carbon dioxide, in metal-organic frameworks such as MOF-5. As such, we aim to investigate three subjects:
1] Compare adsorption isotherms of cDFT with the mean-field model.
2] By gradually changing the temperature and investigating the evolution of the density profile for the species adsorbed inside the pores, we will investigate the impact of subcritical temperatures on the adsorption behavior and investigate the temperature dependence of the mean adsorption energy.
3] By systematically increasing the chemical potential μ, which also increases the total number of particles N adsorbed, we will investigate the correlation between the mean adsorption energy and the number of adsorbed particles in order to evaluate the mean-field assumption.
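To make the planned workflow concrete, here is a minimal sketch of a 1D cDFT solver of the kind described above, using an ideal-gas free energy plus a mean-field attraction and a damped Picard iteration. The harmonic trap, Gaussian pair potential, grid and parameter values are illustrative assumptions, not the thesis setup or an existing group package.

```python
import numpy as np

# Minimal 1D cDFT solver: ideal-gas free energy plus a mean-field pair
# attraction in an external potential, solved by damped Picard iteration.
# All potentials, the grid, and parameter values are illustrative.

beta = 1.0                         # inverse temperature 1/kT (reduced units)
mu = -2.0                          # imposed chemical potential
x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]

v_ext = 0.5 * x**2                 # toy external potential (harmonic trap)

# Attractive Gaussian pair potential w(x - x'), tabulated as a kernel matrix
# so that the convolution (w * rho)(x) becomes a matrix-vector product.
K = -np.exp(-(x[:, None] - x[None, :])**2) * dx

rho = np.full_like(x, 0.01)        # initial guess for the density profile
alpha = 0.1                        # Picard mixing (damping) parameter

for it in range(5000):
    v_mf = K @ rho                 # mean-field part of the one-body potential
    # Euler-Lagrange condition (delta Omega / delta rho = 0, Lambda = 1):
    # rho(x) = exp(beta * (mu - v_ext(x) - (w * rho)(x)))
    rho_new = np.exp(beta * (mu - v_ext - v_mf))
    if np.max(np.abs(rho_new - rho)) < 1e-10:
        break
    rho = (1.0 - alpha) * rho + alpha * rho_new

N = np.sum(rho) * dx               # total number of particles at this mu
print(f"converged in {it} iterations, N = {N:.4f}")
```

Sweeping μ and recording N then traces out an adsorption isotherm point by point, which is the kind of comparison proposed in subject 1] above.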
Master of Science in Engineering Physics:
This topic fits well within the following clusters of elective courses: Nano and Modeling. Physics aspects: development and application of classical Density Functional Theory, investigation of adsorbed fluid structure inside nanoporous materials. Engineering aspects: calculation of adsorption isotherms, investigation of subcritical phenomena in nanoporous materials |
5f393a325c878c10 | I'm reading section 2.2.1 of the book Solitons, Instantons and Twistors by Maciej Dunajski. The section is on the subject of direct scattering.
It is claimed that, considering Schrodinger's equation with the class of potentials $u(x)$ such that $|u(x)|\rightarrow 0$ as $x\rightarrow \pm\infty$, the integral condition
$$\int_{\mathbb{R}} \left(1+|x|\right) \left|u(x)\right| dx<\infty$$
guarantees that there exists only a finite number of discrete energy levels.
I'm failing to understand in what way this condition guarantees this. Help is really appreciated.
It follows from Bargmann's limit. The Bargmann bound is well known for a three-dimensional central potential [1, Thm. XIII.9] [2,3]: $$ N(\ell) < \frac{1}{2\ell + 1} \int_0^\infty r V^-(r) dr $$ Here, $\ell$ is the azimuthal quantum number, $N(\ell)$ the number of bound eigenstates with this quantum number and $V^-(r) = \max\{0,-V(r)\}$.
With a trick, we can apply this to a one-dimensional potential [2, Section III] [1, Problem XIII.22]. The trick is that the 1D Schrödinger equation $$ (-\partial_x^2 + u) \psi(x) = E\psi(x) $$ is equal to the $\ell=0$ radial Schrödinger equation $$ (-\partial_r^2 + V(r)) \psi_{rad}(r) = E\psi_{rad}(r) $$ except for the $\psi_{rad}(0) = 0$ boundary condition.
Also, we need to know that the number of negative energy bound states equals the number of nodes of the zero-energy wave function, and the same holds in the 3D case for the number of nodes of the zero-energy radial wave function. Long story short, we need to count the nodes of the solution to $$ (-\partial_x^2 + u) \psi(x) = 0 . $$ And we can do so by splitting the problem into two parts that look like counting the negative energy bound states of 3D radial problems. This is explained in detail in [2, Section III]; the result is that $$ \boxed{N < 1 + \int_{-\infty}^\infty |x| u^-(x) dx } $$ in the 1D case.
Finally, apply this to your case. Note that $0 \leq |x| \leq |x|+1$ and $0 \leq u^-(x) \leq |u(x)|$. Hence, $$ N < 1 + \int_{-\infty}^\infty (1 + |x|)\, |u(x)|\; dx < \infty , $$ so there is only a finite number of bound eigenstates.
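As a quick numerical sanity check of this chain of reasoning, the sketch below integrates the zero-energy equation for a Gaussian well (an arbitrary illustrative potential, not one from the book), counts the nodes, and compares with the boxed bound:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Count the nodes of the zero-energy solution of psi'' = u(x) psi and
# compare with the 1D bound N < 1 + int |x| u^-(x) dx.  The Gaussian
# well below is an arbitrary illustrative potential.

u0 = 10.0
u = lambda x: -u0 * np.exp(-x**2)            # attractive, so u^- = |u|

def rhs(x, y):
    psi, dpsi = y
    return [dpsi, u(x) * psi]

# Start far to the left, where u ~ 0 and the zero-energy solution is flat.
sol = solve_ivp(rhs, [-15.0, 15.0], [1.0, 0.0], max_step=0.01)
psi = sol.y[0]
nodes = int(np.sum(psi[:-1] * psi[1:] < 0))  # sign changes = nodes

xs = np.linspace(-15.0, 15.0, 20001)
bound = 1.0 + trapezoid(np.abs(xs) * np.abs(u(xs)), xs)

print(f"bound states (node count): {nodes}")
print(f"Bargmann-type bound      : {bound:.1f}")  # holds, but is not tight
```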
[1] M. Reed, B. Simon: Methods of Modern Mathematical Physics 4: Analysis of Operators.
[2] K. Chadan, N.N. Khuri, A. Martin, T.T. Wu: Bound States in one and two Spatial Dimensions (arXiv:math-ph/0208011)
[3] https://en.wikipedia.org/wiki/Bargmann%27s_limit
$\begingroup$ Let me comment on your additional conditions that I didn't use: $u \in L^1$ and $u \to 0$ for $x \to \pm\infty$. I assume that those guarantee applicability of Bargmann's limit. But to be honest, I'm not sure about the precise conditions, and neither [1] nor [2] state them for the 1D case. I think, $u \in L^1$ could imply $V(\vec x)$ is in $R + L^\infty$ as required for the 3D theorem. $u \to 0$ is probably needed to have negative discrete and positive continuous spectrum in the first place. Maybe someone else knows more about that... $\endgroup$ – Noiralef May 9 '17 at 18:27
|
b2a344809ce5937f | New submissions for Thu, 2 Apr 20
[1] arXiv:2004.00028 [pdf, other]
Title: Proper 3-colorings of $\mathbb{Z}^2$ are Bernoulli
Comments: 22 pages, 2 figures
Subjects: Probability (math.PR); Dynamical Systems (math.DS)
We consider the unique measure of maximal entropy for proper 3-colorings of $\mathbb{Z}^2$, or equivalently, the so-called zero-slope Gibbs measure. Our main result is that this measure is Bernoulli, or equivalently, that it can be expressed as the image of a translation-equivariant function of independent and identically distributed random variables placed on $\mathbb{Z}^2$. Along the way, we obtain various estimates on the mixing properties of this measure.
[2] arXiv:2004.00035 [pdf, ps, other]
Title: Dense induced subgraphs of dense bipartite graphs
Authors: Rose McCarty
Comments: 10 pages
Subjects: Combinatorics (math.CO)
We prove that every bipartite graph of sufficiently large average degree has either a $K_{t,t}$-subgraph or an induced subgraph of average degree at least $t$ and girth at least $6$. We conjecture that "$6$" can be replaced by "$k$", which strengthens a conjecture of Thomassen. In support of this conjecture, we show that it holds for regular graphs.
[3] arXiv:2004.00041 [pdf, other]
Title: Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model
Subjects: Statistics Theory (math.ST); Optimization and Control (math.OC); Probability (math.PR); Computation (stat.CO); Methodology (stat.ME)
We study the non-convex optimization landscape for maximum likelihood estimation in the discrete orbit recovery model with Gaussian noise. This model is motivated by applications in molecular microscopy and image processing, where each measurement of an unknown object is subject to an independent random rotation from a rotational group. Equivalently, it is a Gaussian mixture model where the mixture centers belong to a group orbit.
We show that fundamental properties of the likelihood landscape depend on the signal-to-noise ratio and the group structure. At low noise, this landscape is "benign" for any discrete group, possessing no spurious local optima and only strict saddle points. At high noise, this landscape may develop spurious local optima, depending on the specific group. We discuss several positive and negative examples, and provide a general condition that ensures a globally benign landscape. For cyclic permutations of coordinates on $\mathbb{R}^d$ (multi-reference alignment), there may be spurious local optima when $d \geq 6$, and we establish a correspondence between these local optima and those of a surrogate function of the phase variables in the Fourier domain.
We show that the Fisher information matrix transitions from resembling that of a single Gaussian in low noise to having a graded eigenvalue structure in high noise, which is determined by the graded algebra of invariant polynomials under the group action. In a local neighborhood of the true object, the likelihood landscape is strongly convex in a reparametrized system of variables given by a transcendence basis of this polynomial algebra. We discuss implications for optimization algorithms, including slow convergence of expectation-maximization, and possible advantages of momentum-based acceleration and variable reparametrization for first- and second-order descent methods.
[4] arXiv:2004.00042 [pdf, ps, other]
Title: The Kähler-Ricci flow, holomorphic vector fields and Fano bundles
Authors: Xi Sisi Shen
Comments: 17 pages
Subjects: Differential Geometry (math.DG)
We study the behavior of the K\"ahler-Ricci flow on compact manifolds developing finite-time singularities, in particular, when the flow contracts exceptional divisors or collapses the Fano fibers of a holomorphic fiber bundle. We present a technique using holomorphic vector fields to prove estimates related to the work of Song-Weinkove and Fu-Zhang.
[5] arXiv:2004.00045 [pdf, ps, other]
Title: Kazhdan-Lusztig polynomials and subexpressions
Comments: 10 pages
Subjects: Combinatorics (math.CO); Representation Theory (math.RT)
We refine an idea of Deodhar, whose goal is a counting formula for Kazhdan-Lusztig polynomials. This is a consequence of a simple observation that one can use the solution of Soergel's conjecture to make ambiguities involved in defining certain morphisms between Soergel bimodules in characteristic zero (double leaves) disappear.
[6] arXiv:2004.00052 [pdf, ps, other]
Title: The integral Chow ring of the stack of smooth non-hyperelliptic curves of genus three
Comments: 40 pages, comments are welcome!
Subjects: Algebraic Geometry (math.AG)
We compute the integral Chow ring of the stack of smooth, non-hyperelliptic curves of genus three. We obtain this result by computing the integral Chow ring of the stack of smooth plane quartics, by means of equivariant intersection theory.
[7] arXiv:2004.00054 [pdf, ps, other]
Title: On eternal mean curvature flows of tori in perturbations of the unit sphere
Subjects: Differential Geometry (math.DG)
We construct eternal mean curvature flows of tori in perturbations of the standard unit sphere $\Bbb{S}^3$. This has applications to the study of the Morse homologies of area functionals over the space of embedded tori in $\Bbb{S}^3$.
[8] arXiv:2004.00059 [pdf, ps, other]
Title: The global extended-rational Arnoldi method for matrix function approximation
Subjects: Numerical Analysis (math.NA)
The numerical computation of matrix functions such as $f(A)V$, where $A$ is an $n\times n$ large and sparse square matrix, $V$ is an $n \times p$ block with $p\ll n$ and $f$ is a nonlinear matrix function, arises in various applications such as network analysis ($f(t)=\exp(t)$ or $f(t)=t^3$), machine learning ($f(t)=\log(t)$), the theory of quantum chromodynamics ($f(t)=t^{1/2}$), electronic structure computation, and others. In this work, we propose the use of the global extended-rational Arnoldi method for computing approximations of such expressions. The derived method projects the initial problem onto a global extended-rational Krylov subspace $\mathcal{RK}^{e}_m(A,V)=\text{span}\{\prod_{i=1}^m(A-s_iI_n)^{-1}V,\ldots,(A-s_1I_n)^{-1}V,\,V,\,AV,\ldots,A^{m-1}V\}$ of low dimension. An adaptive procedure for the selection of the shift parameters $\{s_1,\ldots,s_m\}$ is given. The proposed method is also applied to solve parameter-dependent systems. Numerical examples are presented to show the performance of the global extended-rational Arnoldi method for these problems.
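As a rough illustration of the subspace involved (not the paper's algorithm, which uses global Arnoldi recurrences and adaptive shift selection), one can build a dense orthonormal basis directly from the definition and then use the standard Krylov projection to approximate $f(A)V$; the matrix sizes and shifts below are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def extended_rational_basis(A, V, shifts):
    """Orthonormal basis of span{prod_i (A - s_i I)^{-1}V, ..., (A - s_1 I)^{-1}V,
    V, AV, ..., A^{m-1}V}, assembled naively with dense solves and one QR."""
    n = A.shape[0]
    I = np.eye(n)
    blocks, W = [], V.copy()
    for s in shifts:                       # rational part: nested shifted solves
        W = np.linalg.solve(A - s * I, W)
        blocks.append(W)
    W = V.copy()
    for _ in range(len(shifts)):           # polynomial part: V, AV, ..., A^{m-1}V
        blocks.append(W)
        W = A @ W
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q

# Projection approximation of f(A)V with f = exp, for a small random example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); V = rng.standard_normal((50, 2))
Q = extended_rational_basis(A, V, shifts=[1.0, 2.0, 3.0])
F = Q @ expm(Q.T @ A @ Q) @ (Q.T @ V)     # compare with expm(A) @ V
```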
[9] arXiv:2004.00063 [pdf, other]
Title: Analytic representation of the generalized Pascal snail and its applications
Authors: S. Kanas, V. S. Masih
Subjects: Complex Variables (math.CV)
We find a unifying approach to the analytic representation of the domain bounded by a generalized Pascal snail. Special cases such as the Pascal snail, the Booth lemniscate, the conchoid of de Sluze, and a disk are included. The behavior of functions related to the generalized Pascal snail is demonstrated.
[10] arXiv:2004.00075 [pdf, ps, other]
Title: Ascoli and sequentially Ascoli spaces
Authors: Saak Gabriyelyan
Subjects: General Topology (math.GN)
A Tychonoff space $X$ is called ({\em sequentially}) {\em Ascoli} if every compact subset (resp. convergent sequence) of $C_k(X)$ is evenly continuous, where $C_k(X)$ denotes the space of all real-valued continuous functions on $X$ endowed with the compact-open topology. Various properties of (sequentially) Ascoli spaces are studied, and we give several characterizations of sequentially Ascoli spaces. Strengthening a result of Arhangel'skii we show that a hereditary Ascoli space is Fr\'{e}chet--Urysohn. A locally compact abelian group $G$ with the Bohr topology is sequentially Ascoli iff $G$ is compact. If $X$ is totally countably compact or near sequentially compact then it is a sequentially Ascoli space. The product of a locally compact space and an Ascoli space is Ascoli. If additionally $X$ is a $\mu$-space, then $X$ is locally compact iff the product of $X$ with any Ascoli space is an Ascoli space. Extending one of the main results of [18] and [16] we show that $C_p(X)$ is sequentially Ascoli iff $X$ has the property $(\kappa)$. We give a necessary condition on $X$ for which the space $C_k(X)$ is sequentially Ascoli. For every metrizable abelian group $Y$, $Y$-Tychonoff space $X$, and nonzero countable ordinal $\alpha$, the space $B_\alpha(X,Y)$ of Baire-$\alpha$ functions from $X$ to $Y$ is $\kappa$-Fr\'{e}chet--Urysohn and hence Ascoli.
[11] arXiv:2004.00083 [pdf, other]
Title: A new envelope function for nonsmooth DC optimization
Subjects: Optimization and Control (math.OC)
Difference-of-convex (DC) optimization problems are shown to be equivalent to the minimization of a Lipschitz-differentiable "envelope". A gradient method on this surrogate function yields a novel (sub)gradient-free proximal algorithm which is inherently parallelizable and can handle fully nonsmooth formulations. Newton-type methods such as L-BFGS are directly applicable with a classical linesearch. Our analysis reveals a deep kinship between the novel DC envelope and the forward-backward envelope, the former being a smooth and convexity-preserving nonlinear reparametrization of the latter.
[12] arXiv:2004.00084 [pdf, ps, other]
Title: Minimum Quantum Degrees for Isotropic Grassmannians in Types B and C
Subjects: Algebraic Geometry (math.AG)
We give a formula in terms of Young diagrams to calculate the minimum positive integer $d$ such that $q^d$ appears in the quantum product of two Schubert classes for the Type A Grassmannian and for the isotropic submaximal Grassmannians in Types B and C. We do this by studying curve neighborhoods, which we compute in several combinatorial models, including $k$-strict partitions and a set of partitions whose inclusion order is compatible with the Bruhat order.
[13] arXiv:2004.00090 [pdf, ps, other]
Title: Automatic Conjecturing and Proving of Exact Values of Some Infinite Families of Infinite Continued Fractions
Comments: 14 pages
Subjects: Number Theory (math.NT)
Inspired by the recent pioneering work, dubbed "The Ramanujan Machine" by Raayoni et al. (arXiv:1907.00205), we (automatically) [rigorously] prove some of their conjectures regarding the exact values of some specific infinite continued fractions, and generalize them to evaluate infinite families (naturally generalizing theirs). Our work complements their beautiful approach, since we use symbolic rather than numeric computations, and we instruct the computer to not only discover such evaluations, but at the same time prove them rigorously.
[14] arXiv:2004.00093 [pdf, ps, other]
Title: On the nonlocal Cahn--Hilliard equation with nonlocal dynamic boundary condition and boundary penalization
The Cahn--Hilliard equation is one of the most common models to describe phase segregation processes in binary mixtures. In recent times, various dynamic boundary conditions have been introduced to model interactions of the materials with the boundary more precisely. To take long-range interactions of the materials into account, we propose a new model consisting of a nonlocal Cahn--Hilliard equation subject to a nonlocal dynamic boundary condition that is also of Cahn--Hilliard type and contains an additional boundary penalization term. We rigorously derive our model as the gradient flow of a nonlocal total free energy with respect to a suitable inner product of order $H^{-1}$ which contains both bulk and surface contributions. The total free energy is considered as nonlocal since it comprises convolutions in the bulk and on the surface of the phase-field variables with certain interaction kernels. The main difficulties arise from defining a suitable kernel on the surface and from handling the resulting boundary convolution. In the main model, the chemical potentials in the bulk and on the surface are coupled by a Robin type boundary condition depending on a specific relaxation parameter related to the rate of chemical reactions. We prove weak and strong well-posedness of this system, and we investigate the singular limits attained when the relaxation parameter tends to zero or infinity. By this approach, we also obtain weak and strong well-posedness of the corresponding limit systems.
[15] arXiv:2004.00097 [pdf, ps, other]
Title: Lifting isometries of orbit spaces
Comments: 7 pages
Subjects: Metric Geometry (math.MG); Differential Geometry (math.DG); Representation Theory (math.RT)
Given an orthogonal representation of a compact group, we show that any element of the connected component of the isometry group of the orbit space lifts to an equivariant isometry of the original Euclidean space.
[16] arXiv:2004.00099 [pdf, ps, other]
Title: Superposition and mimicking theorems for conditional McKean-Vlasov equations
Comments: 55 pages
We consider conditional McKean-Vlasov stochastic differential equations (SDEs), such as the ones arising in the large-system limit of mean field games and particle systems with mean field interactions when common noise is present. The conditional time-marginals of the solutions to these SDEs satisfy non-linear stochastic partial differential equations (SPDEs) of the second order, whereas the laws of the conditional time-marginals follow Fokker-Planck equations on the space of probability measures. We prove two superposition principles: The first establishes that any solution of the SPDE can be lifted to a solution of the conditional McKean-Vlasov SDE, and the second guarantees that any solution of the Fokker-Planck equation on the space of probability measures can be lifted to a solution of the SPDE. We use these results to obtain a mimicking theorem which shows that the conditional time-marginals of an Ito process can be emulated by those of a solution to a conditional McKean-Vlasov SDE with Markovian coefficients. This yields, in particular, a tool for converting open-loop controls into Markovian ones in the context of controlled McKean-Vlasov dynamics.
[17] arXiv:2004.00106 [pdf, ps, other]
Title: Orthonormal bases on $L^2(\mathbb{R}^+)$
Subjects: Functional Analysis (math.FA)
The explicit form of the eigenvectors of the self-adjoint extensions $H_\xi$, parametrized by $\xi \in [0,\pi)$, of the differential expression $H=-\frac{d^2 }{d x^2} + \frac{x^2 }{4}$ considered on the space $L^2(\mathbb{R}^+)$ is derived, together with the spectrum $\sigma(H_\xi)$. For each $\xi$, the set of eigenvectors forms an orthonormal basis of $L^2(\mathbb{R}^+)$.
[18] arXiv:2004.00109 [pdf, ps, other]
Title: A Howe correspondence for the algebra of the $\mathfrak{osp}(1|2)$ Clebsch-Gordan coefficients
Comments: 12 pages
Two descriptions of the dual $-1$ Hahn algebra are presented and shown to be related under Howe duality. The dual pair involved is formed by the Lie algebra $\mathfrak{o}(4)$ and the Lie superalgebra $\mathfrak{osp}(1|2)$.
[19] arXiv:2004.00112 [pdf, ps, other]
Title: K-theoretic Tutte polynomials of morphisms of matroids
Comments: 26 pages; comments welcome!
Subjects: Combinatorics (math.CO); Algebraic Geometry (math.AG)
We generalize the Tutte polynomial of a matroid to a morphism of matroids via the K-theory of flag varieties. We show that there are two different generalizations, and demonstrate that each has its own merits, where the trade-off is between the ease of combinatorics and geometry. One generalization recovers the Las Vergnas Tutte polynomial of a morphism of matroids, which admits a corank-nullity formula and a deletion-contraction recursion. The other generalization does not, but better reflects the geometry of flag varieties.
[20] arXiv:2004.00141 [pdf, ps, other]
Title: Almost-orthogonality principles for certain directional maximal functions
Authors: Jongchon Kim
Comments: 11 pages
Subjects: Classical Analysis and ODEs (math.CA)
We develop almost-orthogonality principles for maximal functions associated with averages over line segments and directional singular integrals. Using them, we obtain sharp $L^2$-bounds for these maximal functions when the underlying direction set is equidistributed in $\mathbb{S}^{n-1}$.
[21] arXiv:2004.00143 [pdf, ps, other]
Title: Controllability of degenerate parabolic equation with memory
Comments: 29 pages
In this paper, we analyze the null controllability property for a degenerate parabolic equation involving memory terms with a locally distributed control. We first derive a null controllability result for a nonhomogeneous degenerate heat equation via new Carleman estimates with weighted time functions that do not blow up at $t=0$. This result is then combined with a classical fixed point argument to obtain null controllability for the initial memory system.
[22] arXiv:2004.00145 [pdf, ps, other]
Title: Supersymmetric Cluster Expansions and Applications to Random Schrödinger Operators
Authors: Luca Fresta
Comments: 36 pages, 1 figure
We study discrete random Schr\"odinger operators via the supersymmetric formalism. We develop a cluster expansion that converges at both strong and weak disorder. We prove the exponential decay of the disorder-averaged Green's function and the smoothness of the local density of states, either at weak disorder for energies close to the unperturbed spectrum, or at strong disorder for any energy. As an application, we establish Lifshitz-tail-type estimates for the local density of states and thus localization at weak disorder.
[23] arXiv:2004.00148 [pdf, other]
Title: Petal Projections, Knot Colorings and Determinants
Comments: 25 pages, 19 figures, 9 tables
Subjects: Geometric Topology (math.GT)
An \"{u}bercrossing diagram is a knot diagram with only one crossing that may involve more than two strands of the knot. Such a diagram without any nested loops is called a petal projection. Every knot has a petal projection from which the knot can be recovered using a permutation that represents strand heights. Using this permutation, we give an algorithm that determines the $p$-colorability and the determinants of knots from their petal projections. In particular, we compute the determinants of all prime knots with crossing number less than $10$ from their petal permutations.
[24] arXiv:2004.00153 [pdf, other]
Title: In-situ adaptive reduction of nonlinear multiscale structural dynamics models
Comments: 22 pages, 7 figures
Subjects: Numerical Analysis (math.NA)
Conventional offline training of reduced-order bases in a predetermined region of a parameter space leads to parametric reduced-order models that are vulnerable to extrapolation. This vulnerability manifests itself whenever a queried parameter point lies in an unexplored region of the parameter space. This paper addresses this issue by presenting an in-situ, adaptive framework for nonlinear model reduction where computations are performed by default online, and shifted offline as needed. The framework is based on the concept of a database of local Reduced-Order Bases (ROBs), where locality is defined in the parameter space of interest. It achieves accuracy by updating on-the-fly a pre-computed ROB, and approximating the solution of a dynamical system along its trajectory using a sequence of most-appropriate local ROBs. It achieves efficiency by managing the dimension of a local ROB, and incorporating hyperreduction in the process. While the framework is sufficiently comprehensive to be broadly applicable, it is described here in the context of dynamic multiscale computations in solid mechanics. In this context, even in a nonparametric setting of the macroscale problem and when all offline, online, and adaptation overhead costs are accounted for, the proposed computational framework can accelerate a single three-dimensional, nonlinear, multiscale computation by an order of magnitude, without compromising accuracy.
[25] arXiv:2004.00155 [pdf, ps, other]
Title: On $\Gamma$-Convergence of a Variational Model for Lithium-Ion Batteries
Authors: Kerrek Stinson
Comments: 47 pages, 7 figures
Subjects: Analysis of PDEs (math.AP)
A singularly perturbed phase-field model for lithium-ion batteries, including chemical and elastic effects, is considered. The underlying energy is given by $$I_\epsilon [u,c ] := \int_\Omega \left( \frac{1}{\epsilon} f(c) + \epsilon\|\nabla c\|^2 + \frac{1}{\epsilon}\mathbb{C} (e(u)-ce_0) : (e(u)-ce_0)\right) dx, $$ where $f$ is a double well potential, $\mathbb{C}$ is a symmetric positive definite fourth order tensor, $c$ is the normalized lithium-ion density, and $u$ is the material displacement. The integrand contains elements close to those in energy functionals arising in both the theory of fluid-fluid and solid-solid phase transitions. For a strictly star-shaped, Lipschitz domain $\Omega \subset \mathbb{R}^2,$ it is proven that $\Gamma - \lim_{\epsilon\to 0} I_\epsilon = I_0,$ where $I_0$ is finite only for pairs $(u,c)$ such that $f(c) = 0$ and the symmetrized gradient $e(u) = ce_0$ almost everywhere. Furthermore, $I_0$ is characterized as the integral of an anisotropic interfacial energy density over sharp interfaces given by the jump set of $c.$
[26] arXiv:2004.00157 [pdf, ps, other]
Title: Improved quantitative unique continuation for complex-valued drift equations in the plane
Subjects: Analysis of PDEs (math.AP)
In this article, we investigate the quantitative unique continuation properties of complex-valued solutions to drift equations in the plane. We consider equations of the form $\Delta u + W \cdot \nabla u = 0$ in $\mathbb{R}^2$, where $W = W_1 + i W_2$ with each $W_j$ real-valued. Under the assumptions that $W_j \in L^{q_j}$ for some $q_1 \in [2, \infty]$, $q_2 \in (2, \infty]$, and $W_2$ exhibits rapid decay at infinity, we prove new global unique continuation estimates. This improvement is accomplished by reducing our equations to vector-valued Beltrami systems. Our results rely on a novel order of vanishing estimate combined with a finite iteration scheme.
[27] arXiv:2004.00162 [pdf, ps, other]
Title: Derivative martingale of the branching Brownian motion in dimension $d \geq 1$
Subjects: Probability (math.PR)
We consider a branching Brownian motion in $\mathbb{R}^d$. We prove that there exists a random subset $\Theta$ of $\mathbb{S}^{d-1}$ such that the limit of the derivative martingale exists simultaneously for all directions $\theta \in \Theta$ almost surely. This allows us to define a random measure on $\mathbb{S}^{d-1}$ whose density is given by the derivative martingale.
The proof is based on first moment arguments: we approximate the martingale of interest by a series of processes, which do not take into account the particles that travelled too far away. We show that these new processes are uniformly integrable martingales whose limits can be made to converge to the limit of the original martingale.
[28] arXiv:2004.00166 [pdf, other]
Title: Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem
Subjects: Optimization and Control (math.OC); Machine Learning (cs.LG); Systems and Control (eess.SY)
In order to anticipate rare and impactful events, we propose to quantify the worst-case risk under distributional ambiguity using a recent development in kernel methods -- the kernel mean embedding. Specifically, we formulate the generalized moment problem whose ambiguity set (i.e., the moment constraint) is described by constraints in the associated reproducing kernel Hilbert space in a nonparametric manner. We then present the tractable optimization formulation and its theoretical justification. As a concrete application, we numerically test the proposed method in characterizing the worst-case constraint violation probability in the context of a constrained stochastic control system.
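As background for the kernel-mean-embedding constraint (a sketch under our own choice of kernel and ambiguity set, not the paper's exact formulation), the squared maximum mean discrepancy below measures the RKHS distance between two empirical embeddings; a nonparametric ambiguity set can then be taken as an MMD ball around the empirical distribution of the data.

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """Gaussian RBF Gram matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared RKHS distance between the empirical kernel mean embeddings
    of the samples X and Y (the squared maximum mean discrepancy)."""
    return (rbf_gram(X, X, gamma).mean()
            + rbf_gram(Y, Y, gamma).mean()
            - 2 * rbf_gram(X, Y, gamma).mean())

# A candidate distribution (represented by a sample Y) lies in the ambiguity
# set of radius eps around the data X whenever mmd2(X, Y) <= eps**2.
```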
[29] arXiv:2004.00169 [pdf, ps, other]
Title: Classification of Frobenius, two-step solvable Lie poset algebras
Subjects: Rings and Algebras (math.RA)
We show that the isomorphism class of a two-step solvable Lie poset subalgebra of a semisimple Lie algebra is determined by its dimension. We further establish that all such algebras are absolutely rigid.
[30] arXiv:2004.00172 [pdf, ps, other]
Title: Graphs with few trivial characteristic ideals
Comments: 21 pages
Subjects: Combinatorics (math.CO)
We give a characterization of the graphs with at most three trivial characteristic ideals. This implies the complete characterization of the regular graphs whose critical groups have at most three invariant factors equal to 1, and the characterization of the graphs whose Smith groups have at most three invariant factors equal to 1. We also give an alternative and simpler way to obtain the latter characterization, and a list of minimal forbidden graphs for the family of graphs whose Smith group has at most four invariant factors equal to 1.
[31] arXiv:2004.00174 [pdf, ps, other]
Title: On a question of Slaman and Steel
Subjects: Logic (math.LO)
We consider an old question of Slaman and Steel: is Turing equivalence an increasing union of Borel equivalence relations none of which contains a uniformly computable infinite sequence? We show this question is deeply connected to problems surrounding Martin's conjecture and to the theory of countable Borel equivalence relations. In particular, if Slaman and Steel's question has a positive answer, then there is a universal countable Borel equivalence relation which is not uniformly universal, and there is a $(\equiv_T,\equiv_m)$-invariant function which is not uniformly invariant on any pointed perfect set.
[32] arXiv:2004.00177 [pdf, ps, other]
Title: Large-scale behavior of a particle system with mean-field interaction: Traveling wave solutions
Comments: 23 pages
Subjects: Probability (math.PR)
We use probabilistic methods to study properties of mean-field models, arising as large-scale limits of certain particle systems with mean-field interaction. The underlying particle system is such that $n$ particles move forward on the real line. Specifically, each particle "jumps forward" at some time points, with the instantaneous rate of jumps given by a decreasing function of the particle's location quantile within the overall distribution of particle locations. A mean-field model describes the evolution of the particles' distribution, when $n$ is large. It is essentially a solution to an integro-differential equation within a certain class. Our main results concern the existence and uniqueness of -- and attraction to -- mean-field models which are traveling waves, under general conditions on the jump-rate function and the jump-size distribution.
[33] arXiv:2004.00183 [pdf, ps, other]
Title: Littlewood Complexes for Symmetric Groups
Authors: Christopher Ryba
Comments: 9 pages
Subjects: Representation Theory (math.RT)
We construct a complex $\mathcal{L}_\bullet^\lambda$ resolving the irreducible representations $\mathcal{S}^{\lambda[n]}$ of the symmetric groups $S_n$ by representations restricted from $GL_n(k)$. This construction lifts to $\mathrm{Rep}(S_\infty)$, where it yields injective resolutions of simple objects. It categorifies stable Specht polynomials, and allows us to understand evaluations of these polynomials for all $n$.
[34] arXiv:2004.00187 [pdf, other]
Title: Internal split opfibrations and cofunctors
Authors: Bryce Clarke
Comments: 25 pages
Subjects: Category Theory (math.CT)
Split opfibrations are functors equipped with a suitable choice of opcartesian lifts. The purpose of this paper is to characterise internal split opfibrations through separating the structure of a suitable choice of lifts from the property of these lifts being opcartesian. The underlying structure of an internal split opfibration is captured by an internal functor equipped with an internal cofunctor, while the property may be expressed as a pullback condition, akin to the simple condition on an internal functor to be an internal discrete opfibration. Furthermore, this approach provides two additional characterisations of internal split opfibrations, via the d\'ecalage construction and strict factorisation systems. For small categories, this theory clarifies several aspects of delta lenses which arise in computer science.
[35] arXiv:2004.00189 [pdf, ps, other]
Title: Central elements in affine mod $p$ Hecke algebras via perverse $\mathbb{F}_p$ sheaves
Authors: Robert Cass
Comments: 25 pages
Let $G$ be a split connected reductive group over a finite field of characteristic $p > 2$ such that $G_\text{der}$ is simple. We give a geometric construction of perverse mod $p$ sheaves on the Iwahori affine flag variety of $G$ which are central with respect to the convolution product. We deduce an explicit formula for an isomorphism from the spherical mod $p$ Hecke algebra to the center of the Iwahori mod $p$ Hecke algebra. We also give a formula for the central integral Bernstein elements in the Iwahori mod $p$ Hecke algebra. To accomplish these goals we construct a nearby cycles functor for perverse $\mathbb{F}_p$ sheaves and we use Frobenius splitting techniques to prove some properties of this functor. We also prove that certain equal characteristic analogues of local models of Shimura varieties are strongly $F$-regular, and hence they are $F$-rational and have pseudo-rational singularities.
[36] arXiv:2004.00192 [pdf, ps, other]
Title: Instances of Computational Optimal Recovery: Dealing with Observation Errors
Subjects: Optimization and Control (math.OC)
When attempting to recover functions from observational data, one naturally seeks to do so in an optimal manner with respect to some modeling assumption. With a focus on the worst-case setting, this is the standard goal of Optimal Recovery. The distinctive twists here are the consideration of inaccurate data through some boundedness models and the emphasis on computational realizability. Several scenarios are unraveled through the efficient construction of optimal recovery maps: local optimality under linearly or semidefinitely describable models, global optimality for the estimation of linear functionals under approximability models, and global near-optimality under approximability models in the space of continuous functions.
[37] arXiv:2004.00193 [pdf, ps, other]
Title: Cells in affine q-Schur algebras
Comments: 31 pages
Subjects: Representation Theory (math.RT)
We develop algebraic and geometrical approaches toward canonical bases for affine q-Schur algebras of arbitrary type introduced in this paper. A duality between an affine q-Schur algebra and a corresponding affine Hecke algebra is established. We introduce an inner product on the affine q-Schur algebra, with respect to which the canonical basis is shown to be positive and almost orthonormal. We then formulate the cells and asymptotic forms for affine q-Schur algebras, and develop their basic properties analogous to the cells and asymptotic forms for affine Hecke algebras established by Lusztig. The results on cells and asymptotic algebras are also valid for q-Schur algebras of arbitrary finite type.
[38] arXiv:2004.00195 [pdf, ps, other]
Title: Instances of Computational Optimal Recovery: Refined Approximability Models
Authors: Simon Foucart
Subjects: Optimization and Control (math.OC); Functional Analysis (math.FA); Numerical Analysis (math.NA)
Models based on approximation capabilities have recently been studied in the context of Optimal Recovery. These models, however, are not compatible with overparametrization, since model- and data-consistent functions could then be unbounded. This drawback motivates the introduction of refined approximability models featuring an added boundedness condition. Thus, two new models are proposed in this article: one where the boundedness applies to the target functions (first type) and one where the boundedness applies to the approximants (second type). For both types of model, optimal maps for the recovery of linear functionals are first described on an abstract level before their efficient constructions are addressed. By exploiting techniques from semidefinite programming, these constructions are explicitly carried out on a common example involving polynomial subspaces of $\mathcal{C}[-1,1]$.
[39] arXiv:2004.00205 [pdf, ps, other]
Title: A relative spannedness for log canonical pairs and quasi-log canonical pairs
Authors: Osamu Fujino
Comments: 19 pages
Subjects: Algebraic Geometry (math.AG)
We establish a relative spannedness for log canonical pairs, which is a generalization of the basepoint-freeness for varieties with log-terminal singularities by Andreatta--Wi\'sniewski. Moreover, we establish a generalization for quasi-log canonical pairs.
[40] arXiv:2004.00209 [pdf, other]
Title: Inventory Loops (i.e. Counting Sequences) have Pre-period $2\max S_1+60$
Comments: 27 pages, 18 figures, code available
Subjects: Number Theory (math.NT); Combinatorics (math.CO); History and Overview (math.HO)
An Inventory Sequence $(S_0, S_1, S_2, \ldots)$ is the iteration of the map $f$ defined roughly by taking an integer to its numericized description (e.g. $f(1381)=211318$ since "$1381$" has two $1$'s, one $3$, and one $8$). Our work analyzes the iteration in the infinite base. Any starting value with positive digits is known to be ultimately periodic [1] (e.g. $S_0=1381$ reaches the 1-cycle $f(3122331418)=3122331418$). Parametrizations of all possible cycles are also known [2,3]. We answer Bronstein and Fraenkel's 26-year-old open question by showing that the pre-period of any such starting value is no more than $2M+60$, where $M=\max S_1$. Curiously, the period of the cycle can be determined after only $O(\log\log M)$ iterations.
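A minimal sketch of the inventory map on digit tuples (our own illustrative code; since the paper works in the infinite base, a "number" is represented here as a tuple of digits rather than a base-10 integer):

```python
from collections import Counter

def inventory(digits):
    """One application of the map f: list (count, digit) pairs for each digit
    present, in increasing digit order. E.g. (1, 3, 8, 1) -> (2, 1, 1, 3, 1, 8),
    i.e. f(1381) = 211318: two 1's, one 3, one 8."""
    counts = Counter(digits)
    out = []
    for d in sorted(counts):
        out += [counts[d], d]
    return tuple(out)

def preperiod_and_period(start):
    """Iterate f until a value repeats; return (pre-period, period). The
    paper bounds the pre-period by 2*max(S_1) + 60."""
    seen, value, i = {}, tuple(start), 0
    while value not in seen:
        seen[value] = i
        value, i = inventory(value), i + 1
    return seen[value], i - seen[value]

assert inventory((1, 3, 8, 1)) == (2, 1, 1, 3, 1, 8)    # matches the abstract
assert inventory((3, 1, 2, 2, 3, 3, 1, 4, 1, 8)) == (3, 1, 2, 2, 3, 3, 1, 4, 1, 8)
```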
[41] arXiv:2004.00228 [pdf, ps, other]
Title: Ultralocally Closed Clones
Comments: 24 pages, 2 figures
Subjects: Logic (math.LO)
Given a clone $C$ on a set $A$, we characterize the clone of operations on $A$ which are local term operations of every ultrapower of the algebra $(A; C)$.
[42] arXiv:2004.00233 [pdf, ps, other]
Title: An irreducible class of polynomials over integers
Comments: 12 pages
Subjects: Number Theory (math.NT)
In this article, we consider polynomials of the form $f(x)=a_0+a_{n_1}x^{n_1}+a_{n_2}x^{n_2}+\dots+a_{n_r}x^{n_r}\in \mathbb{Z}[x]$, where $|a_0|\ge |a_{n_1}|+\dots+|a_{n_r}|$, $|a_0|$ is a prime power, and $|a_0|\nmid |a_{n_1}a_{n_r}|$. We show that under the strict inequality these polynomials are irreducible for certain values of $n_1$. In the case of equality, apart from its cyclotomic factors, such a polynomial has exactly one irreducible non-reciprocal factor.
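The hypotheses are easy to test experimentally; the sketch below (our own, using sympy, with an arbitrarily chosen example polynomial) checks the coefficient inequality and then asks sympy whether the polynomial is irreducible over the integers.

```python
from sympy import Poly, symbols

x = symbols('x')

def satisfies_inequality(coeffs):
    """coeffs maps exponent -> coefficient, with 0 present; checks the shape
    condition |a_0| >= |a_{n_1}| + ... + |a_{n_r}| from the abstract (the
    prime-power and non-divisibility hypotheses are checked separately)."""
    return abs(coeffs[0]) >= sum(abs(c) for e, c in coeffs.items() if e != 0)

# Example with strict inequality: a_0 = 7 (a prime power), 7 > |2| + |3|,
# and 7 does not divide |2 * 3|.
coeffs = {0: 7, 3: 2, 5: 3}
f = Poly(7 + 2*x**3 + 3*x**5, x)
print(satisfies_inequality(coeffs), f.is_irreducible)
```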
[43] arXiv:2004.00236 [pdf, ps, other]
Title: Global strong solutions to the inhomogeneous incompressible Navier-Stokes system in the exterior of a cylinder
Subjects: Analysis of PDEs (math.AP)
In this paper, the global strong axisymmetric solutions for the inhomogeneous incompressible Navier-Stokes system are established in the exterior of a cylinder subject to the Dirichlet boundary conditions. Moreover, the vacuum is allowed in these solutions. One of the key ingredients of the analysis is to obtain the ${L^{2}(s,T;L^{\infty}(\Omega))}$ bound for the velocity field, where the axisymmetry of the solutions plays an important role.
[44] arXiv:2004.00237 [pdf, ps, other]
Title: The space $D$ in several variables: random variables and higher moments
Authors: Svante Janson
Comments: 34 pages
Subjects: Probability (math.PR); Functional Analysis (math.FA)
We study the Banach space $D([0,1]^m)$ of functions of several variables that are (in a certain sense) right-continuous with left limits, and extend several results previously known for the standard case $m=1$. We give, for example, a description of the dual space, and we show that a bounded multilinear form is always measurable with respect to the $\sigma$-field generated by the point evaluations. These results are used to study random functions in the space (i.e., random elements of the space). In particular, we give results on the existence of moments (in different senses) of such random functions, and we give an application to the Zolotarev distance between two such random functions.
[45] arXiv:2004.00246 [pdf, ps, other]
Title: On minimal model theory for algebraic log surfaces
Authors: Osamu Fujino
Comments: 10 pages, this is an expanded version of Section 13 in arXiv:1908.00170v1 [math.AG]
Subjects: Algebraic Geometry (math.AG)
We introduce the notion of generalized MR log canonical surfaces and establish the minimal model theory for generalized MR log canonical surfaces in full generality.
[46] arXiv:2004.00252 [pdf, ps, other]
Title: Higher representation stability for ordered configuration spaces and twisted commutative factorization algebras
Authors: Quoc P. Ho
Comments: Comments are welcome!
For a scheme or topological space $X$, we realize the rational cohomology groups of its generalized ordered configuration spaces as the factorization homology of $X$ with coefficients in certain truncated twisted commutative algebras. Via Koszul duality, these cohomology groups can be computed in terms of Lie algebra cohomology of certain twisted dg-Lie algebras which are explicit in many cases.
As a direct consequence, we prove that when $X$ satisfies $\mathrm{T}_m$, a condition regarding the vanishing of certain $C_\infty$-operations on the rational cohomology of $X$, the rational cohomology of its ordered configuration spaces forms a free module over a twisted commutative algebra built out of the ordered configuration spaces of the affine space. This generalizes a result of Church--Ellenberg--Farb on the freeness of $\mathrm{FI}$-modules arising from the cohomology of ordered configuration spaces of open manifolds (which corresponds to the case where $m=1$) and, moreover, establishes higher representation stability for ordered configuration spaces, resolving the various conjectures of Miller--Wilson in this case. More generally, we provide an iterative procedure to study higher representation stability and compute explicit bounds for the derived indecomposables, i.e. higher analogs of $\mathrm{FI}$-hyperhomology, for generalized configuration spaces.
[47] arXiv:2004.00264 [pdf, ps, other]
Title: Beurling type invariant subspaces of composition operators
Comments: 12 pages
Let $\mathbb{D}$ be the open unit disk in $\mathbb{C}$, let $H^2$ denote the Hardy space on $\mathbb{D}$ and let $\varphi : \mathbb{D} \rightarrow \mathbb{D}$ be a holomorphic self map of $\mathbb{D}$. The composition operator $C_{\varphi}$ on $H^2$ is defined by \[ (C_{\varphi} f)(z)=f(\varphi(z)) \quad \quad (f \in H^2,\, z \in \mathbb{D}). \] Denote by $\mathcal{S}(\mathbb{D})$ the set of all functions that are holomorphic and bounded by one in modulus on $\mathbb{D}$, that is \[ \mathcal{S}(\mathbb{D}) = \{\psi \in H^\infty(\mathbb{D}): \|\psi\|_{\infty} := \sup_{z \in \mathbb{D}} |\psi(z)| \leq 1\}. \] The elements of $\mathcal{S}(\mathbb{D})$ are called Schur functions. The aim of this paper is to answer the following question concerning invariant subspaces of composition operators: Characterize $\varphi$, holomorphic self maps of $\mathbb{D}$, and inner functions $\theta \in H^\infty(\mathbb{D})$ such that the Beurling type invariant subspace $\theta H^2$ is an invariant subspace for $C_{\varphi}$. We prove the following result: $C_{\varphi} (\theta H^2) \subseteq \theta H^2$ if and only if \[ \frac{\theta \circ \varphi}{\theta} \in \mathcal{S}(\mathbb{D}). \] This classification also allows us to recover or improve some known results on Beurling type invariant subspaces of composition operators.
[48] arXiv:2004.00265 [pdf, other]
Title: Learning Constitutive Relations using Symmetric Positive Definite Neural Networks
Comments: 31 pages, 20 figures
Subjects: Numerical Analysis (math.NA)
We present the Cholesky-factored symmetric positive definite neural network (SPD-NN) for modeling constitutive relations in dynamical equations. Instead of directly predicting the stress, the SPD-NN trains a neural network to predict the Cholesky factor of a tangent stiffness matrix, based on which the stress is calculated in the incremental form. As a result of the special structure, SPD-NN weakly imposes convexity on the strain energy function, satisfies time consistency for path-dependent materials, and therefore improves numerical stability, especially when the SPD-NN is used in finite element simulations. Depending on the types of available data, we propose two training methods, namely direct training for strain and stress pairs and indirect training for loads and displacement pairs. We demonstrate the effectiveness of SPD-NN on hyperelastic, elasto-plastic, and multiscale fiber-reinforced plate problems from solid mechanics. The generality and robustness of the SPD-NN make it a promising tool for a wide range of constitutive modeling applications.
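A minimal sketch of the Cholesky parametrization idea (our own simplification; the `network` below is a placeholder callable, not the paper's architecture or training procedure):

```python
import numpy as np

def spd_from_cholesky(entries, n):
    """Assemble an SPD matrix from n*(n+1)/2 predicted lower-triangular
    entries: exponentiating the diagonal makes L invertible, so L @ L.T is
    symmetric positive definite by construction."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = entries
    L[np.diag_indices(n)] = np.exp(np.diagonal(L))
    return L @ L.T

def stress_update(network, strain, stress, dstrain):
    """Incremental constitutive update: the predicted tangent stiffness
    maps the strain increment to the stress increment."""
    C = spd_from_cholesky(network(strain), len(strain))
    return stress + C @ dstrain
```

Predicting the Cholesky factor rather than the stiffness itself guarantees symmetry and positive definiteness of the tangent without any constraint on the network output, which is the structural point the abstract emphasizes.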
[49] arXiv:2004.00270 [pdf, ps, other]
Title: Anisotropic and crystalline mean curvature flow of mean-convex sets
Authors: Antonin Chambolle (CEREMADE, CMAP), Matteo Novaga
We consider a variational scheme for the anisotropic (including crystalline) mean curvature flow of sets with strictly positive anisotropic mean curvature. We show that this condition is preserved by the scheme, and we prove the strict convergence in BV of the time-integrated perimeters of the approximating evolutions, extending a recent result of De Philippis and Laux to the anisotropic setting. We also prove uniqueness of the flat flow obtained in the limit.
[50] arXiv:2004.00271 [pdf, ps, other]
Title: The fundamental group of quotients of products of some topological spaces by a finite group -- A generalization of a Theorem of Bauer-Catanese-Grunewald-Pignatelli
Authors: Rodolfo Aguilar (IF)
We provide a description of the fundamental group of the quotient of a product of topological spaces $X_i$ admitting a universal cover by a finite group $G$, provided that there are only finitely many path-connected components in $X_i^g$ for every $g \in G$. This generalizes previous work of Bauer-Catanese-Grunewald-Pignatelli and Dedieu-Perroni.
Subjects: Numerical Analysis (math.NA)
We prove lower bounds for the worst case error of quadrature formulas that use given sample points ${\mathcal X}_n = \{ x_1, \dots , x_n \}$. We are mainly interested in optimal point sets ${\mathcal X}_n$, but also prove lower bounds that hold for most randomly selected sets. As a tool, we use a recent result (and extensions thereof) of Vyb\'iral on the positive semi-definiteness of certain matrices related to the product theorem of Schur. The new technique also works for spaces of analytic functions where known methods based on decomposable kernels cannot be applied.
Title: A function from Stern's diatomic sequence, and its properties
Authors: Yasuhisa Yamada
We define a function by refining Stern's diatomic sequence. We name it the {\it assembly function}. It is strictly increasing and continuous. The first and second main theorems concern an action on the function. The third theorem concerns the differentiability of the function at rational points.
Title: Symplectic and Euclidean Planes, a unified approach to the Ramanujan conjecture
Subjects: Number Theory (math.NT)
The Ramanujan conjecture for modular forms of holomorphic type was proved by Deligne almost half a century ago: the proof, based on his earlier proof of Weil's conjectures, was an achievement of algebraic geometry. Quite recently, we proved the conjecture in the case of Maass forms. We here show that, trading the metaplectic representation for the anaplectic representation, one can find in the holomorphic case a proof absolutely parallel to that used in the Maass case. This unified treatment gives an active role to an extensive dictionary between concepts related to the symplectic and Euclidean structures of the plane.
[54] arXiv:2004.00285 [pdf, other]
Title: An action of the cactus group on shifted tableau crystals
Authors: Inês Rodrigues
Subjects: Combinatorics (math.CO)
Recently, Gillespie, Levinson and Purbhoo introduced a crystal-like structure for shifted tableaux, called the shifted tableau crystal. We introduce a shifted analogue of the crystal reflection operators, which coincides with the restriction of the shifted Sch\"utzenberger involution to any primed interval of two adjacent letters. Unlike type $A$ Young tableau crystals, these operators do not realize an action of the symmetric group on the shifted tableau crystal because braid relations do not hold. We exhibit a natural internal action of the cactus group, realized by restrictions of the shifted Sch\"utzenberger involution on primed intervals of the underlying crystal alphabet.
Title: Deferred Cesàro conull FK spaces
Subjects: Functional Analysis (math.FA)
In this paper, we study the (strongly) deferred Ces\`{a}ro conull FK-spaces and we give some characterizations. We also apply these results to summability domains.
Title: Phantom maps and fibrations
Authors: Hiroshi Kihara
Comments: 8 pages
Subjects: Algebraic Topology (math.AT)
Given pointed $CW$-complexes $X$ and $Y$, $\mathrm{Ph}(X, Y)$ denotes the set of homotopy classes of phantom maps from $X$ to $Y$ and $\mathrm{SPh}(X, Y)$ denotes the subset of $\mathrm{Ph}(X, Y)$ consisting of homotopy classes of special phantom maps. In a preceding paper, we gave a sufficient condition such that $\mathrm{Ph}(X, Y)$ and $\mathrm{SPh}(X, Y)$ have natural group structures and established a formula for calculating the groups $\mathrm{Ph}(X, Y)$ and $\mathrm{SPh}(X, Y)$ in many cases where the groups $[X,\Omega \widehat{Y}]$ are nontrivial. In this paper, we establish a dual version of the formula, in which the target is the total space of a fibration, to calculate the groups $\mathrm{Ph}(X, Y)$ and $\mathrm{SPh}(X, Y)$ for pairs $(X,Y)$ to which the formula or existing methods do not apply. In particular, we calculate the groups $\mathrm{Ph}(X,Y)$ and $\mathrm{SPh}(X,Y)$ for pairs $(X,Y)$ such that $X$ is the classifying space $BG$ of a compact Lie group $G$ and $Y$ is a highly connected cover $Y' \langle n \rangle$ of a nilpotent finite complex $Y'$ or the quotient $\mathbb{G} / H$ of $\mathbb{G} = U, O$ by a compact Lie group $H$.
Title: Robust synchronization of heterogeneous robot swarms on the sphere
Synchronization on the sphere is important to certain control applications in swarm robotics. Of recent interest is the Lohe model, which generalizes the Kuramoto model from the circle to the sphere. The Lohe model is mainly studied in mathematical physics as a toy model of quantum synchronization. The model makes few assumptions, and is therefore well-suited to represent a swarm. Previous work on this model has focused on the cases of complete and acyclic networks or on the homogeneous case where all oscillator frequencies are equal. This paper concerns the case of heterogeneous oscillators connected by a non-trivial network. We show that any undesired equilibrium is exponentially unstable if the frequencies satisfy a given bound. This property can also be interpreted as a robustness result for small model perturbations of the homogeneous case with zero frequencies. As such, the Lohe model is a good choice for control applications in swarm robotics.
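For orientation, a commonly used form of the real Lohe model on the unit sphere, with a naive integration sketch (our own code; the equation is the standard one from the literature, and the projection step is an implementation convenience, not part of the paper's analysis):

```python
import numpy as np

def lohe_step(X, Omega, A, K=1.0, dt=1e-3):
    """One explicit Euler step of the real Lohe model
        dx_i/dt = Omega_i x_i + (K/N) * sum_j a_ij (x_j - <x_i, x_j> x_i),
    followed by renormalization onto the sphere.
    X: (N, d) unit state vectors; Omega: (N, d, d) skew-symmetric frequency
    matrices (heterogeneous, as in the abstract); A: (N, N) adjacency matrix."""
    N = X.shape[0]
    Y = np.empty_like(X)
    for i in range(N):
        coupling = sum(A[i, j] * (X[j] - (X[i] @ X[j]) * X[i]) for j in range(N))
        Y[i] = X[i] + dt * (Omega[i] @ X[i] + (K / N) * coupling)
        Y[i] /= np.linalg.norm(Y[i])    # project back onto the unit sphere
    return Y
```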
[58] arXiv:2004.00299 [pdf, other]
Comments: Submitted to the IEEE for possible publication
Most works on cell-free massive multiple-input multiple-output (MIMO) consider non-cooperative precoding strategies at the base stations (BSs) to avoid extensive channel state information (CSI) exchange via backhaul signaling. However, considerable performance gains can be accomplished by allowing coordination among the BSs. This paper proposes the first distributed framework for cooperative precoding design in cell-free massive MIMO (and, more generally, in joint transmission coordinated multi-point) systems that entirely eliminates the need for backhaul signaling for CSI exchange. A novel over-the-air (OTA) signaling mechanism is introduced such that each BS can obtain the same cross-term information that is traditionally exchanged among the BSs via backhaul signaling. The proposed distributed precoding design enjoys desirable flexibility and scalability properties, as the amount of OTA signaling does not scale with the number of BSs or user equipments. Numerical results show fast convergence and remarkable performance gains as compared with non-cooperative precoding design. The proposed scheme may also outperform the centralized precoding design under realistic CSI acquisition.
[59] arXiv:2004.00311 [pdf, other]
Title: Fluctuation theory in the Boltzmann--Grad limit
We develop a rigorous theory of hard-sphere dynamics in the kinetic regime, away from thermal equilibrium. In the low density limit, the empirical density obeys a law of large numbers and the dynamics is governed by the Boltzmann equation. Deviations from this behaviour are described by dynamical correlations, which can be fully characterized for short times. This provides both a fluctuating Boltzmann equation and large deviation asymptotics.
Title: The double Cayley Grassmannian
Authors: Laurent Manivel (IMT)
Subjects: Algebraic Geometry (math.AG)
We study the smooth projective symmetric variety of Picard number one that compactifies the exceptional complex Lie group $G_2$, by describing it in terms of vector bundles on the spinor variety of $\mathrm{Spin}(14)$. We call it the double Cayley Grassmannian because, quite remarkably, it exhibits very similar properties to those of the Cayley Grassmannian (the other symmetric variety of type $G_2$), but doubled in a certain sense. We deduce among other things that all smooth projective symmetric varieties of Picard number one are infinitesimally rigid.
[61] arXiv:2004.00321 [pdf, other]
Subjects: Analysis of PDEs (math.AP)
We consider a model for elastic dislocations in geophysics. We model a portion of the Earth's crust as a bounded, inhomogeneous elastic body with a buried fault surface, along which slip occurs. We prove well-posedness of the resulting mixed-boundary-value-transmission problem, assuming only bounded elastic moduli. We establish uniqueness in the inverse problem of determining the fault surface and the slip from a single measurement of the displacement on an open patch at the surface, assuming in addition that the Earth's crust is an isotropic, layered medium with Lam\'e coefficients piecewise Lipschitz on a known partition and that the fault surface is a graph with respect to an arbitrary coordinate system. These results substantially extend those of the authors in Arch. Ration. Mech. Anal. 263 (2020), no. 1, 71--111.
Comments: 21 pages, 3 figures
Subjects: Numerical Analysis (math.NA)
In this work, we consider a boundary value problem for a nonlinear triharmonic equation. By reducing the nonlinear boundary value problem to an operator equation for the nonlinear term, we establish the existence, uniqueness, and positivity of the solution. More importantly, we design an iterative method, at both the continuous and discrete levels, for the numerical solution of the problem. An analysis of the actual total error of the obtained discrete solution is given. Some examples demonstrate the applicability of the theoretical results on qualitative aspects and the efficiency of the iterative method.
Authors: Philippe Soulier
Subjects: Probability (math.PR)
The goal of this paper is to investigate the tools of extreme value theory originally introduced for discrete time stationary stochastic processes (time series), namely the tail process and the tail measure, in the framework of continuous time stochastic processes with paths in the space $\mathcal{D}$ of c\`adl\`ag functions indexed by $\mathbb{R}$, endowed with Skorohod's $J_1$ topology. We prove that the essential properties of these objects are preserved, with some minor (though interesting) differences arising. We first obtain structural results which provide representation for homogeneous shift-invariant measures on $\mathcal{D}$ and then study regular variation of random elements in $\mathcal{D}$. We give practical conditions and study several examples, recovering and extending known results.
Title: Multiple Lucas Dirichlet series associated to additive and Dirichlet characters
Comments: 20 pages
Subjects: Number Theory (math.NT)
In this article, we obtain the analytic continuation of the multiple shifted Lucas zeta function and of the multiple Lucas $L$-functions associated to Dirichlet characters and additive characters. We then compute a complete list of the exact singularities of these functions and their residues at these poles. Further, we show the rationality of the multiple Lucas $L$-functions associated with quadratic characters at negative integer arguments.
Title: Stochastic PDEs via convex minimization
Comments: 30 pages
We prove the applicability of the Weighted Energy-Dissipation (WED) variational principle [50] to nonlinear parabolic stochastic partial differential equations in abstract form. The WED principle consists in the minimization of a parameter-dependent convex functional on entire trajectories. Its unique minimizers correspond to elliptic-in-time regularizations of the stochastic differential problem. As the regularization parameter tends to zero, solutions of the limiting problem are recovered. In particular, this provides a direct approach via convex optimization to the approximation of nonlinear stochastic partial differential equations.
Title: Discrete-time Simulation of Stochastic Volterra Equations
Subjects: Numerical Analysis (math.NA); Probability (math.PR)
We study discrete-time simulation schemes for stochastic Volterra equations, namely the Euler and Milstein schemes, and the corresponding Multi-Level Monte-Carlo method. By using and adapting some results from Zhang [22], together with the Garsia-Rodemich-Rumsey lemma, we obtain the convergence rates of the Euler scheme and Milstein scheme under the supremum norm. We then apply these schemes to approximate the expectation of functionals of such Volterra equations by the (Multi-Level) Monte-Carlo method, and compute their complexity.
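A sketch of the Euler scheme for a scalar stochastic Volterra equation (a standard discretization written by us for illustration; the kernel, drift, and diffusion below are arbitrary example choices, not taken from the paper):

```python
import numpy as np

def euler_volterra(x0, K, b, sigma, T=1.0, n=1000, seed=None):
    """Euler scheme for X_t = x0 + int_0^t K(t,s) b(X_s) ds
    + int_0^t K(t,s) sigma(X_s) dW_s: both integrals are frozen on the
    grid s_j = j*T/n, using the already-computed values X_{s_j}."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.standard_normal(n) * np.sqrt(dt)
    t = np.linspace(0.0, T, n + 1)
    X = np.full(n + 1, float(x0))
    for i in range(1, n + 1):
        s = t[:i]                            # grid points strictly below t_i
        Ks = K(t[i], s)
        X[i] = x0 + dt * np.sum(Ks * b(X[:i])) + np.sum(Ks * sigma(X[:i]) * dW[:i])
    return t, X

# Example with a rough fractional kernel K(t,s) = (t-s)^(H - 1/2), H = 0.7:
t, X = euler_volterra(1.0, lambda t, s: (t - s) ** 0.2,
                      b=lambda x: -x, sigma=lambda x: 0.5 * np.ones_like(x), seed=0)
```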
[67] arXiv:2004.00341 [pdf, other]
Subjects: Numerical Analysis (math.NA)
Bilayer plates are compound materials that exhibit large bending deformations when exposed to environmental changes that lead to different mechanical responses in the involved materials. In this article a new numerical method which is suitable for simulating the isometric deformation induced by a given material mismatch in a bilayer plate is discussed. A dimensionally reduced formulation of the bending energy is discretized generically in an abstract setting and specified for discrete Kirchhoff triangles; convergence towards the continuous formulation is proved. A practical semi-implicit discrete gradient flow employing a linearization of the isometry constraint is proposed as an iterative method for the minimization of the bending energy; stability and a bound on the violation of the isometry constraint are proved. The incorporation of obstacles is discussed and the practical performance of the method is illustrated with numerical experiments involving the simulation of large bending deformations and investigation of contact phenomena.
[68] arXiv:2004.00343 [pdf, other]
Comments: 29 pages, 34 figures
Evidence from experimental studies shows that oscillations due to electro-mechanical coupling can be generated spontaneously in smooth muscle cells. Such cellular dynamics are known as \textit{pacemaker dynamics}. In this article we address pacemaker dynamics associated with the interaction of $\text{Ca}^{2+}$ and $\text{K}^+$ fluxes in the cell membrane of a smooth muscle cell. First we reduce a pacemaker model to a two-dimensional system equivalent to the reduced Morris-Lecar model and then perform a detailed numerical bifurcation analysis of the reduced model. Existing bifurcation analyses of the Morris-Lecar model concentrate on external applied current whereas we focus on parameters that model the response of the cell to changes in transmural pressure. We reveal a transition between Type I and Type II excitabilities with no external current required. We also compute a two-parameter bifurcation diagram and show how the transition is explained by the bifurcation structure.
Authors: Emilio A. Lauret
Subjects: Differential Geometry (math.DG)
Let $G$ be a compact connected Lie group of dimension $m$. Once a bi-invariant metric on $G$ is fixed, left-invariant metrics on $G$ are in correspondence with $m\times m$ positive definite symmetric matrices. We estimate the diameter and the smallest positive eigenvalue of the Laplace-Beltrami operator associated to a left-invariant metric on $G$ in terms of the eigenvalues of the corresponding positive definite symmetric matrix. As a consequence, we give partial answers to a conjecture by Eldredge, Gordina and Saloff-Coste; namely, we give large subsets $\mathcal S$ of the space of left-invariant metrics $\mathcal M$ on $G$ such that there exists a positive real number $C$ depending on $G$ and $\mathcal S$ such that $\lambda_1(G,g)\operatorname{diam}(G,g)^2\leq C$ for all $g\in\mathcal S$. The existence of the constant $C$ for $\mathcal S=\mathcal M$ is the original conjecture.
Title: Large deviations for Brownian motion in evolving Riemannian manifolds
Authors: Rik Versendaal
Subjects: Probability (math.PR)
We prove large deviations for $g(t)$-Brownian motion in a complete, evolving Riemannian manifold $M$ with respect to a collection $\{g(t)\}_{t\in [0,1]}$ of Riemannian metrics, smoothly depending on $t$. We show how the large deviations are obtained from the large deviations of the (time-dependent) horizontal lift of $g(t)$-Brownian motion to the frame bundle $FM$ over $M$. The latter is proved by embedding the frame bundle into some Euclidean space and applying Freidlin-Wentzell theory for diffusions with time-dependent coefficients, where the coefficients are jointly Lipschitz in space and time.
Title: A convolution quadrature method for Maxwell's equations in dispersive media
Subjects: Numerical Analysis (math.NA)
We study the systematic numerical approximation of Maxwell's equations in dispersive media. Two discretization strategies are considered, one based on a traditional leapfrog time integration method and the other based on convolution quadrature. The two schemes are proven to be equivalent and to preserve the underlying energy-dissipation structure of the problem. The second approach, however, is independent of the number of internal states and allows one to handle rather general dispersive materials. Using ideas of fast-and-oblivious convolution quadrature, the method can be implemented efficiently.
Title: On Sarnak's Density Conjecture and its Applications
Comments: 61 pages, 2 figures
Sarnak's Density Conjecture is an explicit bound on the multiplicities of non-tempered representations in a sequence of cocompact congruence arithmetic lattices in a semisimple Lie group, which is motivated by the work of Sarnak and Xue. The goal of this work is to discuss similar hypotheses, their interrelation and their applications. We mainly focus on two properties -- the Spectral Spherical Density Hypothesis and the Geometric Weak Injective Radius Property. Our results are strongest in the p-adic case, where we show that the two properties are equivalent, and that both imply Sarnak's General Density Hypothesis. One possible application is that either the limit multiplicity property or the weak injective radius property implies Sarnak's Optimal Lifting Property. Conjecturally, all of these properties should hold in great generality. We hope that this work will motivate their proofs in new cases.
Comments: 38 pages
Subjects: Analysis of PDEs (math.AP)
In this paper we deal with the Cauchy problem for the incompressible Euler equations in the three-dimensional periodic setting. We prove non-uniqueness for an $L^2$-dense set of H\"older continuous initial data in the class of H\"older continuous admissible weak solutions for all exponents below the Onsager-critical $1/3$. This improves previous results on non-uniqueness obtained by Daneri in arXiv:1302.0988 and by Daneri and Szekelyhidi Jr. in arXiv:1603.09714 and generalizes the result obtained by Buckmaster, De Lellis, Szekelyhidi Jr. and Vicol in arXiv:1701.08678.
Title: The existence phase transition for two Poisson random fractal models
Subjects: Probability (math.PR)
In this paper we study the existence phase transition of the random fractal ball model and the random fractal box model. We show that both of these are in the empty phase at the critical point of this phase transition.
[75] arXiv:2004.00397 [pdf, other]
Title: Optimal Formation of Autonomous Vehicles in Mixed Traffic Flow
Platooning of multiple autonomous vehicles has attracted significant attention in both academia and industry. Despite its great potential, platooning is not the only choice for the formation of autonomous vehicles in mixed traffic flow, where autonomous vehicles and human-driven vehicles (HDVs) coexist. In this paper, we investigate the optimal formation of autonomous vehicles that can achieve an optimal system-wide performance in mixed traffic flow. Specifically, we consider the optimal $\mathcal{H}_2$ performance of the entire traffic flow, reflecting the potential of autonomous vehicles in mitigating traffic perturbations. Then, we formulate the optimal formation problem as a set function optimization problem. Numerical results reveal two predominant optimal formations: uniform distribution and platoon formation, depending on traffic parameters. In addition, we show that 1) the prevailing platoon formation is not always the optimal choice; 2) platoon formation might be the worst choice when HDVs have a poor string stability behavior. These results suggest more opportunities for the formation of autonomous vehicles, beyond platooning, in mixed traffic flow.
[76] arXiv:2004.00398 [pdf, ps, other]
Title: Hermitian theta series and Maaß spaces under the action of the maximal discrete extension of the Hermitian modular group
Authors: Annalena Wernz
Subjects: Number Theory (math.NT)
Let $\Gamma_n(\mathcal{O}_{\mathbb{K}})$ denote the Hermitian modular group of degree $n$ over an imaginary quadratic number field $\mathbb{K}$ and $\Delta_{n,\mathbb{K}}^*$ its maximal discrete extension in the special unitary group $SU(n,n;\mathbb{C})$. In this paper we study the action of $\Delta_{n,\mathbb{K}}^*$ on Hermitian theta series and Maass spaces. For $n=2$ we will find theta lattices such that the corresponding theta series are modular forms with respect to $\Delta_{2,\mathbb{K}}^*$ as well as examples where this is not the case. Our second focus lies on studying two different Maass spaces. We will see that the newly found group $\Delta_{2,\mathbb{K}}^*$ consolidates the different definitions of the spaces.
[77] arXiv:2004.00409 [pdf, ps, other]
Title: Fekete-Szego Inequality For Analytic And Bi-univalent Functions Subordinate To (p; q)-Lucas Polynomials
Authors: Ala Amourah
Subjects: Complex Variables (math.CV)
In the present paper, a subclass of analytic and bi-univalent functions by means of (p; q)-Lucas polynomials is introduced. Certain coefficient bounds for functions belonging to this subclass are obtained. Furthermore, the Fekete-Szego problem for this subclass is solved.
[78] arXiv:2004.00414 [pdf, other]
Title: Discrete orthogonal polynomials as a tool for detection of small anomalies of time series: a case study of GPS final orbits
Subjects: Numerical Analysis (math.NA)
In this paper, we show that the classical discrete orthogonal univariate polynomials (namely, Hahn polynomials on an equidistant lattice with unit weights) of sufficiently high degrees have extremely small values near the endpoints (we call this property "rapid decay near the endpoints of the discrete lattice"). We demonstrate the importance of the proved results by applying polynomial least squares approximation to the detection of anomalous values in IGS final orbits for GPS and GLONASS satellites. We propose a numerically stable method for the construction of discrete orthogonal polynomials of high degrees. It allows one to reliably construct Hahn-Chebyshev polynomials using standard accuracy (double precision, 8-byte) on thousands of points, for degrees up to several hundred. A Julia implementation of the mentioned algorithms is available at https://github.com/sptsarev/high-deg-polynomial-fitting.
These results seem to be new; their explanation in the framework of the well-known asymptotic theory of discrete orthogonal polynomials could not be found in the literature.
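As a toy illustration of the detection idea (a sketch only, not the paper's Hahn-polynomial method; the signal, anomaly size, and threshold below are invented), one can fit a high-degree polynomial by least squares and flag points with unusually large residuals, with numpy's Chebyshev basis standing in for a numerically stable orthogonal basis:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 2000)
    x = np.sin(5 * t) + 1e-4 * rng.standard_normal(t.size)  # smooth + noise
    x[700] += 5e-3                                   # planted small anomaly
    coef = np.polynomial.chebyshev.chebfit(t, x, deg=50)    # stable LS fit
    resid = x - np.polynomial.chebyshev.chebval(t, coef)
    sigma = np.median(np.abs(resid)) / 0.6745        # robust noise estimate
    print(np.flatnonzero(np.abs(resid) > 6 * sigma))        # -> [700]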
[79] arXiv:2004.00416 [pdf, ps, other]
Title: Multiplicity results for elliptic problems involving nonlocal integrodifferential operators without Ambrosetti-Rabinowitz condition
Comments: arXiv admin note: text overlap with arXiv:2003.13646
Subjects: Analysis of PDEs (math.AP)
In this paper, we study the existence and multiplicity of weak solutions for a general class of elliptic equations $(\mathscr{P}_{\lambda})$ in a smooth bounded domain, driven by a nonlocal integrodifferential operator $\mathscr{L}_{\mathcal{A}K}$ with Dirichlet boundary conditions involving variable exponents, without Ambrosetti and Rabinowitz type growth conditions. Using different versions of the Mountain Pass Theorem, as well as the Fountain Theorem and Dual Fountain Theorem with Cerami condition, we obtain the existence of weak solutions for the problem $(\mathscr{P}_{\lambda})$, and we show that the problem treated has at least one nontrivial solution for any parameter $\lambda>0$ small enough, as well as that the solution blows up, in the fractional Sobolev norm, as $\lambda \to 0$. Moreover, in the sublinear case, by imposing some additional hypotheses on the nonlinearity $f(x,\cdot)$ and by using a new version of the symmetric Mountain Pass Theorem due to Kajikiya [36], we obtain the existence of infinitely many weak solutions which tend to zero, in the fractional Sobolev norm, for any parameter $\lambda>0$. As far as we know, the results of this paper are new in the literature.
[80] arXiv:2004.00417 [pdf, other]
Title: Distributed Nonlinear Model Predictive Control and Metric Learning for Heterogeneous Vehicle Platooning with Cut-in/Cut-out Maneuvers
Vehicle platooning has been shown to be quite fruitful in the transportation industry to enhance fuel economy, road throughput, and driving comfort. Model Predictive Control (MPC) is widely used in the literature for platoon control to achieve certain objectives, such as safely reducing the distance among consecutive vehicles while following the leader vehicle. In this paper, we propose a Distributed Nonlinear MPC (DNMPC), based upon an existing approach, to control a heterogeneous dynamic platoon with unidirectional topologies, handling possible cut-in/cut-out maneuvers. The introduced method guarantees a collision-free driving experience while tracking the desired speed profile and maintaining a safe desired gap among the vehicles. The convergence time of the dynamic platoon is derived based on the times of the cut-in and/or cut-out maneuvers. In addition, we analyze the level of improvement of driving comfort, fuel economy, and absolute and relative convergence of the method by using distributed metric learning and distributed optimization with the Alternating Direction Method of Multipliers (ADMM). Simulation results on a dynamic platoon with cut-in and cut-out maneuvers and with different unidirectional topologies show the effectiveness of the introduced method.
[81] arXiv:2004.00419 [pdf, ps, other]
Title: Local Algebras for Causal Fermion Systems in Minkowski Space
Comments: 49 pages, LaTeX
Subjects: Mathematical Physics (math-ph)
A notion of local algebras is introduced in the theory of causal fermion systems. Their properties are studied in the example of the regularized Dirac sea vacuum in Minkowski space. The commutation relations are worked out, and the differences to the canonical commutation relations are discussed. It is shown that the spacetime point operators associated to a Cauchy surface satisfy a time slice axiom. It is proven that the algebra generated by operators in an open set is irreducible as a consequence of Hegerfeldt's theorem. The light cone structure is recovered by analyzing expectation values of the operators in the algebra in the limit when the regularization is removed. It is shown that every spacetime point operator commutes with the algebras localized away from its null cone, up to small corrections involving the regularization length.
[82] arXiv:2004.00420 [pdf, ps, other]
Title: Gradient Flows of Higher Order Yang-Mills-Higgs Functionals
Authors: Pan Zhang
Comments: 22 pages
In this paper, we define a family of functionals generalizing the Yang-Mills-Higgs functional on a closed Riemannian manifold. Then we prove the short time existence of the corresponding gradient flow by a gauge fixing technique. The lack of a maximum principle for the higher order operator causes considerable difficulties in the estimates for the Higgs field. We observe that the $L^2$-bound of the Higgs field is enough for energy estimates in dimension $4$, and we show that, provided the order of derivatives appearing in the higher order Yang-Mills-Higgs functionals is strictly greater than $1$, solutions to the gradient flow do not hit any finite time singularities. As for the Yang-Mills-Higgs $k$-functional with Higgs self-interaction, we show that, provided $\dim(M)<2(k+1)$, the associated gradient flow admits long time existence with smooth initial data. The proof depends on local $L^2$-derivative estimates, energy estimates and blow-up analysis.
[83] arXiv:2004.00422 [pdf, other]
Title: A General Large Neighborhood Search Framework for Solving Integer Programs
This paper studies how to design abstractions of large-scale combinatorial optimization problems that can leverage existing state-of-the-art solvers in general purpose ways, and that are amenable to data-driven design. The goal is to arrive at new approaches that can reliably outperform existing solvers in wall-clock time. We focus on solving integer programs, and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed. The appeal of LNS is that it can easily use any existing solver as a subroutine, and thus can inherit the benefits of carefully engineered heuristic approaches and their software implementations. We also show that one can learn a good neighborhood selector from training data. Through an extensive empirical validation, we demonstrate that our LNS framework can significantly outperform, in wall-clock time, state-of-the-art commercial solvers such as Gurobi.
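The LNS loop the abstract builds on can be sketched generically as follows; solve_subproblem and score are hypothetical user-supplied callbacks wrapping any existing solver, and the uniformly random neighborhood selector is a stand-in for the learned one described in the paper.

    import random

    def lns(initial, variables, solve_subproblem, score, rounds=100, k=10):
        """Generic large neighborhood search: repeatedly free a subset of
        variables, re-optimize them with an existing solver as a subroutine,
        and keep the candidate if it improves the incumbent (minimization)."""
        best = dict(initial)
        for _ in range(rounds):
            free = random.sample(variables, k)   # neighborhood selection
            fixed = {v: best[v] for v in variables if v not in free}
            candidate = solve_subproblem(fixed, free)
            if score(candidate) < score(best):
                best = candidate
        return best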
[84] arXiv:2004.00424 [pdf, ps, other]
Title: Solving the inverse problem for an ordinary differential equation using conjugation
Subjects: Optimization and Control (math.OC); Dynamical Systems (math.DS); Numerical Analysis (math.NA); Chaotic Dynamics (nlin.CD)
We consider the following inverse problem for an ordinary differential equation (ODE): given a set of data points $(t_i,x_i)$, $i=1,\cdots,N$, find an ODE $x^\prime(t) = v (x(t))$ that admits a solution $x(t)$ such that $x_i = x(t_i)$. To determine the field $v(x)$, we use the conjugate map defined by the Schr\"{o}der equation and the solution of a related Julia equation. We also study existence, uniqueness, stability and other properties of this solution. Finally, we present several numerical methods for the approximation of the field $v(x)$ and provide some illustrative examples of the application of these methods.
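A crude numerical stand-in for the problem statement (finite differences plus interpolation; the paper's conjugation approach via the Schröder and Julia equations is far more refined, and the data below are synthetic):

    import numpy as np
    from scipy.interpolate import interp1d

    # Synthetic data from x'(t) = v(x(t)) with v(x) = x, so x(t) = e^t.
    t = np.linspace(0.0, 1.0, 50)
    x = np.exp(t)
    v_vals = np.gradient(x, t)                    # v(x(t_i)) ~ dx/dt
    v = interp1d(x, v_vals, fill_value="extrapolate")
    print(float(v(2.0)), "vs exact", 2.0)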
[85] arXiv:2004.00429 [pdf, ps, other]
Title: Absolute concentration robustness in power law kinetic systems
Comments: submitted for publication; 26 pages. arXiv admin note: text overlap with arXiv:1908.04497
Subjects: Dynamical Systems (math.DS)
Absolute concentration robustness (ACR) is a condition wherein a species in a chemical kinetic system possesses the same value for any positive steady state the network may admit, regardless of initial conditions. Thus far, results on ACR center on chemical kinetic systems with deficiency one. In this contribution, we use the idea of dynamic equivalence of chemical reaction networks to derive novel results that guarantee ACR for some classes of power law kinetic systems with deficiency zero. Furthermore, using network decomposition, we identify ACR in higher deficiency networks (i.e. deficiency $\geq$ 2) by considering the presence of a low deficiency subnetwork with ACR. Network decomposition also enables us to recognize and define a weaker form of concentration robustness than ACR, which we call `balanced concentration robustness'. Finally, we also discuss and emphasize our view of ACR as a primarily kinetic character rather than a condition that arises from structural sources.
[86] arXiv:2004.00435 [pdf, ps, other]
Title: Lower bounds for regular genus and gem-complexity of PL 4-manifolds with boundary
Comments: 19 pages, 4 figures
Subjects: Geometric Topology (math.GT); Combinatorics (math.CO)
Let $M$ be a compact connected PL 4-manifold with boundary. In this article, we give several lower bounds for the regular genus and gem-complexity of the manifold $M$. In particular, we prove that if $M$ is a connected compact $4$-manifold with $h$ boundary components, then its gem-complexity $\mathit{k}(M)$ satisfies the following inequalities:
$$\mathit{k}(M)\geq 3\chi(M)+7m+7h-10 \mbox{ and }\mathit{k}(M)\geq \mathit{k}(\partial M)+3\chi(M)+4m+6h-9,$$ and its regular genus $\mathcal{G}(M)$ satisfies the following inequalities:
$$\mathcal{G}(M)\geq 2\chi(M)+3m+2h-4\mbox{ and }\mathcal{G}(M)\geq \mathcal{G}(\partial M)+2\chi(M)+2m+2h-4,$$ where $m$ is the rank of the fundamental group of the manifold $M$. These lower bounds enable us to strictly improve previously known estimates for the regular genus and gem-complexity of a PL $4$-manifold with boundary. Further, the sharpness of these bounds is also shown for a large class of PL $4$-manifolds with boundary.
[87] arXiv:2004.00437 [pdf, ps, other]
Title: Statistics of subgroups of the modular group
Comments: 62 pages
Subjects: Group Theory (math.GR); Combinatorics (math.CO)
We count the finitely generated subgroups of the modular group $\textsf{PSL}(2,\mathbb{Z})$. More precisely: each such subgroup $H$ can be represented by its Stallings graph $\Gamma(H)$, we consider the number of vertices of $\Gamma(H)$ to be the size of $H$ and we count the subgroups of size $n$. Since an index $n$ subgroup has size $n$, our results generalize the known results on the enumeration of the finite index subgroups of $\textsf{PSL}(2,\mathbb{Z})$. We give asymptotic equivalents for the number of finitely generated subgroups of $\textsf{PSL}(2,\mathbb{Z})$, as well as of the number of finite index subgroups, free subgroups and free finite index subgroups. We also give the expected value of the isomorphism type of a size $n$ subgroup and prove a large deviations statement concerning this value. Similar results are proved for finite index and for free subgroups. Finally, we show how to efficiently generate uniformly at random a size $n$ subgroup (resp. finite index subgroup, free subgroup) of $\textsf{PSL}(2,\mathbb{Z})$.
[88] arXiv:2004.00444 [pdf, ps, other]
Title: The Heston stochastic volatility model has a boundary trace at zero volatility
Comments: 48 pages
Subjects: Analysis of PDEs (math.AP)
We establish boundary regularity results in H\"older spaces for the degenerate parabolic problem obtained from the Heston stochastic volatility model in Mathematical Finance set up in the spatial domain (upper half-plane) $\mathbb{H} = \mathbb{R}\times (0,\infty)\subset \mathbb{R}^2$. Starting with nonsmooth initial data $u_0\in H$, we take advantage of smoothing properties of the parabolic semigroup $\mathrm{e}^{-t\mathcal{A}}\colon H\to H$, $t\in \mathbb{R}_+$, generated by the Heston model, to derive the smoothness of the solution $u(t) = \mathrm{e}^{-t\mathcal{A}} u_0$ for all $t>0$. The existence and uniqueness of a weak solution is obtained in a Hilbert space $H = L^2(\mathbb{H};\mathfrak{w})$ with very weak growth restrictions at infinity and on the boundary $\partial\mathbb{H} = \mathbb{R}\times \{ 0\}\subset \mathbb{R}^2$ of the half-plane $\mathbb{H}$. We investigate the influence of the boundary behavior of the initial data $u_0\in H$ on the boundary behavior of $u(t)$ for $t>0$.
[89] arXiv:2004.00447 [pdf, ps, other]
Title: The generalized linear period
Authors: Hengfei Lu
Comments: 14 pages
Subjects: Representation Theory (math.RT)
Let $F$ be a non-archimedean local field of characteristic zero. We study the linear period problem for the pair $(G,H_{p,p+1})=(GL_{2p+1}(F), GL_{p}(F)\times GL_{p+1}(F))$ and we prove that any bi-$H_{p,p+1}$-invariant generalized function on $G$ is invariant under the matrix transpose. We also show that any $P\cap H_{p,p+1}$-invariant linear functional on an $H_{p,p+1}$-distinguished irreducible smooth representation of $G$ is also $H_{p,p+1}$-invariant, where $P$ is a standard mirabolic subgroup of $G$ with last row vector $(0,\cdots,0,1)$.
[90] arXiv:2004.00453 [pdf, ps, other]
Title: More on $\omega$-orthogonality and $\omega$-parallelism
Subjects: Functional Analysis (math.FA)
We investigate some aspects of various numerical radius orthogonalities and numerical radius parallelism for bounded linear operators on a Hilbert space $\mathscr{H}$. Among several results, we show that if $T,S\in \mathbb{B}(\mathscr{H})$ and $M^*_{\omega(T)}=M^*_{\omega(S)}$, then $T\perp_{\omega B} S$ if and only if $S\perp_{\omega B} T$, where $M^*_{\omega(T)}=\{\{x_n\}:\,\,\,\|x_n\|=1, \lim_n|\langle Tx_n, x_n\rangle|=\omega(T)\}$, and $\omega(T)$ is the numerical radius of $T$ and $\perp_{\omega B}$ is the numerical radius Birkhoff orthogonality.
[91] arXiv:2004.00455 [pdf, other]
Title: A locking-free DPG scheme for Timoshenko beams
Subjects: Numerical Analysis (math.NA)
We develop a discontinuous Petrov-Galerkin scheme with optimal test functions (DPG method) for the Timoshenko beam bending model with various boundary conditions, combining clamped, supported, and free ends. Our scheme approximates the transverse deflection and bending moment. It converges quasi-optimally in $L_2$ and is locking free. In particular, it behaves well (converges quasi-optimally) in the limit case of the Euler-Bernoulli model. Several numerical results illustrate the performance of our method.
[92] arXiv:2004.00458 [pdf, ps, other]
Title: Uniform approximation of 2$d$ Navier-Stokes equation by stochastic interacting particle systems
Subjects: Probability (math.PR); Analysis of PDEs (math.AP); Functional Analysis (math.FA)
We consider an interacting particle system modeled as a system of $N$ stochastic differential equations driven by Brownian motions. We prove that the (mollified) empirical process converges, uniformly in time and space variables, to the solution of the two-dimensional Navier-Stokes equation written in vorticity form.
The proofs follow a semigroup approach.
[93] arXiv:2004.00460 [pdf, ps, other]
Title: An attempt of proof of Riemann Hypothesis
Authors: Roland Quême
Subjects: General Mathematics (math.GM)
This paper deals with an attempt of proof of the Riemann Hypothesis (RH). Let $T>10^{10}$ be arbitrarily large. Consider the region $\Omega_T=\Big\{z=x+i y\ \Big|\ \frac{1}{2}<x<1, \ 0<y<T\Big\}.$ There is a finite number $N_T$ of roots of $\zeta(z)$ in $\Omega_T$. The aim of the paper is to prove that $N_T=0$. Suppose that $N_T>0$. There exists at least one root $\rho=\frac{1}{2}+{\bf u}+i\gamma$ whose real part is greater than or equal to the real part of all the other roots in $\Omega_T$. Let $v\geq \frac{3}{2}$ and let $\varepsilon>0$ be arbitrarily small. We prove that $f(z)=\frac{\zeta'(z)}{\zeta(z)}$ is analytic in the open disk $\Omega_\varepsilon=\Big\{z\ \Big|\ \Big|z-\Big(\rho+\frac{\varepsilon}{2}+v\Big)\Big|< v\Big\}.$ Let $s=\rho+\varepsilon$. We prove, from the Taylor series of $\zeta(s)$, that $f(s)\sim \frac{1}{\varepsilon}\rightarrow \infty$ as $\varepsilon\rightarrow 0$, while, through the representation of $f(s)$ as the Taylor series $f(s)=f(c_0)-(v-\frac{\varepsilon}{2})f'(c_0) +\frac{(v-\frac{\varepsilon}{2})^2}{2!}f''(c_0)-\frac{(v-\frac{\varepsilon}{2})^3}{3!}f^{(3)}(c_0)+\dots\mbox{\ for\ }c_0=\rho+\frac{\varepsilon}{2}+v$ in $\Omega_\varepsilon$, we find that $f(s)\not\rightarrow \infty$ as $\varepsilon\rightarrow 0$, a contradiction which allows us to prove RH.
[94] arXiv:2004.00462 [pdf, other]
Title: An extension of Calderon Transfer Principle
Authors: Sakin Demir
Subjects: Classical Analysis and ODEs (math.CA)
We first prove that the well known transfer principle of A. P. Calder\'on can be extended to the vector-valued setting, and then we apply this extension to vector-valued inequalities for the Hardy-Littlewood maximal function to prove the vector-valued strong type $L^p$ norm inequalities for $1<p<\infty$ and the vector-valued weak type $(1,1)$ inequality for the ergodic maximal function.
[95] arXiv:2004.00466 [pdf, ps, other]
Title: Existence of Positive Eigenfunctions to an Anisotropic Elliptic Operator via Sub-Super Solutions Method
Comments: 11 pages, references 16 titles
Subjects: Analysis of PDEs (math.AP)
Using the sub-supersolution method we study the existence of positive solutions for the anisotropic problem \begin{equation} -\sum_{i=1}^N\frac{\partial}{\partial x_i}\left( \left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\right)=\lambda u^{q-1} \quad \text{in } \Omega, \end{equation} where $\Omega$ is a bounded and regular domain of $\mathbb{R}^N$, $q>1$ and $\lambda>0$.
[96] arXiv:2004.00469 [pdf, ps, other]
Title: Laws of large numbers for weighted sums of independent random variables: a game of mass
Comments: 25 pages, 1 figure
Subjects: Probability (math.PR)
We consider weighted sums of independent random variables regulated by an increment sequence. We provide operative conditions that ensure the strong law of large numbers for such sums to hold in both the centered and non-centered case. The existing criteria for the strong law are either implicit or assume some sufficient decay for the sequence of coefficients. In our setup we allow for an arbitrary sequence of coefficients, possibly random, provided the random variables regulated by such increments satisfy some mild concentration conditions. In the non-centered case, convergence can be translated into the behavior of a deterministic sequence, and it becomes a game of mass provided the expectation of the random variables is a function of the increments. We show how different limiting scenarios can emerge by identifying several classes of increments, for which concrete examples will be offered.
[97] arXiv:2004.00474 [pdf, ps, other]
Title: A note on Taylor expansion
Authors: Shun Tang
Comments: 8 pages
Subjects: Classical Analysis and ODEs (math.CA)
Let $f(x)$ be a real function which has $(n+1)$-th derivative on an interval $[a, b]$. For any point $x_0\in (a, b)$ and any integer $0\leq k\leq n$, denote by $S_{k,x_0}(x)$ the $k$-th truncation of the Taylor expansion of $f(x)$ at $x_0$, i.e. $$S_{k,x_0}(x)=\sum_{i=0}^k\frac{f^{(i)}(x_0)}{i!}(x-x_0)^i.$$ In this note, we consider the $L_2$-approximation of $f(x)$ by polynomials of degree $\leq k$, we show that $S_{k,x_0}(x)$ is the limit of the best approximations of $f(x)$ on $[x_0-\varepsilon, x_0+\varepsilon]$ as $\varepsilon\to 0$.
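The statement is easy to probe numerically; in this small sketch (f = exp, x0 = 0, k = 2 are illustrative choices), the best L2 (Legendre) fits on shrinking intervals approach the Taylor coefficients 1, 1, 1/2:

    import numpy as np

    f, k = np.exp, 2
    for eps in (1.0, 0.1, 0.01):
        xs = np.linspace(-eps, eps, 10001)
        # A Legendre fit minimizes the (discretized) L2 error on [-eps, eps].
        p = np.polynomial.legendre.Legendre.fit(xs, f(xs), k)
        print(eps, p.convert(kind=np.polynomial.Polynomial).coef)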
[98] arXiv:2004.00475 [pdf, ps, other]
Title: Stopping Criteria for, and Strong Convergence of, Stochastic Gradient Descent on Bottou-Curtis-Nocedal Functions
Authors: Vivak Patel
While Stochastic Gradient Descent (SGD) is a rather efficient algorithm for data-driven problems, it is an incomplete optimization algorithm as it lacks stopping criteria, which has limited its adoption in situations where such criteria are necessary. Unlike stopping criteria for deterministic methods, stopping criteria for SGD require a detailed understanding of (A) strong convergence, (B) whether the criteria will be triggered, (C) how false negatives are controlled, and (D) how false positives are controlled. In order to address these issues, we first prove strong global convergence (i.e., convergence with probability one) of SGD on a popular and general class of convex and nonconvex functions that are specified by, what we call, the Bottou-Curtis-Nocedal structure. Our proof of strong global convergence refines many techniques currently in the literature and employs new ones that are of independent interest. With strong convergence established, we then present several stopping criteria and rigorously explore whether they will be triggered in finite time and supply bounds on false negative probabilities. Ultimately, we lay a foundation for rigorously developing stopping criteria for SGD methods for a broad class of functions, in hopes of making SGD a more complete optimization algorithm with greater adoption for data-driven problems.
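As a toy illustration only (a generic heuristic, not the paper's criteria; the quadratic objective and all constants are invented), one can run SGD with diminishing steps and stop once an exponential moving average of squared stochastic gradient norms falls below a tolerance:

    import numpy as np

    rng = np.random.default_rng(0)
    w_star = np.array([1.0, -2.0])
    w, avg, beta, tol = np.zeros(2), 0.0, 0.9, 3e-2
    for k in range(1, 100001):
        g = 2 * (w - w_star) + 0.1 * rng.standard_normal(2)  # noisy gradient
        w -= (0.1 / np.sqrt(k)) * g              # diminishing step sizes
        avg = beta * avg + (1 - beta) * (g @ g)  # moving average of |g|^2
        if avg < tol:                            # triggers near the optimum
            print("stopped at iteration", k, "w =", w)
            break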
[99] arXiv:2004.00483 [pdf, ps, other]
Title: Perturbation theory for homogeneous evolution equations
Authors: Daniel Hauer
Comments: arXiv admin note: substantial text overlap with arXiv:1901.08691
In this paper, we develop a perturbation theory to show that if a homogeneous operator of order $\alpha\neq 1$ is perturbed by a Lipschitz continuous mapping, then every mild solution of the first-order Cauchy problem governed by these operators is strong and the time-derivative satisfies a global regularity estimate. We employ this theory to derive global $L^q$-$L^{\infty}$-estimates of the time-derivative of the evolution problem governed by the $p$-Laplace-Beltrami operator and the total variation flow operator, respectively, perturbed by a Lipschitz nonlinearity on a non-compact Riemannian manifold.
[100] arXiv:2004.00486 [pdf, ps, other]
Title: Higher rho invariant and delocalized eta invariant at infinity
Comments: 43 pages, 9 figures
Subjects: K-Theory and Homology (math.KT)
In this paper, we introduce several new secondary invariants for Dirac operators on a complete Riemannian manifold with a uniform positive scalar curvature metric outside a compact set and use these secondary invariants to establish a higher index theorem for the Dirac operators. We apply our theory to study the secondary invariants for a manifold with corner with positive scalar curvature metric on each boundary face.
[101] arXiv:2004.00490 [pdf, other]
Title: Scheduling in Cellular Federated Edge Learning with Importance and Channel Awareness
Comments: Submitted to IEEE for possible publication
Subjects: Information Theory (cs.IT); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
In cellular federated edge learning (FEEL), multiple edge devices holding local data jointly train a learning algorithm by communicating learning updates with an access point without exchanging their data samples. With limited communication resources, it is beneficial to schedule the most informative local learning update. In this paper, a novel scheduling policy is proposed to exploit both diversity in multiuser channels and diversity in the importance of the edge devices' learning updates. First, a new probabilistic scheduling framework is developed to yield unbiased update aggregation in FEEL. The importance of a local learning update is measured by gradient divergence. If one edge device is scheduled in each communication round, the scheduling policy is derived in closed form to achieve the optimal trade-off between channel quality and update importance. The probabilistic scheduling framework is then extended to allow scheduling multiple edge devices in each communication round. Numerical results obtained using popular models and learning datasets demonstrate that the proposed scheduling policy can achieve faster model convergence and higher learning accuracy than conventional scheduling policies that only exploit a single type of diversity.
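A toy version of such a policy (the proportional weighting below is an illustrative assumption, not the paper's closed-form rule) schedules one device per round with probability proportional to update importance times channel quality:

    import numpy as np

    rng = np.random.default_rng(1)
    importance = rng.exponential(1.0, size=10)  # e.g. gradient divergence
    channel = rng.rayleigh(1.0, size=10)        # e.g. channel gains
    p = importance * channel
    p /= p.sum()                                # scheduling distribution
    print("scheduled device:", rng.choice(10, p=p))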
[102] arXiv:2004.00495 [pdf, ps, other]
Title: Similarity solutions and conservation laws for the Beam Equations: a complete study
Comments: 14 pages and accepted for publication by Acta Polytechnica
Subjects: Mathematical Physics (math-ph)
We study the similarity solutions and determine the conservation laws of the various forms of the beam equation, such as Euler-Bernoulli, Rayleigh and Timoshenko-Prescott. The travelling-wave reduction leads to solvable fourth-order ODEs for all the forms. In addition, the reduction based on the scaling symmetry for the Euler-Bernoulli form leads to certain ODEs for which there exist zero symmetries. Therefore, we conduct the singularity analysis to ascertain the integrability. We study two reduced ODEs, of second and third order. The reduced second-order ODE is a perturbed form of the Painlev\'e-Ince equation, which is integrable, and the third-order ODE falls into the category of equations studied by Chazy, Bureau and Cosgrove. Moreover, we derive the symmetries and the corresponding reductions and conservation laws for the forced forms of the above mentioned beam equations. The Lie algebra is given explicitly in all cases.
[103] arXiv:2004.00504 [pdf, ps, other]
Title: The fourth moment of Dirichlet L-functions
Authors: Xiaosheng Wu
Comments: 63 pages, any comments are welcome. arXiv admin note: text overlap with arXiv:math/0610335 by other authors
Subjects: Number Theory (math.NT)
We compute the fourth moment of Dirichlet $L$-functions averaged over primitive characters to modulus $q$ and over $t\in [0,T]$, with a power savings in the error term.
[104] arXiv:2004.00510 [pdf, ps, other]
Title: Divisible design digraphs and association schemes
Comments: 15 pages. arXiv admin note: text overlap with arXiv:1706.06281
Subjects: Combinatorics (math.CO)
Divisible design digraphs are constructed from skew balanced generalized weighing matrices and generalized Hadamard matrices. Commutative and non-commutative association schemes are shown to be attached to the constructed divisible design digraphs.
[105] arXiv:2004.00515 [pdf, other]
Title: Regularity results for nonlocal evolution Venttsel' problems
Comments: 14 pages, 1 figure. arXiv admin note: text overlap with arXiv:1702.06324
Subjects: Analysis of PDEs (math.AP)
We consider parabolic nonlocal Venttsel' problems in polygonal and piecewise smooth two-dimensional domains and study existence, uniqueness and regularity in (anisotropic) weighted Sobolev spaces of the solution.
[106] arXiv:2004.00516 [pdf, ps, other]
Title: The core growth of strongly synchronizing transducers
Comments: 12pages. arXiv admin note: substantial text overlap with arXiv:1708.07209
Subjects: Group Theory (math.GR)
We introduce the notion of `core growth rate' for strongly synchronizing transducers. We explore some elementary properties of the core growth rate and give examples of transducers with exponential core growth rate. We conjecture that all strongly synchronizing transducers which generate an automaton group of infinite order have exponential core growth rate. There is a connection to the group of automorphisms of the one-sided shift. More specifically, the results of this article are related to the question of whether or not there can exist infinite order automorphisms of the one-sided shift with infinitely many roots.
[107] arXiv:2004.00522 [pdf, ps, other]
Title: The local fundamental group of a Kawamata log terminal singularity is finite
Authors: Lukas Braun
Subjects: Algebraic Geometry (math.AG)
We prove a conjecture of Koll\'ar stating that the local fundamental group of a klt singularity $x$ is finite. In fact, we prove a stronger statement, namely that the fundamental group of the smooth locus of a neighbourhood of $x$ is finite. We call this the regional fundamental group. As the proof goes via a local-to-global induction, we simultaneously confirm finiteness of the orbifold fundamental group of the smooth locus of a weakly Fano pair.
[108] arXiv:2004.00523 [pdf, other]
Title: Tropical Lagrangian multi-sections and smoothing of locally free sheaves over degenerate Calabi-Yau surfaces
Comments: 33 pages, 4 figures, comments are welcome
In this paper, we introduce the notion of tropical Lagrangian multi-sections over any integral affine manifold $B$ with singularities, and use them to study the reconstruction problem for holomorphic vector bundles over Calabi-Yau surfaces. Given a tropical Lagrangian multi-section $\mathbb{L}$ over $B$ with prescribed local models around the ramification points, we construct a locally free sheaf $\mathcal{E}_0(\mathbb{L})$ over the projective scheme $\check{X}_0(\check{B},\check{\mathscr{P}},\check{s})$ associated to the discrete Legendre transform $(\check{B},\check{\mathscr{P}},\check{\varphi})$ of $(B,\mathscr{P},\varphi)$, and prove that the pair $(\check{X}_0(\check{B},\check{\mathscr{P}},\check{s}),\mathcal{E}_0(\mathbb{L}))$ is smoothable under a combinatorial assumption on $\mathbb{L}$.
[109] arXiv:2004.00525 [pdf, ps, other]
Title: Online distributed algorithms for seeking generalized Nash equilibria in dynamic environments
Comments: 16 pages, 4 figures
Subjects: Optimization and Control (math.OC)
In this paper, we study the distributed generalized Nash equilibrium seeking problem of non-cooperative games in dynamic environments. Each player in the game aims to minimize its own time-varying cost function subject to a local action set. The action sets of all players are coupled through a shared convex inequality constraint. Each player can only have access to its own cost function, its own set constraint and a local block of the inequality constraint, and can only communicate with its neighbours via a connected graph. Moreover, players do not have prior knowledge of their future cost functions. To address this problem, an online distributed algorithm is proposed based on consensus algorithms and a primal-dual strategy. Performance of the algorithm is measured by using dynamic regrets. Under mild assumptions on graphs and cost functions, we prove that if the deviation of variational generalized Nash equilibrium sequence increases within a certain rate, then the regrets, as well as the violation of inequality constraint, grow sublinearly. A simulation is presented to demonstrate the effectiveness of our theoretical results.
[110] arXiv:2004.00529 [pdf, ps, other]
Title: Existence theory and qualitative analysis for a fully cross-diffusive predator-prey system
Comments: 60 pages
Subjects: Analysis of PDEs (math.AP)
This manuscript considers a Neumann initial-boundary value problem for the predator-prey system $$
\left\{ \begin{array}{l}
u_t = D_1 u_{xx} - \chi_1 (uv_x)_x + u(\lambda_1-u+a_1 v), \\[1mm]
v_t = D_2 v_{xx} + \chi_2 (vu_x)_x + v(\lambda_2-v-a_2 u),
\end{array} \right.
\qquad \qquad (\star) $$ in an open bounded interval $\Omega$ as the spatial domain, where for $i\in\{1,2\}$ the parameters $D_i, a_i, \lambda_i$ and $\chi_i$ are positive.
Due to the simultaneous appearance of two mutually interacting taxis-type cross-diffusive mechanisms, one of which is even attractive, it seems unclear how far a solution theory can be built upon classical results on parabolic evolution problems. In order to nevertheless create an analytical setup capable of providing global existence results as well as detailed information on qualitative behavior, this work pursues a strategy via parabolic regularization, in the course of which ($\star$) is approximated by means of certain fourth-order problems involving degenerate diffusion operators of thin film type.
During the design thereof, a major challenge is related to the ambition to retain consistency with some fundamental entropy-like structures formally associated with ($\star$); in particular, this will motivate the construction of an approximation scheme including two free parameters which will finally be fixed in different ways, depending on the size of $\lambda_2$ relative to $a_2 \lambda_1$.
[111] arXiv:2004.00532 [pdf, ps, other]
Title: Deformation theory of deformed Hermitian Yang-Mills connections and deformed Donaldson-Thomas connections
Comments: 56 pages, 1 figure, 4 tables
Subjects: Differential Geometry (math.DG)
A deformed Hermitian Yang-Mills (dHYM) connection and a deformed Donaldson-Thomas (dDT) connection are Hermitian connections on a Hermitian vector bundle $L$ over a K\"ahler manifold and a $G_2$-manifold, which are believed to correspond to a special Lagrangian and a (co)associative cycle via mirror symmetry, respectively. In this paper, when $L$ is a line bundle, introducing a new balanced Hermitian structure from the initial Hermitian structure and a dHYM connection and a new coclosed $G_2$-structure from the initial $G_2$-structure and a dDT connection, we show that their deformations are controlled by a subcomplex of the canonical complex introduced by Reyes Carri\'on. The expected dimension is given by the first Betti number of a base manifold for both cases. In the case of dHYM connections, we show that there are no obstructions for their deformations, and hence, the moduli space is always a smooth manifold. As an application of this, we give another proof of the triviality of the deformations of dHYM metrics proved by Jacob and Yau. In the case of dDT connections, we show that the moduli space is smooth if we perturb the initial $G_2$-structure generically.
[112] arXiv:2004.00533 [pdf, other]
Title: Subgraphs of large connectivity and chromatic number
Comments: 6 pages
Subjects: Combinatorics (math.CO)
Resolving a problem raised by Norin, we show that for each $k \in \mathbb{N}$, there exists $f(k) \le 7k$ such that every graph $G$ with chromatic number at least $f(k)+1$ contains a subgraph $H$ with both connectivity and chromatic number at least $k$. This result, which is best-possible up to multiplicative constants, sharpens earlier results of Alon, Kleitman, Thomassen, Saks and Seymour from 1987, who showed that $f(k) = O(k^3)$, and of Chudnovsky, Penev, Scott and Trotignon from 2013, who showed that $f(k) = O(k^2)$.
[113] arXiv:2004.00535 [pdf, other]
Title: A classification of the dynamics of three-dimensional stochastic ecological systems
Comments: 39 pages, 1 figure
Subjects: Probability (math.PR); Dynamical Systems (math.DS); Populations and Evolution (q-bio.PE)
The classification of the long-term behavior of dynamical systems is a fundamental problem in mathematics. For both deterministic and stochastic dynamics, specific classes of models verify Palis' conjecture: the long-term behavior is determined by a finite number of stationary distributions. In this paper we consider the classification problem for stochastic models of interacting species. For a large class of three-species, stochastic differential equation models, we prove a variant of Palis' conjecture: the long-term statistical behavior is determined by a finite number of stationary distributions and, generically, three general types of behavior are possible: 1) convergence to a unique stationary distribution that supports all species, 2) convergence to one of a finite number of stationary distributions supporting two or fewer species, 3) convergence to convex combinations of single-species stationary distributions due to a rock-paper-scissors type of dynamic. Moreover, we prove that the classification reduces to computing Lyapunov exponents (external Lyapunov exponents) that correspond to the average per-capita growth rate of species when rare. Our results stand in contrast to the deterministic setting, where the classification is incomplete even for three-dimensional, competitive Lotka--Volterra systems. For these SDE models, our results also provide a rigorous foundation for ecology's modern coexistence theory (MCT), which assumes the external Lyapunov exponents determine long-term ecological outcomes.
[114] arXiv:2004.00538 [pdf, ps, other]
Title: Throughput and Delay Optimality of Power-of-d Choices in Inhomogeneous Load Balancing Systems
Subjects: Probability (math.PR)
Load balancing problems arise in a number of systems including large scale data centers. The power-of-d choices algorithm is a popular routing algorithm, where d queues are sampled uniformly at random and the new arrivals are sent to the shortest among them. Its popularity is due to its simplicity and the need for only a small communication overhead to exchange queue lengths. If the servers are identical, it is well known that power-of-d choices routing maximizes throughput and minimizes delay in the heavy-traffic regime. However, if the servers are not identical, power-of-d choices is not throughput optimal in general. In this paper, we find necessary and sufficient conditions for throughput optimality of power-of-d choices when the servers are inhomogeneous. We also prove that under almost the same conditions, power-of-d choices is heavy-traffic optimal.
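A minimal simulation of the routing rule (identical servers, discrete time, with invented arrival and service mechanics) shows how sampling just d = 2 queues keeps queues short:

    import random

    n, d, steps = 100, 2, 10**6
    queues = [0] * n
    for _ in range(steps):
        if random.random() < 0.9:               # arrival, load ~ 0.9
            i = min(random.sample(range(n), d), key=queues.__getitem__)
            queues[i] += 1                      # join the shortest of d
        j = random.randrange(n)                 # one service token per step
        if queues[j] > 0:
            queues[j] -= 1
    print("mean queue length:", sum(queues) / n)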
[115] arXiv:2004.00546 [pdf, other]
Title: A parallel-in-time approach for accelerating direct-adjoint studies
Comments: 30 pages, 9 figures
Subjects: Optimization and Control (math.OC); Computational Physics (physics.comp-ph)
Parallel-in-time methods are developed to accelerate the direct-adjoint looping procedure. Particularly, we utilize the Paraexp algorithm, previously developed to integrate equations forward in time, to accelerate the direct-adjoint looping that arises from gradient-based optimization. We consider both linear and non-linear governing equations and exploit the linear, time-varying nature of the adjoint equations. Gains in efficiency are seen across all cases, showing that a parallel-in-time approach is feasible for the acceleration of direct-adjoint studies. This signifies a possible approach to further increase the run-time performance for optimization studies that either cannot be parallelized in space or are at their limit of efficiency gains for a parallel-in-space approach.
[116] arXiv:2004.00548 [pdf, other]
Title: A space-time certified reduced basis method for quasilinear parabolic partial differential equations
Subjects: Numerical Analysis (math.NA)
In this paper, we propose a certified reduced basis (RB) method for quasilinear parabolic problems. The method is based on a space-time variational formulation. We provide a residual-based a-posteriori error bound on the space-time level and the corresponding efficiently computable estimator for the certification of the method. We use the Empirical Interpolation Method (EIM) to guarantee an efficient offline-online computational procedure. The error of the EIM is then rigorously incorporated into the certification procedure. The Petrov-Galerkin finite element discretization allows us to benefit from the Crank-Nicolson interpretation of the discrete problem and to use a POD-Greedy approach to construct reduced-basis spaces of small dimension. It computes the reduced basis solution in a time-marching framework while the RB approximation error in a space-time norm is controlled by the estimator. The proposed method therefore incorporates a POD-Greedy approximation into a space-time certification.
[117] arXiv:2004.00549 [pdf, ps, other]
Title: Inverse problems for fractional semilinear elliptic equations
Comments: 21 pages
Subjects: Analysis of PDEs (math.AP)
This paper is concerned with the forward and inverse problems for the fractional semilinear elliptic equation $(-\Delta)^s u +a(x,u)=0$ for $0<s<1$. For the forward problem, we prove that the problem is well-posed and has a unique solution for small exterior data. The inverse problems we consider here consist of two cases. First we demonstrate that an unknown coefficient $a(x,u)$ can be uniquely determined from the knowledge of exterior measurements, known as the Dirichlet-to-Neumann map. Second, despite the presence of an unknown obstacle in the medium, we show that the obstacle and the coefficient can be recovered concurrently from these measurements. Finally, we show that these two fractional inverse problems can also be solved by using a single measurement, and all results hold for any dimension $n\geq 1$.
[118] arXiv:2004.00551 [pdf, ps, other]
Title: Spectral invariants for finite dimensional Lie algebras
Subjects: Representation Theory (math.RT)
For a Lie algebra ${\mathcal L}$ with basis $\{x_1,x_2,\cdots,x_n\}$, its associated characteristic polynomial $Q_{{\mathcal L}}(z)$ is the determinant of the linear pencil $z_0I+z_1\text{ad} x_1+\cdots +z_n\text{ad} x_n.$ This paper shows that $Q_{\mathcal L}$ is invariant under the automorphism group $\text{Aut}({\mathcal L}).$ The zero variety and factorization of $Q_{\mathcal L}$ reflect the structure of ${\mathcal L}$. In the case that ${\mathcal L}$ is solvable, $Q_{\mathcal L}$ is known to be a product of linear factors. This fact gives rise to the definition of the spectral matrix and the Poincar\'{e} polynomial for solvable Lie algebras. An application is given to $1$-dimensional extensions of nilpotent Lie algebras.
[119] arXiv:2004.00555 [pdf, ps, other]
Title: Level raising and Diagonal cycles on triple product of Shimura curves: Ramified case
Authors: Haining Wang
In this article we study the diagonal cycle on a triple product of Shimura curves at places of bad reduction. We relate the image of the diagonal cycle under the Abel-Jacobi map to a certain period integral that governs the central critical value of the Garrett-Rankin type triple product L-function via level raising congruences. The formula we prove should provide a major step toward the rank 0 case of the Bloch-Kato conjecture for the triple tensor product motive of weight 2 modular forms.
[120] arXiv:2004.00556 [pdf, other]
Title: Sampling based approximation of linear functionals in Reproducing Kernel Hilbert Spaces
Subjects: Numerical Analysis (math.NA)
In this paper we analyze a greedy procedure to approximate a linear functional defined in a Reproducing Kernel Hilbert Space by nodal values. This procedure computes a quadrature rule which can be applied to general functionals, including integration functionals. For a large class of functionals, we prove convergence results for the approximation by means of uniform and greedy points which generalize in various ways several known results. A perturbation analysis of the weights and node computation is also discussed. Beyond the theoretical investigations, we demonstrate numerically that our algorithm is effective in treating various integration densities, and that it is even very competitive when compared to existing methods for Uncertainty Quantification.
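For concreteness, here is a small sketch of one common greedy variant (P-greedy node selection, with a Gaussian kernel and the functional L(f) = \int_0^1 f; the kernel, its width, and the node count are illustrative assumptions, not the paper's algorithm):

    import numpy as np
    from scipy.special import erf

    GAMMA = 10.0                                    # kernel width parameter

    def k(x, y):                                    # Gaussian kernel matrix
        return np.exp(-GAMMA * np.subtract.outer(x, y) ** 2)

    def kernel_mean(x):                             # z_i = int_0^1 k(t, x_i) dt
        s = np.sqrt(GAMMA)
        return 0.5 * np.sqrt(np.pi / GAMMA) * (erf(s * (1 - x)) + erf(s * x))

    cand = np.linspace(0.0, 1.0, 1001)
    nodes = [0.5]
    for _ in range(9):                              # greedily pick 10 nodes
        X = np.array(nodes)
        Kinv = np.linalg.inv(k(X, X))
        kc = k(cand, X)
        power = 1.0 - np.einsum('ij,jk,ik->i', kc, Kinv, kc)  # k(x,x) = 1
        nodes.append(cand[np.argmax(power)])        # largest power function
    X = np.array(nodes)
    w = np.linalg.solve(k(X, X), kernel_mean(X))    # quadrature weights
    print(float(w @ np.sin(X)), "vs exact", 1 - np.cos(1))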
[121] arXiv:2004.00572 [pdf, ps, other]
Title: A moperadic approach to cyclotomic associators
Comments: 36 pages. Several pictures. Comments are welcome
This is a companion paper to "Ellipsitomic associators". We provide a (m)operadic description of Enriquez's torsor of cyclotomic associators, as well as of its associated cyclotomic Grothendieck-Teichm\"uller groups.
[122] arXiv:2004.00576 [pdf, ps, other]
Title: Commutators of relative and unrelative elementary unitary groups
Comments: 40 pages
Subjects: Rings and Algebras (math.RA)
In the present paper we find generators of the mixed commutator subgroups of relative elementary groups and obtain unrelativised versions of commutator formulas in the setting of Bak's unitary groups. It is a direct sequel of our papers where similar results were obtained for $GL(n,R)$ and for Chevalley groups over a commutative ring with 1, respectively. Namely, let $(A,\Lambda)$ be any form ring and $n\ge 3$. We consider Bak's hyperbolic unitary group $GU(2n,A,\Lambda)$. Further, let $(I,\Gamma)$ be a form ideal of $(A,\Lambda)$. One can associate with $(I,\Gamma)$ the corresponding elementary subgroup $FU(2n,I,\Gamma)$ and the relative elementary subgroup $EU(2n,I,\Gamma)$ of $GU(2n,A,\Lambda)$. Let $(J,\Delta)$ be another form ideal of $(A,\Lambda)$. In the present paper we prove the unexpected result that the non-obvious type of generators for $\big[EU(2n,I,\Gamma),EU(2n,J,\Delta)\big]$, as constructed in our previous papers with Hazrat, are redundant and can be expressed as products of the obvious generators: the elementary conjugates $Z_{ij}(ab,c)=T_{ji}(c)T_{ij}(ab)T_{ji}(-c)$ and $Z_{ij}(ba,c)$, and the elementary commutators $Y_{ij}(a,b)=[T_{ji}(a),T_{ij}(b)]$, where $a\in(I,\Gamma)$, $b\in(J,\Delta)$, $c\in(A,\Lambda)$. It follows that $\big[FU(2n,I,\Gamma),FU(2n,J,\Delta)\big]= \big[EU(2n,I,\Gamma),EU(2n,J,\Delta)\big]$. In fact, we establish much more precise generation results. In particular, the elementary commutators $Y_{ij}(a,b)$ need only be taken for one long root position and one short root position. Moreover, the $Y_{ij}(a,b)$ are central modulo $EU(2n,(I,\Gamma)\circ(J,\Delta))$ and behave as symbols. This allows us to generalise and unify many previous results, including the multiple elementary commutator formula, and dramatically simplify their proofs.
[123] arXiv:2004.00578 [pdf, ps, other]
Title: Sign changes of Fourier coefficients of cusp forms of half-integral weight over split and inert primes in quadratic number fields
Authors: Zilong He, Ben Kane
Subjects: Number Theory (math.NT)
In this paper, we investigate sign changes of Fourier coefficients of half-integral weight cusp forms. In a fixed square class $t\mathbb{Z}^2$, we investigate the sign changes in the $tp^2$-th coefficient as $p$ runs through the split or inert primes over the ring of integers in a quadratic extension of the rationals. We show that sign changes occur in both sets of primes when there exists a prime dividing the discriminant of the field which does not divide the level of the cusp form and find an explicit condition that determines whether sign changes occur when every prime dividing the discriminant also divides the level.
[124] arXiv:2004.00586 [pdf, ps, other]
Title: Arithmeticity of groups $\mathbb Z^n\rtimes\mathbb Z$
Authors: Bena Tshishiku
Comments: 22 pages. Comments welcome
Subjects: Group Theory (math.GR)
We study when the group $\mathbb Z^n\rtimes_A\mathbb Z$ is arithmetic where $A\in GL_n(\mathbb Z)$ is hyperbolic and semisimple. We begin by giving a characterization of arithmeticity phrased in the language of algebraic tori, building on work of Grunewald-Platonov. We use this to prove several more concrete results that relate the arithmeticity of $\mathbb Z^n\rtimes_A\mathbb Z$ to the reducibility properties of the characteristic polynomial of $A$. Our tools include algebraic tori, representation theory of finite groups, Galois theory, and the inverse Galois problem.
[125] arXiv:2004.00590 [pdf, ps, other]
Title: Strong solution to stochastic penalised nematic liquid crystals model driven by multiplicative Gaussian noise
Comments: arXiv:1310.8641 has been divided into two parts, this submission is its second part (The first part is published here: this https URL)
Subjects: Analysis of PDEs (math.AP)
In this paper, we prove the existence of a unique maximal local strong solution to a stochastic system for both 2D and 3D penalised nematic liquid crystals driven by multiplicative Gaussian noise. In the 2D case, we show that this solution is global. As a by-product of our investigation, but of independent interest, we present a general method based on fixed point arguments to establish the existence and uniqueness of a maximal local solution of an abstract stochastic evolution equation with coefficients satisfying a local Lipschitz condition involving the norms of two different Banach spaces.
[126] arXiv:2004.00591 [pdf, other]
Title: Duality theorems for stars and combs IV: Undominating stars
Comments: 16 pages, 2 figures
Subjects: Combinatorics (math.CO)
In a series of four papers we determine structures whose existence is dual, in the sense of complementary, to the existence of stars or combs. In the first paper of our series we determined structures that are complementary to arbitrary stars or combs. Stars and combs can be combined, positively as well as negatively. In the second and third paper of our series we provided duality theorems for all but one of the possible combinations.
In this fourth and final paper of our series, we complete our solution to the problem of finding complementary structures for stars, combs, and their combinations, by presenting duality theorems for the missing piece: for undominating stars. Our duality theorems are phrased in terms of end-compactified subgraphs, tree-decompositions and tangle-distinguishing separators.
[127] arXiv:2004.00592 [pdf, other]
Title: Duality theorems for stars and combs III: Undominated combs
Comments: 20 pages, 2 figures
Subjects: Combinatorics (math.CO)
In a series of four papers we determine structures whose existence is dual, in the sense of complementary, to the existence of stars or combs. Here, in the third paper of the series, we present duality theorems for a combination of stars and combs: undominated combs. We describe their complementary structures in terms of rayless trees and of tree-decompositions.
Applications include a complete characterisation, in terms of normal spanning trees, of the graphs whose rays are dominated but which have no rayless spanning tree. Only two such graphs had so far been constructed, by Seymour and Thomas and by Thomassen. As a corollary, we show that graphs with a normal spanning tree have a rayless spanning tree if and only if all their rays are dominated.
Another application settles a problem left unsolved by Carmesin: The graphs whose undominated ends are reflected by a suitable spanning tree can be characterised in terms of normal spanning trees. In particular, we show that every graph that has a normal spanning tree does have a spanning tree reflecting its undominated ends.
[128] arXiv:2004.00593 [pdf, other]
Title: Duality theorems for stars and combs II: Dominating stars and dominated combs
Comments: 22 pages, 7 figures
Subjects: Combinatorics (math.CO)
In a series of four papers we determine structures whose existence is dual, in the sense of complementary, to the existence of stars or combs. Here, in the second paper of the series, we present duality theorems for combinations of stars and combs: dominating stars and dominated combs. As dominating stars exist if and only if dominated combs do, the structures complementary to them coincide. Like for arbitrary stars and combs, our duality theorems for dominated combs (and dominating stars) are phrased in terms of normal trees or tree-decompositions.
The complementary structures we provide for dominated combs unify those for stars and combs and allow us to derive our duality theorems for stars and combs from those for dominated combs. This is surprising given that our complementary structures for stars and combs are quite different: those for stars are locally finite whereas those for combs are rayless.
[129] arXiv:2004.00594 [pdf, other]
Title: Duality theorems for stars and combs I: Arbitrary stars and combs
Comments: 28 pages, 1 figure
Subjects: Combinatorics (math.CO)
Extending the well-known star-comb lemma for infinite graphs, we characterise the graphs that do not contain an infinite comb or an infinite star, respectively, attached to a given set of vertices. We offer several characterisations: in terms of normal trees, tree-decompositions, ranks of rayless graphs and tangle-distinguishing separators.
[130] arXiv:2004.00604 [pdf, ps, other]
Title: Functorially finite hearts, simple-minded systems in negative cluster categories, and noncrossing partitions
Comments: 29 pages
Subjects: Representation Theory (math.RT); Combinatorics (math.CO)
Let $Q$ be an acyclic quiver and $w \geq 1$ be an integer. Let $\mathsf{C}_{-w} (\mathbf{k} Q)$ be the $(-w)$-cluster category of $\mathbf{k} Q$. We show that there is a bijection between simple-minded collections in $\mathsf{D}^b (\mathbf{k} Q)$ lying in a fundamental domain of $\mathsf{C}_{-w} (\mathbf{k} Q)$ and $w$-simple-minded systems in $\mathsf{C}_{-w} (\mathbf{k} Q)$. This generalises the same result of Iyama-Jin in the case that $Q$ is Dynkin. A key step in our proof is the observation that the heart $\mathsf{H}$ of a bounded t-structure in a triangulated category $\mathsf{D}$ is functorially finite in $\mathsf{D}$ if and only if $\mathsf{H}$ has enough injectives and enough projectives. We then establish a bijection between $w$-simple-minded systems in $\mathsf{C}_{-w} (\mathbf{k} Q)$ and positive $w$-noncrossing partitions of the corresponding Weyl group $W_Q$.
[131] arXiv:2004.00606 [pdf, ps, other]
Title: Tipsy cop and drunken robber: a variant of the cop and robber game on graphs
Comments: 18 pages
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM)
Motivated by a biological scenario illustrated in the YouTube video \url{ https://www.youtube.com/watch?v=Z_mXDvZQ6dU}, where a neutrophil chases a bacterium moving in random directions, we present a variant of the cop and robber game on graphs called the tipsy cop and drunken robber game. In this game, we place a tipsy cop and a drunken robber at different vertices of a finite connected graph $G$. The game consists of independent moves where the robber begins the game by moving to an adjacent vertex from where he began; this is then followed by the cop moving to an adjacent vertex from where she began. Since the robber is inebriated, he takes random walks on the graph, while the cop being tipsy means that her movements are sometimes random and sometimes intentional. Our main results give formulas for the probability that the robber is still free from capture after $m$ moves of this game on highly symmetric graphs, such as the complete graphs, complete bipartite graphs, and cycle graphs. We also give the expected encounter time between the cop and robber for these families of graphs. We end the manuscript by presenting a general method for computing such probabilities and also detail a variety of directions for future research.
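A Monte Carlo sketch of such survival probabilities on the cycle graph C_n (the specific tipsiness model below, random versus shortest-arc cop moves, is an illustrative assumption):

    import random

    def survival_probability(n, m, tipsiness=0.5, trials=20000):
        # Estimate P(robber still free after m moves) on the cycle C_n.
        free = 0
        for _ in range(trials):
            cop, robber, caught = 0, n // 2, False
            for _ in range(m):
                robber = (robber + random.choice((-1, 1))) % n  # drunken move
                if robber == cop:
                    caught = True
                    break
                if random.random() < tipsiness:                 # tipsy move
                    cop = (cop + random.choice((-1, 1))) % n
                else:                                           # intentional move
                    diff = (robber - cop) % n
                    cop = (cop + (1 if diff <= n // 2 else -1)) % n
                if cop == robber:
                    caught = True
                    break
            free += not caught
        return free / trials

    print(survival_probability(20, 50))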
[132] arXiv:2004.00608 [pdf, ps, other]
Title: On the characterization of constant functions through nonlocal functionals
Comments: 6 pages
Subjects: Functional Analysis (math.FA)
We provide a counterexample to an open question concerning a characterization of constant functions through double integrals that involve different quotients. This counterexample requires the construction of an unbounded function whose difference quotients avoid a sequence of intervals with endpoints that diverge to infinity.
[133] arXiv:2004.00611 [pdf, other]
Title: Spectral Edge in Sparse Random Graphs: Upper and Lower Tail Large Deviations
Comments: 36 pages, 1 figure
Subjects: Probability (math.PR); Discrete Mathematics (cs.DM); Mathematical Physics (math-ph); Combinatorics (math.CO)
In this paper we consider the problem of estimating the joint upper and lower tail large deviations of the edge eigenvalues of an Erd\H{o}s-R\'enyi random graph $\mathcal{G}_{n,p}$, in the regime of $p$ where the edge of the spectrum is no longer governed by global observables, such as the number of edges, but rather by localized statistics, such as high degree vertices. Going beyond the recent developments in mean-field approximations of related problems, this paper provides a comprehensive treatment of the large deviations of the spectral edge in this entire regime, which notably includes the well studied case of constant average degree. In particular, for $r \geq 1$ fixed, we pin down the asymptotic probability that the top $r$ eigenvalues are jointly greater/less than their typical values by multiplicative factors bigger/smaller than $1$, in the regime mentioned above. The proof for the upper tail relies on a novel structure theorem, obtained by building on estimates of Krivelevich and Sudakov (2003), followed by an iterative cycle removal process, which shows that, conditional on the upper tail large deviation event, with high probability the graph admits a decomposition into a disjoint union of stars and a spectrally negligible part. On the other hand, the key ingredient in the proof of the lower tail is a Ramsey-type result which shows that if the $K$-th largest degree of a graph is not atypically small (for some large $K$ depending on $r$), then either the top eigenvalue or the $r$-th largest eigenvalue is larger than that allowed by the lower tail event on the top $r$ eigenvalues, thus forcing a contradiction. The above arguments reduce the problems to developing a large deviation theory for the extremal degrees which could be of independent interest.
[134] arXiv:2004.00612 [pdf, ps, other]
Title: The Diophantine problem for rings of exponential polynomials
Subjects: Number Theory (math.NT); Logic (math.LO)
We prove unsolvability of the analogue of Hilbert's Tenth Problem for rings of exponential polynomials. The technique of proof consists of an interaction between Arithmetic, Analysis, Logic, and Functional Transcendence.
Cross-lists for Thu, 2 Apr 20
[135] arXiv:2004.00044 (cross-list from physics.soc-ph) [pdf, other]
Title: Strong correlations between power-law growth of COVID-19 in four continents and the inefficiency of soft quarantine strategies
Comments: 10 pages, 4 figures
Subjects: Physics and Society (physics.soc-ph); Dynamical Systems (math.DS); Biological Physics (physics.bio-ph); Populations and Evolution (q-bio.PE)
In this work we analyse the growth of the cumulative number of confirmed infected cases of COVID-19 up to March 27th, 2020, for countries in Asia, Europe, North America and South America. Our results show (i) that power-law growth is observed for all countries; (ii) that the power-law curves of different countries are highly correlated, as measured by distance correlation, showing the universality of such curves around the world; and (iii) that soft quarantine strategies are inefficient at flattening the growth curves. Furthermore, we present a model and strategies that allow governments to flatten the power-law curves. We found that, besides the social isolation of individuals, of well-known relevance, the strategy of identifying and isolating infected individuals may be even more relevant for flattening the power laws. These are essentially the strategies used in the Republic of Korea. In fact, our results suggest that a suitable balance between social isolation and the isolation of infected individuals can be used to prevent economic catastrophe. The high correlation between the power-law curves of different countries strongly suggests that containment measures can be applied with success around the whole world. These measures must be strict and applied as soon as possible.
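Claim (i) rests on a simple diagnostic: on log-log axes a power law $N(t) \approx A\,t^{b}$ is a straight line with slope $b$. A minimal sketch, assuming numpy and synthetic data in place of the real case counts used in the paper:

import numpy as np

def fit_power_law(days, cases):
    # Least-squares fit of log(cases) = b*log(days) + log(A).
    b, logA = np.polyfit(np.log(days), np.log(cases), 1)
    return np.exp(logA), b

rng = np.random.default_rng(1)
days = np.arange(1.0, 31.0)
cases = 50.0 * days**2.3 * np.exp(0.05 * rng.standard_normal(30))  # synthetic
A, b = fit_power_law(days, cases)
print(A, b)  # b should recover roughly 2.3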
[136] arXiv:2004.00055 (cross-list from cs.SC) [pdf, other]
Title: Explosive Proofs of Mathematical Truths
Comments: 16 pages, 5 figures. Comments solicited
Subjects: Symbolic Computation (cs.SC); Artificial Intelligence (cs.AI); History and Overview (math.HO); Physics and Society (physics.soc-ph); Neurons and Cognition (q-bio.NC)
Mathematical proofs are both paradigms of certainty and some of the most explicitly-justified arguments that we have in the cultural record. Their very explicitness, however, leads to a paradox, because their probability of error grows exponentially as the argument expands. Here we show that under a cognitively-plausible belief formation mechanism that combines deductive and abductive reasoning, mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly-propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st Century mathematics, along with four hand-constructed cases from Euclid, Apollonius, Spinoza, and Andrew Wiles. Our results bear both on recent work in the history and philosophy of mathematics, and on a question, basic to cognitive science, of how we form beliefs, and justify them to others.
[137] arXiv:2004.00056 (cross-list from q-bio.PE) [pdf, other]
Title: COVID-19 Outbreak in Pakistan: Model-Driven Impact Analysis and Guidelines
Comments: 12 pages, 6 figures
Subjects: Populations and Evolution (q-bio.PE); Optimization and Control (math.OC)
Motivated by the rapid spread of COVID-19 across the globe, we have performed simulations of a system-dynamics epidemic spread model under different possible scenarios. The simulation not only captures the model dynamics of the spread of the virus, but also takes population and mobility data into account. The model is calibrated on epidemic data and events specific to Pakistan, but can easily be generalized. The simulation results are quite disturbing, indicating that, even with stringent social distancing and testing strategies maintained for a long time (even beyond one year), the spread would be significant (in the tens of thousands). The real alarm is raised when some of these measures are relaxed, even for a short time within this duration, which may result in a catastrophic situation in which millions of people would be infected.
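The model itself is not reproduced in this listing, so as a stand-in, here is a forward-Euler integration of a textbook SEIR compartment model in Python. The compartment structure, the rates beta, sigma, gamma, and the population size are hypothetical placeholders, not the paper's calibrated values for Pakistan.

def seir(beta=0.5, sigma=0.2, gamma=0.1, N=220e6, E0=100.0, days=365, dt=0.1):
    # S: susceptible, E: exposed, I: infectious, R: removed.
    S, E, I, R = N - E0, E0, 0.0, 0.0
    daily = []
    steps_per_day = int(round(1.0 / dt))
    for step in range(int(days / dt)):
        new_inf = beta * S * I / N          # mass-action incidence
        S -= dt * new_inf
        E += dt * (new_inf - sigma * E)
        I += dt * (sigma * E - gamma * I)
        R += dt * gamma * I
        if step % steps_per_day == 0:
            daily.append((S, E, I, R))
    return daily

peak_I = max(state[2] for state in seir())
print(f"peak infectious: {peak_I:.0f}")

Lowering beta mimics social distancing; raising gamma mimics faster identification and isolation of infectious individuals.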
[138] arXiv:2004.00062 (cross-list from q-bio.PE) [pdf, other]
Title: On the comparison of incompatibility of split systems across different taxa sizes
Comments: 9 pages, 2 figures
The concept of $k$-compatibility measures how many phylogenetic trees it would take to display all splits in a given set. A set of trees that display every single possible split is termed a \textit{universal tree set}. In this note, we find $A(n)$, the minimal size of a universal tree set for $n$ taxa. By normalising the $k$-compatibility using $A(n)$, one can then compare incompatibility of split systems across different taxa sizes. We demonstrate this application by comparing two SplitsTree networks of different sizes derived from archaeal genomes.
[139] arXiv:2004.00086 (cross-list from gr-qc) [pdf, other]
Title: On Schwarzschild causality in higher dimensions
Comments: 32 pages, lots of figures
We show that the causal properties of asymptotically flat spacetimes depend on their dimensionality: while the time-like future of any point in the past conformal infinity $\mathcal{I}^-$ contains the whole of the future conformal infinity $\mathcal{I}^+$ in $(2+1)$ and $(3+1)$ dimensional Schwarzschild spacetimes, this property (which we call the Penrose property) does not hold for $(d+1)$ dimensional Schwarzschild if $d>3$. We also show that the Penrose property holds for the Kerr solution in $(3+1)$ dimensions, and discuss the connection with scattering theory in the presence of positive mass.
[140] arXiv:2004.00179 (cross-list from cs.LG) [pdf, other]
Title: Fully-Corrective Gradient Boosting with Squared Hinge: Fast Learning Rates and Early Stopping
Comments: 14 pages
Boosting is a well-known method for improving the accuracy of weak learners in machine learning. However, its theoretical generalization guarantee is missing in the literature. In this paper, we propose an efficient boosting method with theoretical generalization guarantees for binary classification. Three key ingredients of the proposed boosting method are: a) the \textit{fully-corrective greedy} (FCG) update in the boosting procedure, b) a differentiable \textit{squared hinge} (also called \textit{truncated quadratic}) function as the loss function, and c) an efficient alternating direction method of multipliers (ADMM) algorithm for the associated FCG optimization. The squared hinge loss not only inherits the robustness of the well-known hinge loss for classification with outliers, but also brings some benefits for computational implementation and theoretical justification. Under some sparseness assumption, we derive a fast learning rate of the order ${\cal O}((m/\log m)^{-1/4})$ for the proposed boosting method, which can be further improved to ${\cal O}((m/\log m)^{-1/2})$ if certain additional noise assumption is imposed, where $m$ is the size of the sample set. Both derived learning rates are the best ones among the existing generalization results of boosting-type methods for classification. Moreover, an efficient early stopping scheme is provided for the proposed method. A series of toy simulations and real data experiments are conducted to verify the developed theories and demonstrate the effectiveness of the proposed method.
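A sketch of ingredient b), assuming numpy: the squared hinge penalizes margin violations quadratically, so its derivative is continuous at margin 1, unlike the plain hinge whose subgradient jumps there.

import numpy as np

def squared_hinge(margin):
    # Loss on the margin m = y * f(x): max(0, 1 - m)^2.
    return np.maximum(0.0, 1.0 - margin) ** 2

def squared_hinge_grad(margin):
    # d/dm max(0, 1 - m)^2 = -2 * max(0, 1 - m); continuous at m = 1.
    return -2.0 * np.maximum(0.0, 1.0 - margin)

m = np.linspace(-1.0, 2.0, 7)
print(squared_hinge(m))
print(squared_hinge_grad(m))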
[141] arXiv:2004.00194 (cross-list from eess.SY) [pdf, ps, other]
Title: Stabilization of Itô Stochastic T-S Models via Line Integral and Novel Estimate for Hessian Matrices
This paper proposes a line integral Lyapunov function approach to stability analysis and stabilization for Itô stochastic T-S models. Unlike the deterministic case, stability analysis of this model needs the information of the Hessian matrix of the line integral Lyapunov function, which is related to partial derivatives of the basis functions. By introducing a new method to handle these partial derivatives and using the property of a state-dependent matrix of rank one, the stability conditions of the underlying system can be established via a line integral Lyapunov function. The conditions obtained are more general than those based on quadratic Lyapunov functions. Based on the stability conditions, a controller is developed by a cone complementarity linearization algorithm. A non-quadratic Lyapunov function approach is thus proposed for the stabilization problem of Itô stochastic T-S models. It is shown that the problem can be solved by optimizing a sum of traces for a group of products of matrix variables with linear constraints. Numerical examples are given to illustrate the effectiveness of the proposed control scheme.
[142] arXiv:2004.00260 (cross-list from nlin.SI) [pdf, other]
Title: A Generalised Sextic Freud Weight
Comments: 29 pages, 6 figures
We discuss the recurrence coefficients of orthogonal polynomials with respect to a generalised sextic Freud weight \[\omega(x;t,\lambda)=|x|^{2\lambda+1}\exp\left(-x^6+tx^2\right),\qquad x\in\mathbb{R},\] with parameters $\lambda>-1$ and $t\in\mathbb{R}$. We show that the coefficients in these recurrence relations can be expressed in terms of Wronskians of generalised hypergeometric functions ${}_1F_2(a_1;b_1,b_2;z)$. We derive a nonlinear discrete as well as a system of differential equations satisfied by the recurrence coefficients and use these to investigate their asymptotic behaviour. We also study properties of the generalised sextic Freud polynomials and their zeros. We conclude by highlighting a fascinating connection between generalised quartic, sextic, octic and decic Freud weights when expressing their first moments in terms of generalised hypergeometric functions.
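For readers who want to see such recurrence coefficients numerically, here is a hedged sketch using the classical moment-determinant route rather than the paper's Wronskian formulas: for an even weight, the monic recurrence $x p_n(x) = p_{n+1}(x) + \beta_n p_{n-1}(x)$ has $\beta_n = D_{n+1}D_{n-1}/D_n^2$, where $D_n$ is the $n \times n$ Hankel determinant of moments ($D_0 = 1$). It assumes numpy and scipy; the values of $\lambda$ and $t$ are arbitrary test choices.

import numpy as np
from scipy.integrate import quad

def moment(k, lam, t):
    # k-th moment of w(x) = |x|^(2*lam+1) * exp(-x^6 + t*x^2); odd ones vanish.
    if k % 2:
        return 0.0
    f = lambda x: x ** (k + 2 * lam + 1) * np.exp(-x**6 + t * x**2)
    return 2.0 * quad(f, 0.0, np.inf)[0]

def beta(n, lam=0.5, t=1.0):
    # beta_n = D_{n+1} * D_{n-1} / D_n^2 from Hankel determinants of moments.
    def D(m):
        if m == 0:
            return 1.0
        H = [[moment(i + j, lam, t) for j in range(m)] for i in range(m)]
        return np.linalg.det(np.array(H))
    return D(n + 1) * D(n - 1) / D(n) ** 2

print([beta(n) for n in range(1, 5)])

Small $n$ only: Hankel determinants become numerically ill-conditioned as $n$ grows, which is one reason closed-form Wronskian representations such as the paper's are valuable.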
[143] arXiv:2004.00363 (cross-list from cs.NI) [pdf, other]
Title: DNN-based Localization from Channel Estimates: Feature Design and Experimental Results
Comments: Submitted to IEEE SPAWC 2020
Subjects: Networking and Internet Architecture (cs.NI); Information Theory (cs.IT); Machine Learning (cs.LG); Signal Processing (eess.SP); Machine Learning (stat.ML)
We consider the use of deep neural networks (DNNs) in the context of channel state information (CSI)-based localization for massive MIMO cellular systems. We discuss the impairments that are likely to be present in practical CSI estimates, and introduce a principled approach to feature design for CSI-based DNN applications based on the objective of making the features invariant to the considered impairments. We demonstrate the efficiency of this approach by applying it to a dataset of geo-tagged CSI measured in an outdoor campus environment, and training a DNN to estimate the position of the UE on the basis of the CSI. We provide an experimental evaluation of several aspects of that learning approach, including localization accuracy, generalization capability, and data aging.
[144] arXiv:2004.00365 (cross-list from cs.NI) [pdf, other]
Title: An Outlook on the Interplay of AI and Software-Defined Metasurfaces
Subjects: Networking and Internet Architecture (cs.NI); Information Theory (cs.IT); Signal Processing (eess.SP)
Recent advances in programmable metasurfaces, also dubbed software-defined metasurfaces (SDMs), are envisioned to offer a paradigm shift from uncontrollable to fully tunable and customizable wireless propagation environments, enabling a plethora of new applications and technological trends. Therefore, in view of this cutting-edge technological concept, we first review the architecture and electromagnetic wave manipulation functionalities of SDMs. We then detail some of the recent advancements that have been made towards realizing these programmable functionalities in wireless communication applications. Furthermore, we elaborate on how artificial intelligence (AI) can address various constraints introduced by real-time deployment of SDMs, particularly in terms of latency, storage, energy efficiency, and computation. A review of the state-of-the-art research on the integration of AI with SDMs is presented, highlighting their potentials as well as challenges. Finally, the paper concludes by offering a look ahead towards unexplored possibilities of AI mechanisms in the context of SDMs.
[145] arXiv:2004.00392 (cross-list from eess.SY) [pdf]
Title: A novel LMI-based Method for Robust Stabilization of Fractional-order Interval Systems with $1\le\alpha<2$
Comments: arXiv admin note: substantial text overlap with arXiv:1807.10827, arXiv:1701.05344
This paper deals with the problem of robust dynamic output feedback stabilization of interval fractional-order linear time invariant (FO-LTI) systems with fractional order $1\le\alpha<2$. In this study, a new formulation based on the null-space analysis of the system matrices is proposed using linear matrix inequalities (LMIs). The uncertain model employed is the most complete model of linear interval systems, in which all of the system matrices are interval matrices. A robust dynamic output feedback controller is designed that asymptotically stabilizes the interval FO-LTI system, with no limiting constraint assumed on the state space matrices of the uncertain system. Finally, a numerical example with simulations is presented to demonstrate the effectiveness and correctness of the theoretical results.
[146] arXiv:2004.00428 (cross-list from eess.SY) [pdf, ps, other]
Title: Stability and Instability Divergence Conditions for Dynamical Systems
Authors: Igor Furtat
Comments: arXiv admin note: text overlap with arXiv:2003.13002
A novel method for the stability and instability analysis of autonomous dynamical systems using the flow and divergence of the vector field is proposed. A relation between the method of Lyapunov functions and the proposed method is established. The Bendixson and Bendixson-Dulac theorems are extended to $n$-dimensional systems. Based on the proposed method, a state feedback control law is designed. The control signal is obtained from a partial differential inequality. Examples illustrate the application of the proposed method and of existing ones.
[147] arXiv:2004.00443 (cross-list from q-bio.PE) [pdf, other]
Title: Controlling the Transmission Dynamics of COVID-19
Comments: 14 pages, 37 figures
Subjects: Populations and Evolution (q-bio.PE); Dynamical Systems (math.DS); Optimization and Control (math.OC)
The outbreak of COVID-19, caused by SARS-CoV-2 in Wuhan and other cities in China in 2019, has become a global pandemic, as declared by the World Health Organization (WHO) in the first quarter of 2020. Delays in diagnosis, limited hospital resources, and other constraints on treatment led to the rapid spread of COVID-19. In this article, we consider a dynamical model and analyze the impact of some control measures that can reduce the exposed and infectious populations in the transmission of COVID-19. We investigate four control strategies, including diagnosis and spraying of the environment, to curb the transmission dynamics of the novel virus.
[148] arXiv:2004.00476 (cross-list from cs.NE) [pdf, ps, other]
[149] arXiv:2004.00509 (cross-list from cs.CR) [pdf, ps, other]
Title: On the privacy of a code-based single-server computational PIR scheme
We show that the single-server computational PIR protocol proposed by Holzbaur, Hollanti and Wachter-Zeh in 2020 is not private, in the sense that the server can recover in polynomial time the index of the desired file with very high probability. The attack relies on the following observation. Removing rows of the query matrix corresponding to the desired file yields a large decrease of the dimension over $\mathbb{F}_q$ of the vector space spanned by the rows of this punctured matrix. Such a dimension loss only shows up with negligible probability when rows unrelated to the requested file are deleted.
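A toy analogue of that dimension-loss observation, assuming numpy and working over GF(2) rather than a general $\mathbb{F}_q$; the matrix sizes and the split into "bulk" and "special" rows are fabricated for illustration and do not reproduce the actual Holzbaur-Hollanti-Wachter-Zeh query structure.

import numpy as np

def rank_gf2(M):
    # Gaussian elimination over GF(2); returns the rank of a 0/1 matrix.
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

rng = np.random.default_rng(0)
n, d = 40, 10
basis = rng.integers(0, 2, (d, n))                  # hidden low-dimensional part
bulk = rng.integers(0, 2, (60, d)) @ basis % 2      # rows unrelated to the file
special = rng.integers(0, 2, (5, n))                # rows tied to the desired file
Q = np.vstack([bulk, special])
print(rank_gf2(Q), rank_gf2(Q[:-5]))            # deleting special rows drops the rank
print(rank_gf2(np.vstack([bulk[5:], special])))  # deleting bulk rows does not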
[150] arXiv:2004.00521 (cross-list from eess.SY) [pdf, ps, other]
Title: Stability and feasibility of neural network-based controllers via output range analysis
Comments: 8 pages, 4 figures
Neural networks can be used as approximations of several complex control schemes, such as model predictive control. We show in this paper which properties deep neural networks with rectified linear units (ReLU) as activation functions need to satisfy to guarantee constraint satisfaction and asymptotic stability of the closed-loop system. To do so, we introduce a parametric description of the neural network controller and use a mixed-integer linear programming formulation to perform output range analysis of neural networks. We also propose a novel method to modify a neural network controller such that it performs optimally in the LQR sense in a region surrounding the equilibrium. The proposed method enables the analysis and design of neural network controllers with formal safety guarantees, as we illustrate with simulation results.
[151] arXiv:2004.00570 (cross-list from cs.LG) [pdf, ps, other]
Title: Tightened Convex Relaxations for Neural Network Robustness Certification
In this paper, we consider the problem of certifying the robustness of neural networks to perturbed and adversarial input data. Such certification is imperative for the application of neural networks in safety-critical decision-making and control systems. Certification techniques using convex optimization have been proposed, but they often suffer from relaxation errors that void the certificate. Our work exploits the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure. The proposed method is proven to tighten existing linear programming relaxations, and asymptotically achieves zero relaxation error as the partition is made finer. We develop a finite partition that attains zero relaxation error and use the result to derive a tractable partitioning scheme that minimizes the worst-case relaxation error. Experiments using real data show that the partitioning procedure is able to issue robustness certificates in cases where prior methods fail. Consequently, partition-based certification procedures are found to provide an intuitive, effective, and theoretically justified method for tightening existing convex relaxation techniques.
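To make "relaxation" concrete, here is the loosest member of that family, interval bound propagation, in a hedged numpy sketch. It is not the paper's partition-based procedure (which tightens linear programming relaxations, themselves tighter than the box bounds below); the network weights are random placeholders.

import numpy as np

def interval_bounds(weights, biases, lo, hi):
    # Propagate an input box [lo, hi] through an affine/ReLU network.
    # Splitting W into positive and negative parts gives valid, generally
    # loose, output bounds: any sound relaxation contains the true reachable
    # set, and certification succeeds if the bounds stay on one side of the
    # decision boundary.
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
bs = [np.zeros(8), np.zeros(2)]
print(interval_bounds(Ws, bs, -0.1 * np.ones(4), 0.1 * np.ones(4)))

Partitioning the input box into sub-boxes and taking the union of the resulting output boxes can only tighten such bounds, which is the intuition behind partition-based certification.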
[152] arXiv:2004.00589 (cross-list from eess.IV) [pdf, other]
Title: Robust Image Reconstruction with Misaligned Structural Information
Subjects: Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV); Numerical Analysis (math.NA)
Multi-modality (or multi-channel) imaging is becoming increasingly important and more widely available, e.g. hyperspectral imaging in remote sensing, spectral CT in the material sciences, and multi-contrast MRI and PET-MR in medicine. Research in the last decades has resulted in a plethora of mathematical methods to combine data from several modalities. State-of-the-art methods, often formulated as variational regularization, have been shown to significantly improve image reconstruction both quantitatively and qualitatively. Almost all of these models rely on the assumption that the modalities are perfectly registered, which is not the case in most real-world applications. We propose a variational framework which jointly performs reconstruction and registration, thereby overcoming this hurdle. Numerical results show the potential of the proposed strategy for various applications in hyperspectral imaging, PET-MR and multi-contrast MRI: typical misalignments between modalities, such as rotations, translations and zooms, can be effectively corrected during the reconstruction process. The proposed framework therefore allows the robust exploitation of shared information across multiple modalities under real conditions.
Replacements for Thu, 2 Apr 20
[153] arXiv:1104.4415 (replaced) [pdf, ps, other]
Title: A necessary condition for generic rigidity of bar-and-joint frameworks in $d$-space
Comments: There was an error in the proof of Theorem 3.3(b) in version 1 of this paper. A weaker statement was proved in version 2 and then used to derive the main result Theorem 4.1 when $d\leq 5$. The proof technique was subsequently refined in collaboration with Hakan Guler to extend this result to all $d\leq 11$ in Theorem 3.3 of version 3
Subjects: Combinatorics (math.CO)
[154] arXiv:1201.0989 (replaced) [pdf, ps, other]
Title: The simplicial boundary of a CAT(0) cube complex
Authors: Mark F. Hagen
Comments: Lemma 3.18 was not stated correctly. This is fixed, and a minor adjustment to the beginning of the proof of Theorem 3.19 has been made as a result. Statements other than 3.18 do not need to change. I thank Abdul Zalloum for the correction
Journal-ref: Algebr. Geom. Topol. 13 (2013) 1299-1367
Subjects: Group Theory (math.GR); Combinatorics (math.CO)
[155] arXiv:1507.06441 (replaced) [pdf, ps, other]
Title: Scattering on periodic metric graphs
Comments: 42 pages, 5 figures
Subjects: Spectral Theory (math.SP)
[156] arXiv:1511.03362 (replaced) [pdf, ps, other]
Title: Hermitian Functional Representation of Free Lévy Processes
Subjects: Probability (math.PR)
[157] arXiv:1602.09052 (replaced) [pdf, other]
Title: On the Generalised Colouring Numbers of Graphs that Exclude a Fixed Minor
Comments: 21 pages, to appear in European Journal of Combinatorics
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM)
[158] arXiv:1606.00914 (replaced) [pdf, other]
Title: A Harder-Narasimhan theory for Kisin modules
Comments: 51 pages, 4 figures. To appear in Algebraic Geometry
[159] arXiv:1607.01412 (replaced) [pdf, ps, other]
Title: Pink's theory of Hodge structures and the Hodge conjecture over function fields
Comments: 114 pages, v2: final version published in "t-motives: Hodge structures, transcendence and other motivic aspects", Editors G. B\"ockle, D. Goss, U. Hartl, M. Papanikolas, European Mathematical Society Congress Reports 2020
Subjects: Number Theory (math.NT)
[160] arXiv:1611.01999 (replaced) [pdf, other]
Title: A probabilistic model for the distribution of ranks of elliptic curves over $\mathbb{Q}$
Comments: This third version is a major revision of the paper after a wonderful referee report that had many suggestions and comments. I would like to express my gratitude to the anonymous referee
Subjects: Number Theory (math.NT)
[161] arXiv:1611.09513 (replaced) [pdf, other]
Title: New examples of extremal positive linear maps
Comments: This is the version, accepted for publication in Linear Algebra and Its Applications, DOI: 10.1016/j.laa.2020.03.033. A section on extensions of the examples to complex extremal positive maps is added to the previous version
Subjects: Rings and Algebras (math.RA); Algebraic Geometry (math.AG); Functional Analysis (math.FA)
[162] arXiv:1611.10278 (replaced) [pdf, ps, other]
Title: On the Kodaira dimension of maximal orders
Comments: Substantial revision with new results and errors in v2 fixed
Subjects: Algebraic Geometry (math.AG)
[163] arXiv:1703.00885 (replaced) [pdf, ps, other]
Title: Gowers norms control diophantine inequalities
Authors: Aled Walker
Comments: 75 pages. Reworked introduction based on referee comments. To appear in Algebra & Number Theory
Subjects: Number Theory (math.NT); Combinatorics (math.CO)
[164] arXiv:1704.04244 (replaced) [pdf]
Title: General three person two color Hat Game
Authors: Theo van Uem
Comments: 7 pages. v1 is about three and four players and is incorrect; v2 is a modified version only about three players. arXiv admin note: substantial text overlap with arXiv:1612.00276, arXiv:1612.05924; v3: modifications in 2.2 and 2.3; v4: modifications in 2.2 and 2.3, new section 2.4
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM); Information Theory (cs.IT)
[165] arXiv:1705.04771 (replaced) [pdf, ps, other]
Title: The Laplace Transform of the Second Moment in the Gauss Circle Problem
Comments: Incorporate referees' comments
Subjects: Number Theory (math.NT)
[166] arXiv:1705.05079 (replaced) [pdf, other]
Title: Real-analytic realization of Uniform Circular Systems and some applications
Comments: 47 pages. arXiv admin note: text overlap with arXiv:1508.00627 by other authors
Subjects: Dynamical Systems (math.DS)
[167] arXiv:1705.09705 (replaced) [pdf, ps, other]
Title: Random Products of Standard Maps
Comments: References updated. 33 pages. To be published in CMP
Subjects: Dynamical Systems (math.DS)
[168] arXiv:1709.02760 (replaced) [pdf, ps, other]
Title: Exact solvability and asymptotic aspects of generalized XX0 spin chains
Comments: 25 pages, 1 figure. Restructured and typos corrected. To be published in Physica A
[169] arXiv:1711.10206 (replaced) [pdf, ps, other]
Title: Quantifying Quillen's Uniform $\mathcal{F}_p$-isomorphism Theorem
Comments: 17 pages, exposition revised
Journal-ref: Homology, Homotopy and Applications, vol. 22 (2020), no. 2, pp. 73 -- 90
Subjects: Algebraic Topology (math.AT)
[170] arXiv:1712.03662 (replaced) [pdf, ps, other]
Title: A new cohomology class on the moduli space of curves
Authors: Paul Norbury
Comments: 48 pages, revised paper, new proof of KdV theorem
[171] arXiv:1801.00380 (replaced) [pdf, other]
Title: On Variable Ordination of Modified Cholesky Decomposition for Sparse Covariance Matrix Estimation
Subjects: Statistics Theory (math.ST)
[172] arXiv:1802.03081 (replaced) [pdf, ps, other]
Title: An optimal bound for the ratio between ordinary and uniform exponents of Diophantine approximation
Subjects: Number Theory (math.NT)
[173] arXiv:1804.00903 (replaced) [pdf, other]
Title: Sign changing solutions of Poisson's equation
Comments: 27 pages, 2 figures, various minor typos have been corrected
Subjects: Analysis of PDEs (math.AP)
[174] arXiv:1805.03778 (replaced) [pdf, ps, other]
Title: Threshold functions for substructures in random subsets of finite vector spaces
Comments: 23 pages. This version addresses referees' comments
Subjects: Combinatorics (math.CO)
[175] arXiv:1805.04006 (replaced) [pdf, other]
Title: Finite Element Approximation of a Strain-Limiting Elastic Model
Comments: [v3] modifications / simplifications in the proof of Theorem 7.1
Subjects: Numerical Analysis (math.NA)
[176] arXiv:1805.04175 (replaced) [pdf, ps, other]
Title: The Cavender-Farris-Neyman Model with a Molecular Clock
Comments: 54 pages, 19 figures
Subjects: Algebraic Geometry (math.AG); Commutative Algebra (math.AC); Combinatorics (math.CO); Populations and Evolution (q-bio.PE)
[177] arXiv:1806.06433 (replaced) [pdf, ps, other]
Title: The component structure of dense random subgraphs of the hypercube
Subjects: Combinatorics (math.CO)
[178] arXiv:1808.06714 (replaced) [pdf, other]
Title: Cluster Gauss-Newton method for finding multiple approximate minimisers of nonlinear least squares problems with applications to parameter estimation of pharmacokinetic models
Subjects: Numerical Analysis (math.NA)
[179] arXiv:1808.10747 (replaced) [pdf, other]
Title: Geometry of the Phase Retrieval Problem
Comments: This is a substantial revision of version 1
Subjects: Numerical Analysis (math.NA); Image and Video Processing (eess.IV); Mathematical Physics (math-ph); Differential Geometry (math.DG)
[180] arXiv:1810.00928 (replaced) [pdf, ps, other]
Title: Stacky dualities for the moduli of Higgs bundles
Comments: 43 pages; added a section on background for the physical conjecture to the introduction, fixed typos, clarified and expanded certain proofs (results unchanged). To appear in Adv. Math
[181] arXiv:1810.02238 (replaced) [pdf, ps, other]
Title: Super-multiplicativity of ideal norms in number fields
Comments: Final version. The content is the same as the "Online First" version published on the journal's web site
Journal-ref: Acta Arith. 193 (2020), no. 1, 75-93
Subjects: Number Theory (math.NT)
[182] arXiv:1810.06005 (replaced) [pdf, ps, other]
Title: Note on Toda brackets
Subjects: Algebraic Topology (math.AT)
[183] arXiv:1810.06022 (replaced) [pdf, other]
Title: Rethinking the Reynolds Transport Theorem, Liouville Equation, and Perron-Frobenius and Koopman Operators
Comments: 7 figure files
Subjects: Fluid Dynamics (physics.flu-dyn); Probability (math.PR); Classical Physics (physics.class-ph)
[184] arXiv:1810.07360 (replaced) [pdf, ps, other]
Title: Disjointness of Möbius from asymptotically periodic functions
Authors: Fei Wei
Comments: 35 pages
Subjects: Number Theory (math.NT)
[185] arXiv:1812.07300 (replaced) [pdf, ps, other]
Title: New parameterized solution with application to bounding secondary variables in finite element models of structures
Authors: Evgenija D. Popova (Institute of Mathematics and Informatics, Bulgarian Academy of Sciences)
Comments: 29 pages, 5 Postscript figures
Journal-ref: Applied Mathematics and Computation, 378 (2020) 125205
Subjects: Numerical Analysis (math.NA)
[186] arXiv:1901.02974 (replaced) [pdf, other]
Title: Complex oscillatory patterns near singular Hopf bifurcation in a two time-scale ecosystem
Authors: Susmita Sadhu
Subjects: Dynamical Systems (math.DS)
[187] arXiv:1901.04823 (replaced) [pdf, ps, other]
Title: On the maximal operator of a general Ornstein-Uhlenbeck semigroup
Comments: 21 pages. Introduction revised. Some changes in Sections 3 and 4
Subjects: Functional Analysis (math.FA)
[188] arXiv:1901.07443 (replaced) [pdf, ps, other]
Title: The $h^*$-polynomial of the order polytope of the zig-zag poset
Comments: 18 pages, 2 figures
Subjects: Combinatorics (math.CO); Commutative Algebra (math.AC)
[189] arXiv:1902.08885 (replaced) [pdf, other]
Title: De-Biasing The Lasso With Degrees-of-Freedom Adjustment
[190] arXiv:1903.09969 (replaced) [pdf, ps, other]
Title: Asymptotic properties of steady and nonsteady solutions to the 2D Navier-Stokes equations with finite generalized Dirichlet integral
Comments: 14 pages. The result of the asymptotic behavior of the derivative of velocity is improved
Subjects: Analysis of PDEs (math.AP)
[191] arXiv:1903.10837 (replaced) [pdf, ps, other]
Title: Exploiting Computation Replication for Mobile Edge Computing: A Fundamental Computation-Communication Tradeoff Study
Comments: To appear in IEEE Transactions on Wireless Communications
Subjects: Information Theory (cs.IT)
[192] arXiv:1904.11665 (replaced) [pdf, other]
Title: Rapid evaluation of the spectral signal detection threshold and Stieltjes transform
Authors: William Leeb
Subjects: Computation (stat.CO); Numerical Analysis (math.NA); Statistics Theory (math.ST)
[193] arXiv:1905.01332 (replaced) [pdf, other]
Title: A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization
Comments: 42 pages, 7 figures, 5 tables
Subjects: Optimization and Control (math.OC)
[194] arXiv:1905.01785 (replaced) [pdf, other]
Title: The gradient discretisation method for slow and fast diffusion porous media equations
Subjects: Numerical Analysis (math.NA)
[195] arXiv:1905.03514 (replaced) [pdf, ps, other]
Title: Well-posedness and Long-time Behaviour for a Nonlinear Parabolic Equation with Hysteresis
Comments: 23 pages
Subjects: Analysis of PDEs (math.AP)
[196] arXiv:1905.08144 (replaced) [pdf, other]
Title: Incongruent equipartitions of the plane
Comments: 12 pages, 9 figures. European Journal of Combinatorics (in press)
Subjects: Combinatorics (math.CO); Metric Geometry (math.MG)
[197] arXiv:1905.11373 (replaced) [pdf, other]
Title: Revisiting Stochastic Extragradient
Comments: Accepted to AISTATS 2020. 16 pages, 9 figures, 2 algorithms
[198] arXiv:1906.00481 (replaced) [pdf, other]
Title: Logarithmic concavity for morphisms of matroids
Comments: 16 pages. Minor edits
Subjects: Combinatorics (math.CO); Algebraic Geometry (math.AG)
[199] arXiv:1906.00611 (replaced) [pdf, ps, other]
Title: Upper Estimates for Electronic Density in Heavy Atoms and Molecules
Authors: Victor Ivrii
Comments: 19 pp
[200] arXiv:1906.03754 (replaced) [pdf, other]
Title: A sequential least squares method for elliptic equations in non-divergence form
Authors: Ruo Li, Fanyi Yang
Subjects: Numerical Analysis (math.NA)
[201] arXiv:1906.03791 (replaced) [pdf, other]
Title: Randomization and reweighted $\ell_1$-minimization for A-optimal design of linear inverse problems
Comments: 27 Pages; Accepted for publication in SIAM Journal on Scientific Computing
Subjects: Numerical Analysis (math.NA); Computation (stat.CO)
[202] arXiv:1906.04939 (replaced) [pdf, other]
Title: Geometric Approach to Quantum Theory
Authors: Albert Schwarz
Journal-ref: SIGMA 16 (2020), 020, 3 pages
[203] arXiv:1906.09128 (replaced) [pdf, other]
Title: Finite Element Systems for vector bundles : elasticity and curvature
Comments: 45 pages, 6 figures. v2: 49 pages, details and references added
Subjects: Numerical Analysis (math.NA)
[204] arXiv:1906.11293 (replaced) [pdf, other]
Title: Empirical Process Results for Exchangeable Arrays
Comments: main paper until page 28, then supplement. The paper supersedes our previous paper "Asymptotic results under multiway clustering"
Subjects: Statistics Theory (math.ST); Econometrics (econ.EM)
[205] arXiv:1906.11857 (replaced) [pdf, other]
Title: Functional relations for elliptic polylogarithms
Comments: 39 pages, 5 appendices, added references and corrected typos
Journal-ref: Broedel et al 2020 J. Phys. A: Math. Theor
[206] arXiv:1907.00933 (replaced) [pdf, ps, other]
Title: Biased permutative equivariant categories
Comments: To appear in HHA
[207] arXiv:1907.00961 (replaced) [pdf, other]
Title: On the development of symmetry-preserving finite element schemes for ordinary differential equations
Subjects: Numerical Analysis (math.NA)
[208] arXiv:1907.02912 (replaced) [pdf, ps, other]
Title: On Finite Exchangeability and Conditional Independence
Authors: Kayvan Sadeghi
Comments: 25 pages, 2 figures
Subjects: Statistics Theory (math.ST); Probability (math.PR); Other Statistics (stat.OT)
[209] arXiv:1907.05799 (replaced) [pdf, ps, other]
Title: Convergent discretisation schemes for transition path theory for diffusion processes
Comments: 25 pages, 4 figures; significantly revised version, new numerical results
Subjects: Numerical Analysis (math.NA)
[210] arXiv:1907.10049 (replaced) [pdf, other]
Title: Haldane's formula in Cannings models: The case of moderately weak selection
Comments: In this version, Theorem 3.5 (Haldane's formula) is extended to a wider range of selection parameters. The proof is given in the new Section 7
Subjects: Probability (math.PR)
[211] arXiv:1908.00346 (replaced) [pdf, other]
Title: Phase transitions and percolation at criticality in planar enhanced random connection models
Comments: 29 pages, 13 figures, proofs of some results are revised
Subjects: Probability (math.PR)
[212] arXiv:1908.00944 (replaced) [pdf, ps, other]
Title: Positive scalar curvature on manifolds with odd order abelian fundamental groups
Authors: Bernhard Hanke
Comments: 30 pages; 2 figures; revision following referee's suggestions; to appear in Geometry&Topology
[213] arXiv:1908.00978 (replaced) [pdf, ps, other]
Title: Finding Dominating Induced Matchings in $P_9$-Free Graphs in Polynomial Time
Comments: arXiv admin note: substantial text overlap with arXiv:1905.05582, arXiv:1706.09301, arXiv:1706.04894
Subjects: Discrete Mathematics (cs.DM); Combinatorics (math.CO)
[214] arXiv:1908.07364 (replaced) [pdf, ps, other]
Title: Colored five-vertex models and Lascoux polynomials and atoms
Comments: 23 pages, 7 figures; added examples and other changes from referee report
[215] arXiv:1908.07814 (replaced) [pdf, ps, other]
Title: Quasi-local Algebras and Asymptotic Expanders
Comments: Accepted by Groups, Geometry and Dynamics
[216] arXiv:1908.11409 (replaced) [pdf, ps, other]
Title: Hodge--Gromov--Witten theory
Authors: Jérémy Guéré
Subjects: Algebraic Geometry (math.AG)
[217] arXiv:1909.01346 (replaced) [pdf, other]
Title: Entanglement spectrum and entropy in topological non-Hermitian systems and non-unitary conformal field theories
Comments: 17 pages, 11 figures
[218] arXiv:1909.01758 (replaced) [pdf, ps, other]
Title: Value Iteration Algorithm for Mean-field Games
Comments: 23 pages
[219] arXiv:1909.09352 (replaced) [pdf, other]
Title: On the regularity of Cauchy hypersurfaces and temporal functions in closed cone structures
Authors: E. Minguzzi
Comments: 17 pages. v2: updated some references, fixed some typos
[220] arXiv:1909.11726 (replaced) [pdf, ps, other]
Title: A variant of Schur's product theorem and its applications
Authors: Jan Vybíral
Subjects: Numerical Analysis (math.NA)
[221] arXiv:1909.13676 (replaced) [pdf, other]
Title: Optimal Algorithms for Submodular Maximization with Distributed Constraints
[222] arXiv:1911.01183 (replaced) [pdf, ps, other]
Title: Blow up of fractional Schrödinger equations on manifolds with nonnegative Ricci curvature
Comments: 15 pages. All comments welcome
Subjects: Analysis of PDEs (math.AP)
[223] arXiv:1911.03510 (replaced) [pdf, ps, other]
Title: Thomas-Fermi approximation to electronic density
Authors: Victor Ivrii
Comments: 10 pp
[224] arXiv:1911.03569 (replaced) [pdf, other]
Title: The matroid stratification of the Hilbert scheme of points in P^1
Authors: Rob Silversmith
Comments: Since the previous version, the paper has been significantly shortened, simplified, clarified, rearranged, and made more consistent with tropical geometry literature. A much simpler proof of Theorem 5.11 is given. Comments welcome!
[225] arXiv:1911.05498 (replaced) [pdf, other]
Title: Superiorization vs. Accelerated Convex Optimization: The Superiorized/Regularized Least-Squares Case
[226] arXiv:1911.07001 (replaced) [pdf, ps, other]
Title: Étude des opérateurs d'évolution en caractéristique 2 (Study of evolution operators in characteristic 2)
Authors: Richard Varro
Comments: in French
[227] arXiv:1911.07029 (replaced) [pdf, other]
Title: On the Age of Information in Multi-Source Queueing Models
Comments: 32 pages, 15 figures
Subjects: Information Theory (cs.IT)
[228] arXiv:1912.00901 (replaced) [pdf, ps, other]
Title: Hopf-Galois structures on extensions of degree $p^{2} q$ and skew braces of order $p^{2} q$: the cyclic Sylow $p$-subgroup case
Comments: 43 pages
Subjects: Rings and Algebras (math.RA); Group Theory (math.GR); Number Theory (math.NT)
[229] arXiv:1912.02237 (replaced) [pdf, ps, other]
Title: Max-compound Cox processes. III
Comments: 16 pages
Subjects: Probability (math.PR)
[230] arXiv:1912.03550 (replaced) [pdf, other]
Title: Minimax Adaptive Control for State Matrix with Unknown Sign
Authors: Anders Rantzer
Subjects: Optimization and Control (math.OC)
[231] arXiv:1912.07152 (replaced) [pdf, other]
Title: Topology Identification with Latent Nodes using Matrix Decomposition
Comments: 15 pages, 7 figures
[232] arXiv:1912.07920 (replaced) [pdf, ps, other]
Title: $\Lambda$-adic Families of Jacobi Forms
Comments: 22 pages, to appear in Research in Number Theory
Subjects: Number Theory (math.NT)
[233] arXiv:1912.08170 (replaced) [pdf, ps, other]
Title: Consensus seeking gradient descent flows on boundaries of convex sets
Authors: Johan Markdahl
[234] arXiv:1912.10535 (replaced) [pdf, ps, other]
Title: A graph-theoretic criterion for absolute irreducibility of integer-valued polynomials with square-free denominator
Comments: To appear in Comm. Algebra
Subjects: Commutative Algebra (math.AC)
[235] arXiv:1912.11497 (replaced) [pdf, ps, other]
Title: Nested algebraic Bethe ansatz for deformed orthogonal and symplectic spin chains
Comments: 22 pages, v2: final version
[236] arXiv:1912.13334 (replaced) [pdf, ps, other]
Title: Comparing demographics of signatories to public letters on diversity in the mathematical sciences
Comments: 21 pages, 2 tables, 2 figures; minor textual edits made to previous version
Subjects: History and Overview (math.HO)
[237] arXiv:2001.02425 (replaced) [pdf, other]
Title: A simple symmetric exclusion process driven by an asymmetric tracer particle
Authors: Arvind Ayyer
Comments: 28 pages, 3 figures, minor improvements, added references
[238] arXiv:2001.03327 (replaced) [pdf, ps, other]
Title: How to Cut a Cake Fairly: A Generalization to Groups
Subjects: Theoretical Economics (econ.TH); Combinatorics (math.CO)
[239] arXiv:2001.04337 (replaced) [pdf, ps, other]
Title: Congruences for coefficients of modular functions in levels 3, 5, and 7 with poles at 0
Comments: arXiv admin note: text overlap with arXiv:1709.10189. Version 2: corrected typos
Subjects: Number Theory (math.NT)
[240] arXiv:2001.07368 (replaced) [pdf, ps, other]
Title: Hardy inequalities with double singular weights
Subjects: Analysis of PDEs (math.AP)
[241] arXiv:2001.07468 (replaced) [pdf, other]
Title: Stieltjes continued fractions related to the Paperfolding sequence and Rudin-Shapiro sequence
Authors: Wen Wu
Comments: 23 pages, 4 figures
Subjects: Number Theory (math.NT)
[242] arXiv:2001.07753 (replaced) [pdf, ps, other]
Title: Strong solutions of forward-backward stochastic differential equations with measurable coefficients
Comments: This is an improved and shorter version of a paper first posted with the title "Probabilistic approach to quasilinear PDEs with measurable coefficients"
[243] arXiv:2001.07805 (replaced) [pdf, other]
Title: When does the Tukey median work?
Subjects: Statistics Theory (math.ST); Machine Learning (cs.LG); Signal Processing (eess.SP); Machine Learning (stat.ML)
[244] arXiv:2001.09042 (replaced) [pdf, ps, other]
Title: Strong approximation of Gaussian beta-ensemble characteristic polynomials: the hyperbolic regime
Comments: Version 2. Improved the dependence on the parameter Omega in the main theorem to allow superpolynomial error rates. Added a connection between the Gaussian field approximation and the central limit theorem for linear statistics
Subjects: Probability (math.PR)
[245] arXiv:2002.01005 (replaced) [pdf, other]
Title: On rational electromagnetic fields
Comments: 1+14 pages LaTeX, 4 figures in 6 parts; v2: additional j=1 knot solution, discussion and illustration of energy densities and field lines, 3 more refs., published in PLA
[246] arXiv:2002.01024 (replaced) [pdf, other]
Title: Congruent number triangles with the same hypotenuse
Authors: David Lowry-Duda
Comments: 7 pages; corrected statement of conjecture
Subjects: Number Theory (math.NT)
[247] arXiv:2002.01174 (replaced) [pdf, ps, other]
Title: On linearised vacuum constraint equations on Einstein manifolds
Comments: Minor additions and inaccuracies removed
[248] arXiv:2002.03929 (replaced) [pdf, ps, other]
Title: ${\cal N}{=}\,4$ supersymmetric Calogero-Sutherland models
Comments: 1+6 pages; v2: to be published in PRD
[249] arXiv:2002.08612 (replaced) [pdf, other]
Title: Collapsed Ricci limit spaces as non-collapsed $RCD$ spaces
Authors: Shouhei Honda
Comments: Dedicated to Misha Gromov's 75th birthday
Journal-ref: SIGMA 16 (2020), 021, 10 pages
[250] arXiv:2002.09791 (replaced) [pdf, ps, other]
Title: Self-similarity and spectral dynamics
[251] arXiv:2002.10874 (replaced) [pdf, other]
Title: The moduli space of tropical curves with fixed Newton polygon
Comments: 24 pages, 18 figures
Subjects: Algebraic Geometry (math.AG); Combinatorics (math.CO)
[252] arXiv:2003.00599 (replaced) [pdf, other]
Title: Regularity results for shortest billiard trajectories in convex bodies in $\mathbb{R}^n$
Comments: 25 pages, 9 figures, justification of Examples E and F revised, results unchanged
Subjects: Dynamical Systems (math.DS)
[253] arXiv:2003.01574 (replaced) [pdf, ps, other]
Title: A quadratic identity in the shuffle algebra and an alternative proof for de Bruijn's formula
Comments: 27 pages
[254] arXiv:2003.03139 (replaced) [pdf, ps, other]
Title: Sharp Interface Limit of a Stokes/Cahn-Hilliard System, Part I: Convergence Result
Comments: 50 pages
[255] arXiv:2003.03823 (replaced) [pdf, ps, other]
Title: On Adiabatic Oscillations of a Stratified Atmosphere on the Flat Earth
Authors: Tetu Makino
Comments: Discussion on the absence of continuous spectrum added
Subjects: Analysis of PDEs (math.AP)
[256] arXiv:2003.04082 (replaced) [pdf, other]
Title: Weyl semimetals and spin$^c$ cobordism
Authors: Ümit Ertem
Comments: 12 pages, minor modifications
[257] arXiv:2003.04595 (replaced) [pdf, other]
Title: Nonlinear Power Method for Computing Eigenvectors of Proximal Operators and Neural Networks
Subjects: Spectral Theory (math.SP); Optimization and Control (math.OC)
[258] arXiv:2003.05399 (replaced) [pdf, ps, other]
Title: Hamiltonian structures for integrable hierarchies of Lagrangian PDEs
Authors: Mats Vermeeren
[259] arXiv:2003.06663 (replaced) [pdf, ps, other]
Title: On the classification of topological orders
Comments: 28 pages. v2 contains small improvements
Subjects: Category Theory (math.CT); Strongly Correlated Electrons (cond-mat.str-el); Quantum Algebra (math.QA)
[260] arXiv:2003.06926 (replaced) [pdf, other]
Title: Stochastic gradient descent with random learning rate
Authors: Daniele Musso
Comments: 13 pages, 12 figures. v2: added section on related work
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Data Analysis, Statistics and Probability (physics.data-an); Machine Learning (stat.ML)
[261] arXiv:2003.07350 (replaced) [pdf, ps, other]
Title: A fractional Laplacian problem with mixed singular nonlinearities and nonregular data
Comments: We are grateful for any feedback or comments. arXiv admin note: text overlap with arXiv:1910.04716
Subjects: Analysis of PDEs (math.AP)
[262] arXiv:2003.07809 (replaced) [pdf, ps, other]
Title: Some new rational Gushel fourfolds
Comments: Some other constructions of rational GM fourfolds in (M_4)_26 have been added
Subjects: Algebraic Geometry (math.AG)
[263] arXiv:2003.09147 (replaced) [pdf, ps, other]
Title: Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
Subjects: Optimization and Control (math.OC)
[264] arXiv:2003.09911 (replaced) [pdf, ps, other]
Title: The talented monoid of a directed graph with applications to graph algebras
Comments: 29 pages; minor corrections from first version
Subjects: Rings and Algebras (math.RA)
[265] arXiv:2003.10708 (replaced) [pdf, ps, other]
Title: Compatibility between non-Kähler structures on complex (nil)manifolds
Comments: 13 pages. v2: Reorganized section 3; Minor additions and corrections
Subjects: Differential Geometry (math.DG)
[266] arXiv:2003.10973 (replaced) [pdf, ps, other]
Title: Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness
[267] arXiv:2003.12174 (replaced) [pdf, ps, other]
Title: On the $8π$-subcritical mass threshold of a Patlak-Keller-Segel-Navier-Stokes system
Authors: Yishu Gong, Siming He
Subjects: Analysis of PDEs (math.AP)
[268] arXiv:2003.13199 (replaced) [pdf, ps, other]
Title: A note on Onicescu's informational energy and correlation coefficient in exponential families
Authors: Frank Nielsen
Comments: 13 pages
Subjects: Information Theory (cs.IT)
[269] arXiv:2003.13367 (replaced) [pdf, other]
Title: Neural Communication Systems with Bandwidth-limited Channel
[270] arXiv:2003.13372 (replaced) [pdf, ps, other]
Title: Face numbers of uniform triangulations of simplicial complexes
Comments: 25 pages, one figure; minor changes
Subjects: Combinatorics (math.CO)
[271] arXiv:2003.13572 (replaced) [pdf, other]
Title: Dominating surface-group representations into $\mathrm{PSL}_2 (\mathbb{C})$ in the relative representation variety
Comments: 15 pages, 3 figures, v2 corrects our handling of the degenerate and co-axial case
Subjects: Geometric Topology (math.GT); Representation Theory (math.RT)
[272] arXiv:2003.13810 (replaced) [pdf, ps, other]
Title: Mean-field limit of Age and Leaky memory dependent Hawkes processes
Authors: Valentin Schmutz
Subjects: Probability (math.PR)
[273] arXiv:2003.13909 (replaced) [pdf, other]
Title: Intelligent Reflecting Surface-Aided Joint Processing Coordinated Multipoint Transmission
Comments: This is a preprint version submitted to an IEEE journal for possible publication
[274] arXiv:2003.14153 (replaced) [pdf, other]
Title: $3n+1$ problem: an heuristic lower bound for the number of integers connected to 1 and less than $x$
Subjects: Number Theory (math.NT)
[275] arXiv:2003.14379 (replaced) [pdf, ps, other]
Title: Log abundance of the moduli b-divisors of lc-trivial fibrations
Authors: Zhengyu Hu
Comments: 49 pages. The original Section 4 is removed to simplify the proof. All comments are welcome
Subjects: Algebraic Geometry (math.AG)
[ total of 275 entries: 1-275 ]
Astrophysics Source Code Library
Searching for codes credited to 'Tennyson, Jonathan'
[ascl:1605.014] DUO: Spectra of diatomic molecules
Duo computes rotational, rovibrational and rovibronic spectra of diatomic molecules. The software, written in Fortran 2003, solves the Schrödinger equation for the motion of the nuclei for the simple case of uncoupled, isolated electronic states and also for the general case of an arbitrary number and type of couplings between electronic states. Possible couplings include spin–orbit, angular momenta, spin-rotational and spin–spin. Introducing the relevant couplings using so-called Born–Oppenheimer breakdown curves can correct non-adiabatic effects.
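For orientation, the uncoupled single-state case that Duo generalizes reduces to a one-dimensional radial Schrödinger equation, which a few lines of numpy can solve by finite differences. A hedged sketch in reduced units (hbar = mu = 1) with a Morse potential standing in for a real diatomic curve; it is not Duo's coupled-channel machinery, and all parameter values are illustrative.

import numpy as np

def vibrational_levels(V, r_min=0.5, r_max=6.0, n=1200, nlev=4):
    # Discretize H = -(1/2) d^2/dr^2 + V(r) on a uniform grid with
    # second-order finite differences and diagonalize; returns the nlev
    # lowest eigenvalues (bound vibrational levels for a deep enough well).
    r = np.linspace(r_min, r_max, n)
    h = r[1] - r[0]
    kin = 1.0 / (2.0 * h * h)
    H = np.diag(2.0 * kin + V(r)) - kin * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.eigvalsh(H)[:nlev]

D_e, a, r_e = 10.0, 1.0, 1.6  # illustrative Morse parameters
morse = lambda r: D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2
print(vibrational_levels(morse))  # anharmonically spaced levels below D_e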
[ascl:1803.014] ExoCross: Spectra from molecular line lists
ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.
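As one concrete piece of that pipeline, the Voigt profile (the convolution of a Gaussian of standard deviation sigma with a Lorentzian of half-width gamma) is conventionally evaluated through the Faddeeva function. A minimal sketch assuming scipy; it illustrates the line shape only, not ExoCross's vectorized implementation.

import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    # V(x; sigma, gamma) = Re[w(z)] / (sigma * sqrt(2*pi)),
    # with z = (x + i*gamma) / (sigma * sqrt(2)).
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=0.8, gamma=0.3))  # reduces to a Gaussian as gamma -> 0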
Nonlocality in the de Broglie-Bohm Interpretation of Quantum Mechanics
One of the counterintuitive features of quantum mechanics is the phenomenon of nonlocality. In simple terms, this implies that, in some circumstances, particles that have interacted at some initial time and then become spatially separated remain "entangled", such that a measurement on one particle affects the other instantaneously, no matter how large the separation of the two particles has become.
The standard interpretation of quantum mechanics and the de Broglie-Bohm interpretation are both consistent with the experimental evidence, but nonlocality is easier to understand if one visualizes particle trajectories rather than the collapse of the wavefunction. In the trajectory approach, a measurement on one particle can lead to apparently incorrect predicted trajectories ("surreal trajectories") for the entangled particles. Surreal trajectories are a consequence of nonlocality, by which the particles are able to influence one another instantaneously.
This Demonstration considers the motion of two orthogonal one-dimensional entangled particles in a Calogero-Moser potential with a constant phase shift. It shows that a measurement of the initial starting position of one particle affects the trajectory of the other particle. The motion of the two particles in two-dimensional configuration space can be described by a single trajectory, along which the motion is local. Projecting this trajectory onto the two coordinate axes decomposes it into two spatially separated motions in one-dimensional real space, in which the motion becomes entangled and quantum entanglement becomes equivalent to quantum nonlocality. Here, configuration space represents a projection from real space, which can simulate quantum nonlocality. If the motion is entangled, chaotic or ergodic motion often results. The measured position of a particle is determined by the initial choice of its starting point.
Mathematically, entanglement occurs when the wavefunction cannot be factorized; for example, when quantum superposition produces a non-product state, the motion becomes entangled. This means that the motion in one coordinate direction also depends on the other coordinate directions, whether the motion is periodic or not. The component motions are independent only if the wavefunction is factorizable in configuration space.
The Demonstration shows the motion in configuration space, real space, and phase space, in which the phase space consists of all possible values of position and velocity variables. The degree of entanglement is represented by the parameter
For , the wavefunction is factorizable in configuration space. The motion is periodic, and the particles behave independently of one another. The initial starting position of one particle does not affect the motion of the other particle.
For , the motion of the two particles becomes entangled and chaotic, depending on the constant phase shift . The initial starting position of a particle affects the motion of the other particle in real space.
The graphic shows the trajectory in configuration space (red) and the motion of the two particles in one-dimensional real space (green and blue). If the phase space button is activated, these are shown: the and components of the velocity in configuration space (red), the position in the direction, the component of the velocity of particle 1 (blue) and the position in the direction, and the component of the velocity of particle 2 (green).
Contributed by:Klaus von Bloh (March 2016)
Open content licensed under CC BY-NC-SA
Consider the two-particle Schrödinger equation for the Calogero-Moser potential; its bound-state solutions involve associated Laguerre polynomials. An entangled, un-normalized wavefunction for two one-dimensional particles, which cannot move along the entire x and y axes but are constrained to remain on the half x and y axes, can be expressed as a superposition of products of eigenfunctions with permuted eigenenergies of the corresponding stationary one-dimensional Schrödinger equation, weighted by the entanglement parameter. The eigenfunctions are built from associated Laguerre polynomials indexed by the quantum numbers. (The explicit formulas are rendered graphically on the source page and are not recovered here.) The wavefunction is taken from [1].
When the entanglement parameter vanishes, the velocity component in one coordinate direction does not depend on the other coordinate, and the trajectory is obtained by integrating the two components independently. The initial starting positions of particle 1 and particle 2, which can be freely chosen for the numerical integration of the velocity field, are set with the controls. For certain symmetric choices of the initial positions and parameters, both components of the velocity are equal, which produces a straight line in configuration space. In a further special case, the trajectory becomes periodic, depending only on the constant phase shift. (The velocity formulas are rendered graphically on the source page and are not recovered here.)
In the program, if PlotPoints, AccuracyGoal, PrecisionGoal, and MaxSteps are increased (if enabled), the results will be more accurate.
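A hedged Python sketch of the same kind of calculation: it integrates the Bohmian guidance equation dq/dt = Im(grad psi / psi) (units hbar = m = 1) for an entangled two-particle state with a constant phase shift phi. For simplicity it uses degenerate two-dimensional harmonic-oscillator eigenstates in place of the Demonstration's Laguerre-based wavefunction, so the numbers differ, but the qualitative point survives: for phi = 0 or pi the wavefunction is real and the Bohmian velocities vanish, while other phases generate genuinely coupled motion, so the initial position of one coordinate influences the path of the other.

import numpy as np

def psi(x, y, phi):
    # Superposition of the degenerate products g(x)e(y) and e(x)g(y)
    # with a constant relative phase phi (unnormalized).
    g = lambda u: np.exp(-u * u / 2.0)        # ground state
    e = lambda u: u * np.exp(-u * u / 2.0)    # first excited state
    return g(x) * e(y) + np.exp(1j * phi) * e(x) * g(y)

def velocity(x, y, phi, h=1e-5):
    # Guidance equation v = Im(grad psi / psi), via central differences.
    p = psi(x, y, phi)
    vx = ((psi(x + h, y, phi) - psi(x - h, y, phi)) / (2 * h) / p).imag
    vy = ((psi(x, y + h, phi) - psi(x, y - h, phi)) / (2 * h) / p).imag
    return vx, vy

x, y, phi, dt = 0.8, 0.3, np.pi / 3, 1e-3
for _ in range(20000):                         # forward-Euler trajectory
    vx, vy = velocity(x, y, phi)
    x, y = x + dt * vx, y + dt * vy
print(x, y)

Because the two superposed products are energy-degenerate here, the velocity field is time-independent and the trajectory follows its streamlines; changing the initial position of one coordinate changes the subsequent motion of the other whenever phi is neither 0 nor pi.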
[1] M. Trott, The Mathematica GuideBook for Symbolics, New York: Springer, 2006.
[2] B.-G. Englert, M. O. Scully, G. Süssmann, and H. Walther, "Surrealistic Bohm Trajectories," Zeitschrift für Naturforschung A, 47(12), 1992 pp. 1175-1186.
[3] "" (Mar 16, 2016)
[4] S. Goldstein, "Bohmian Mechanics," The Stanford Encyclopedia of Philosophy. (Mar 16, 2016)
[5] S. Kocsis, B. Braverman, S. Ravets, M. J. Stevens, R. P. Mirin, L. Krister Shalm, and A. M. Steinberg, "Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer," Science, 332(6034), 2011 pp. 1170–1173. doi:10.1126/science.1202218.
[6] D. H. Mahler, L. Rozema, K. Fisher, L. Vermeyden, K. J. Resch, H. M. Wiseman, and A. Steinberg, "Experimental Nonlocal and Surreal Bohmian Trajectories," Science Advances, 2(2), 2016 pp. 1-7. doi:10.1126/sciadv.1501466.
atom /ˈætəm/, n.
1. Physics.
b. an atom with one of the electrons replaced by some other particle: muonic atom; kaonic atom.
2. Energy. this component as the source of nuclear energy.
3. a hypothetical particle of matter so minute as to admit of no division.
4. anything extremely small; a minute quantity.
[1350-1400; ME attomos, athomus < L atomus < Gk átomos, n. use of átomos undivided, equiv. to a- A-6 + tomós divided, verbid of témnein to cut]
Syn. 4. shred, speck, scintilla, iota, jot, whit.
* * *
Smallest unit into which matter can be divided and still retain the characteristic properties of an element.
The word derives from the Greek atomos ("indivisible"), and the atom was believed to be indivisible until the early 20th century, when electrons and the nucleus were discovered. It is now known that an atom has a positively charged nucleus that makes up more than 99.9% of the atom's mass but only about 10⁻¹⁴ (less than a trillionth) of its volume. The nucleus is composed of positively charged protons and electrically neutral neutrons, each about 2,000 times as massive as an electron. Most of the atom's volume consists of a cloud of electrons that have very small mass and negative charge. The electron cloud is bound to the nucleus by the attraction of opposite charges. In a neutral atom, the protons in the nucleus are balanced by the electrons. An atom that has gained or lost electrons becomes negatively or positively charged and is called an ion.
* * *
Most of the atom is empty space. The rest consists of a positively charged nucleus of protons and neutrons surrounded by a cloud of negatively charged electrons. The nucleus is small and dense compared with the electrons, which are the lightest charged particles in nature. Electrons are attracted to any positive charge by the electric force; in an atom, electric forces bind the electrons to the nucleus.
Because of the nature of quantum mechanics, no single image has been entirely satisfactory at visualizing the atom's various characteristics, which thus forces physicists to use complementary pictures of the atom to explain different properties. In some respects, the electrons in an atom behave like particles orbiting the nucleus. In others, the electrons behave like waves frozen in position around the nucleus. Such wave patterns, called orbitals (orbital), describe the distribution of individual electrons. The behaviour of an atom is strongly influenced by these orbital properties, and its chemical properties are determined by orbital groupings known as shells.
Atomic model
Basic properties
Atomic mass and isotopes
The electron
Scientists have known since the late 19th century that the electron has a negative electric charge. The value of this charge was first measured by the American physicist Robert Millikan between 1909 and 1910. In his oil-drop experiment, Millikan suspended tiny oil drops in a chamber containing an oil mist. By measuring the rate of fall of the oil drops, he was able to determine their weight. Oil drops that had an electric charge (acquired, for example, by friction when moving through the air) could then be slowed down or stopped by applying an electric force. By comparing applied electric force with changes in motion, Millikan was able to determine the electric charge on each drop. After he had measured many drops, he found that the charges on all of them were simple multiples of a single number. This basic unit of charge was the charge on the electron, and the different charges on the oil drops corresponded to those having 2, 3, 4,… extra electrons on them. The charge on the electron is now accepted to be 1.60217733 × 10−19 coulomb. For this work Millikan was awarded the Nobel Prize for Physics in 1923.
The electron has a mass of about 9.1093897 × 10−28 gram. The mass of a proton or neutron is about 1,836 times larger. This explains why the mass of an atom is primarily determined by the mass of the protons and neutrons in the nucleus.
The electron has other intrinsic properties. One of these is called spin. The electron can be pictured as being something like the Earth, spinning around an axis of rotation. In fact, most elementary particles have this property. Unlike the Earth, however, they exist in the subatomic world and are governed by the laws of quantum mechanics. Therefore, these particles cannot spin in any arbitrary way, but only at certain specific rates. These rates can be 1/2, 1, 3/2, 2,… times a basic unit of rotation. Like protons and neutrons, electrons have spin 1/2.
Particles with half-integer spin are called fermions, for the Italian American physicist Enrico Fermi, who investigated their properties in the first half of the 20th century. Fermions have one important property that will help explain both the way that electrons are arranged in their orbits and the way that protons and neutrons are arranged inside the nucleus. They are subject to the Pauli exclusion principle (named for the Austrian physicist Wolfgang Pauli), which states that no two fermions can occupy the same state—for example, the two electrons in a helium atom must have different spin directions if they occupy the same orbit.
Because a spinning electron can be thought of as a moving electric charge, electrons can be thought of as tiny electromagnets. This means that, like any other magnet, an electron will respond to the presence of a magnetic field by twisting. (Think of a compass needle pointing north under the influence of the Earth's magnetic field.) This fact is usually expressed by saying that electrons have a magnetic moment. In physics, magnetic moment relates the strength of a magnetic field to the torque experienced by a magnetic object. Because of their intrinsic spin, electrons have a magnetic moment given by 9.27 × 10−24 joule per tesla.
Orbits and energy levels
Unlike planets orbiting the Sun, electrons cannot be at any arbitrary distance from the nucleus; they can exist only in certain specific locations called allowed orbits. This property, first explained by the Danish physicist Niels Bohr in 1913, is another result of quantum mechanics—specifically, the requirement that the angular momentum of an electron in orbit, like everything else in the quantum world, come in discrete bundles called quanta.
Electron shells
In the quantum mechanical version of the Bohr atomic model, each of the allowed electron orbits is assigned a quantum number n that runs from 1 (for the orbit closest to the nucleus) to infinity (for orbits very far from the nucleus). All of the orbitals that have the same value of n make up a shell. Inside each shell there may be subshells corresponding to different rates of rotation and orientation of orbitals and the spin directions of the electrons. In general, the farther away from the nucleus a shell is, the more subshells it will have, as the sketch below illustrates.
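To make the shell bookkeeping concrete, here is a minimal Python sketch (illustrative, not from the source): shell n contains subshells l = 0, …, n − 1, conventionally labeled s, p, d, f, …, and each subshell holds at most 2(2l + 1) electrons, so a whole shell holds 2n².

```python
# Enumerate the subshells of each shell n and their electron capacities,
# 2(2l + 1), which sum to the familiar shell capacity 2n^2.
SUBSHELL_LETTERS = "spdfgh"

for n in range(1, 5):                       # shells n = 1..4
    subshells = [(l, 2 * (2 * l + 1)) for l in range(n)]
    labels = [f"{n}{SUBSHELL_LETTERS[l]}({cap})" for l, cap in subshells]
    print(f"n={n}: {' '.join(labels)}  total={sum(c for _, c in subshells)}")
```

Running it prints 1s(2); 2s(2) 2p(6); 3s(2) 3p(6) 3d(10); and so on, with totals 2, 8, 18, 32.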
Atomic bonds
● Electrons can be transferred from one atom to another.
● Electrons can be shared between neighbouring atoms.
● Electrons can be shared with all atoms in a material.
Conductors and insulators
The exact opposite situation obtains in materials, such as plastics and ceramics, in which the electrons are all locked into ionic or covalent bonds. When these kinds of materials are placed between the poles of a battery, no current flows—there are simply no electrons free to move. Such materials are called insulators.
Magnetic properties
The nucleus
Nuclear forces
The primary constituents of the nucleus are the proton and the neutron, which have approximately equal mass and are much more massive than the electron. For reference, the accepted mass of the proton is 1.6726231 × 10−24 gram, while that of the neutron is 1.6749286 × 10−24 gram. The charge on the proton is equal in magnitude to that on the electron but is opposite in sign, while the neutron has no electrical charge. Both particles have spin 1/2 and are therefore fermions and subject to the Pauli exclusion principle. Both also have intrinsic magnetic fields. The magnetic moment of the proton is 1.410606633 × 10−26 joule per tesla, while that of the neutron is 0.9662364 × 10−26 joule per tesla.
Nuclear shell model
Many models describe the way protons and neutrons are arranged inside a nucleus. One of the most successful and simple to understand is the shell model. In this model the protons and neutrons occupy separate systems of shells, analogous to the shells in which electrons are found outside the nucleus. From light to heavy nuclei, the proton and neutron shells are filled (separately) in much the same way as electron shells are filled in an atom.
When a nucleus forms from protons and neutrons, an interesting regularity can be seen: the mass of the nucleus is slightly less than the sum of the masses of the constituent protons and neutrons. This consistent discrepancy is not large—typically only a fraction of a percent—but it is significant. By Albert Einstein's principles of relativity, this small mass deficit can be converted into energy via the equation E = mc2. Thus, in order to break a nucleus into its constituent protons and neutrons, energy must be supplied to make up this mass deficit. The energy corresponding to the mass deficit is called the binding energy of the nucleus, and, as the name suggests, it represents the energy required to tie the nucleus together. The binding energy varies across the periodic table and is at a maximum for iron, which is thus the most stable element.
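As a worked example of the mass-deficit arithmetic, the sketch below computes the binding energy of helium-4 from the proton and neutron masses quoted above; the alpha-particle mass used is an assumed literature value, not given in the text.

```python
# Binding energy of helium-4 from the mass deficit, Delta m * c^2.
M_PROTON = 1.6726231e-24    # g (value quoted in the text)
M_NEUTRON = 1.6749286e-24   # g (value quoted in the text)
M_ALPHA = 6.6446573e-24     # g, assumed mass of the helium-4 nucleus
C = 2.99792458e10           # speed of light, cm/s (so g·cm²/s² = erg)

deficit = 2 * M_PROTON + 2 * M_NEUTRON - M_ALPHA   # mass deficit in grams
binding_mev = deficit * C**2 / 1.602176634e-6      # 1 MeV = 1.602...e-6 erg
print(f"deficit = {deficit:.3e} g ({deficit / M_ALPHA:.2%} of the nuclear mass)")
print(f"binding energy = {binding_mev:.1f} MeV")   # about 28 MeV
```

The deficit comes out to roughly 0.76 percent of the nuclear mass, matching the "fraction of a percent" stated above.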
James Trefil
Development of atomic theory
The concept of the atom that Western scientists accepted in broad outline from the 1600s until about 1900 originated with Greek philosophers in the 5th century BC. Their speculation about a hard, indivisible fundamental particle of nature was replaced slowly by a scientific theory supported by experiment and mathematical deduction. It was 2,000 years before modern physicists realized that the atom is indeed divisible and that it is not hard, solid, or immutable.
The atomic philosophy of the early Greeks
The philosopher Epicurus of Samos (341–270 BC) used Democritus's ideas to try to quiet the fears of superstitious Greeks. According to Epicurus's materialistic philosophy, the entire universe was composed exclusively of atoms and void, and so even the gods were subject to natural laws.
Most of what is known about the atomic philosophy of the early Greeks comes from Aristotle's attacks on it and from a long poem, De rerum natura (“On the Nature of Things”), which the Latin poet and philosopher Titus Lucretius Carus (c. 95–55 BC) wrote to popularize its ideas. The Greek atomic theory is significant historically and philosophically, but it has no scientific value. It was not based on observations of nature, measurements, tests, or experiments. Instead, the Greeks used mathematics and reason almost exclusively when they wrote about physics. Like the later theologians of the Middle Ages, they wanted an all-encompassing theory to explain the universe, not merely a detailed experimental view of a tiny portion of it. Science constituted only one aspect of their broad philosophical system. Thus, Plato and Aristotle attacked Democritus's atomic theory on philosophical grounds rather than on scientific ones. Plato valued abstract ideas more than the physical world and rejected the notion that attributes such as goodness and beauty were “mechanical manifestations of material atoms.” Where Democritus believed that matter could not move through space without a vacuum and that light was the rapid movement of particles through a void, Aristotle rejected the existence of vacuums because he could not conceive of bodies falling equally fast through a void. Aristotle's conception prevailed in medieval Christian Europe; its science was based on revelation and reason, and the Roman Catholic theologians rejected Democritus as materialistic and atheistic.
The emergence of experimental science
De rerum natura, which was rediscovered in the 15th century, helped fuel a 17th-century debate between orthodox Aristotelian views and the new experimental science. The poem was printed in 1649 and popularized by Pierre Gassendi, a French priest who tried to separate Epicurus's atomism from its materialistic background by arguing that God created atoms.
Soon after the Italian scientist Galileo Galilei expressed his belief that vacuums can exist (1638), scientists began studying the properties of air and partial vacuums to test the relative merits of Aristotelian orthodoxy and the atomic theory. The experimental evidence about air was only gradually separated from this philosophical controversy.
The Anglo-Irish chemist Robert Boyle began his systematic study of air in 1658 after he learned that Otto von Guericke, a German physicist and engineer, had invented an improved air pump four years earlier. In 1662 Boyle published the first physical law expressed in the form of an equation that describes the functional dependence of two variable quantities. This formulation became known as Boyle's law. From the beginning, Boyle wanted to analyze the elasticity of air quantitatively, not just qualitatively, and to separate the particular experimental problem about air's “spring” from the surrounding philosophical issues. Pouring mercury into the open end of a closed J-shaped tube, Boyle forced the air in the short side of the tube to contract under the pressure of the mercury on top. By doubling the height of the mercury column, he roughly doubled the pressure and halved the volume of air. By tripling the pressure, he cut the volume of air to a third, and so on.
This behaviour can be formulated mathematically in the relation PV = P′V′, where P and V are the pressure and volume under one set of conditions and P′ and V′ represent them under different conditions. Boyle's law says that pressure and volume are inversely related for a given quantity of gas. Although it is only approximately true for real gases, Boyle's law is an extremely useful idealization that played an important role in the development of atomic theory.
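A minimal numerical illustration of the inverse relation; the starting pressure and volume are invented for the example.

```python
# Boyle's law, PV = P'V': for a fixed quantity of gas at constant temperature,
# volume varies inversely with pressure.
def volume_at(p1, v1, p2):
    """Volume after an isothermal pressure change from p1 to p2."""
    return p1 * v1 / p2

v1 = 1.0  # one litre of air at 1 atmosphere (assumed)
for factor in (2, 3, 4):
    print(f"at {factor} atm: {volume_at(1.0, v1, float(factor)):.3f} L")
# doubling the pressure halves the volume, tripling cuts it to a third, ...
```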
Soon after his air-pressure experiments, Boyle wrote that all matter is composed of solid particles arranged into molecules to give material its different properties. He explained that all things are
In France Boyle's law is called Mariotte's law after the physicist Edme Mariotte, who discovered the empirical relationship independently in 1676. Mariotte realized that the law holds true only under constant temperatures; otherwise, the volume of gas expands when heated or contracts when cooled.
Forty years later Isaac Newton expressed a typical 18th-century view of the atom that was similar to that of Democritus, Gassendi, and Boyle. In the last query in his book Opticks (1704), Newton stated:
All these things being considered, it seems probable to me that God in the Beginning form'd Matter in solid, massy, hard, impenetrable, moveable Particles, of such Sizes and Figures, and with such other Properties, and in such Proportion to Space, as most conduced to the End for which he form'd them; and that these primitive Particles being Solids, are incomparably harder than any porous Bodies compounded of them; even so very hard, as never to wear or break in pieces; no ordinary Power being able to divide what God himself made one in the first Creation.
By the end of the 18th century, chemists were just beginning to learn how chemicals combine. In 1794 Joseph-Louis Proust of France published his law of definite proportions (also known as Proust's law). He stated that the components of chemical compounds always combine in the same proportions by weight. For example, Proust found that no matter where he got his samples of the compound copper carbonate, they were composed by weight of five parts copper, four parts oxygen, and one part carbon.
The beginnings of modern atomic theory
Experimental foundation of atomic chemistry
The English chemist and physicist John Dalton extended Proust's work and converted the atomic philosophy of the Greeks into a scientific theory between 1803 and 1808. His book A New System of Chemical Philosophy (Part I, 1808; Part II, 1810) was the first application of atomic theory to chemistry. It provided a physical picture of how elements combine to form compounds and a phenomenological reason for believing that atoms exist. His work, together with that of Joseph-Louis Gay-Lussac of France and Amedeo Avogadro of Italy, provided the experimental foundation of atomic chemistry.
On the basis of the law of definite proportions, Dalton deduced the law of multiple proportions, which stated that when two elements form more than one compound by combining in more than one proportion by weight, the weight of one element in one of the compounds is in simple, integer ratios to its weights in the other compounds. For example, Dalton knew that oxygen and carbon can combine to form two different compounds and that carbon dioxide (CO2) contains twice as much oxygen by weight as carbon monoxide (CO). In this case the ratio of oxygen in one compound to the amount of oxygen in the other is the simple integer ratio 2:1. Although Dalton called his theory “modern” to differentiate it from Democritus's philosophy, he retained the Greek term atom to honour the ancients.
Gay-Lussac soon took the relationship between chemical masses implied by Dalton's atomic theory and expanded it to volumetric relationships of gases. In 1809 he published two observations about gases that have come to be known as Gay-Lussac's law of combining gases. The first part of the law says that when gases combine chemically, they do so in numerically simple volume ratios. Gay-Lussac illustrated this part of his law with three oxides of nitrogen. The compound NO has equal parts of nitrogen and oxygen by volume. Similarly, in the compound N2O the two parts by volume of nitrogen combine with one part of oxygen. He found corresponding volumes of nitrogen and oxygen in NO2. Thus, Gay-Lussac's law relates volumes of the chemical constituents within a compound, unlike Dalton's law of multiple proportions, which relates only one constituent of a compound with the same constituent in other compounds.
The second part of Gay-Lussac's law states that if gases combine to form gases, the volumes of the products are also in simple numerical ratios to the volume of the original gases. This part of the law was illustrated by the combination of carbon monoxide and oxygen to form carbon dioxide. Gay-Lussac noted that the volume of the carbon dioxide is equal to the volume of carbon monoxide and is twice the volume of oxygen. He did not realize, however, that the reason only half as much oxygen is needed is that the oxygen molecule splits in two to give a single atom to each molecule of carbon monoxide. In his Mémoire sur la combinaison des substances gazeuses, les unes avec les autres (1809; “Memoir on the Combination of Gaseous Substances with Each Other”), Gay-Lussac wrote:
Gay-Lussac's work raised the question of whether atoms differ from molecules and, if so, how many atoms and molecules are in a volume of gas. Amedeo Avogadro, building on Dalton's efforts, solved the puzzle, but his work was ignored for 50 years. In 1811 Avogadro proposed two hypotheses: (1) The atoms of elemental gases may be joined together in molecules rather than existing as separate atoms, as Dalton believed. (2) Equal volumes of gases contain equal numbers of molecules. These hypotheses explained why only half a volume of oxygen is necessary to combine with a volume of carbon monoxide to form carbon dioxide. Each oxygen molecule has two atoms, and each atom of oxygen joins one molecule of carbon monoxide.
Until the early 1860s, however, the allegiance of chemists to another concept espoused by the eminent Swedish chemist Jöns Jacob Berzelius blocked acceptance of Avogadro's ideas. (Berzelius was influential among chemists because he had determined the atomic weights of many elements extremely accurately.) Berzelius contended incorrectly that all atoms of a similar element repel each other because they have the same electric charge. He thought that only atoms with opposite charges could combine to form molecules.
Because early chemists did not know how many atoms were in a molecule, their chemical notation systems were in a state of chaos by the mid-19th century. Berzelius and his followers, for example, used the general formula MO for the chief metallic oxides, while others assigned the formula used today, M2O. A single formula stood for different substances, depending on the chemist: H2O2 was water or hydrogen peroxide; C2H4 was methane or ethylene. Proponents of the system used today based their chemical notation on an empirical law formulated in 1819 by the French scientists Pierre-Louis Dulong and Alexis-Thérèse Petit concerning the specific heat of elements. According to the Dulong-Petit law, the specific heat of all elements is the same on a per atom basis. This law, however, was found to have many exceptions and was not fully understood until the development of quantum theory in the 20th century.
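The Dulong-Petit rule can be checked numerically: multiplying a rough room-temperature specific heat by the atomic weight should give roughly the same per-mole value, about 3R, for every element. The specific heats below are approximate textbook values, not taken from the source.

```python
# Dulong-Petit law: per-atom (per-mole) heat capacity is roughly constant,
# (specific heat per gram) x (atomic weight) ~ 3R ~ 25 J/(mol K).
R = 8.314  # gas constant, J/(mol K)
elements = {            # symbol: (specific heat J/(g K), atomic weight g/mol)
    "Al": (0.897, 26.98),
    "Fe": (0.449, 55.85),
    "Cu": (0.385, 63.55),
    "Pb": (0.129, 207.2),
}
for sym, (c, m) in elements.items():
    print(f"{sym}: c*M = {c * m:5.1f} J/(mol K)  vs  3R = {3 * R:.1f}")
```

All four products land near 25 J/(mol K), which is how 19th-century chemists used the law to cross-check atomic weights.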
To resolve such problems of chemical notation, the Sicilian chemist Stanislao Cannizzaro revived Avogadro's ideas in 1858 and expounded them at the First International Chemical Congress, which met in Karlsruhe, Germany, in 1860. Lothar Meyer, a noted German chemistry professor, wrote later that when he heard Avogadro's theory at the congress, “It was as though scales fell from my eyes, doubt vanished, and was replaced by a feeling of peaceful certainty.” Within a few years, Avogadro's hypotheses were widely accepted in the world of chemistry.
Atomic weights and the periodic table
As more and more elements were discovered during the 19th century, scientists began to wonder how the physical properties of the elements were related to their atomic weights. During the 1860s several schemes were suggested. The Russian chemist Dmitry Ivanovich Mendeleyev based his system on the atomic weights of the elements as determined by Avogadro's theory of diatomic molecules. In his paper of 1869 introducing the periodic law, he credited Cannizzaro for using “unshakeable and indubitable” methods to determine atomic weights: “The elements, if arranged according to their atomic weights, show a distinct periodicity of their properties.… Elements exhibiting similarities in their chemical behavior have atomic weights which are approximately equal (as in the case of Pt, Ir, Os) or they possess atomic weights which increase in a uniform manner (as in the case of K, Rb, Cs).”
Skipping hydrogen because it is anomalous, Mendeleyev arranged the 63 elements known to exist at the time into six groups according to valence. Valence, which is the combining power of an element, determines the proportions of the elements in a compound. For example, H2O combines oxygen with a valence of 2 and hydrogen with a valence of 1. Recognizing that chemical qualities change gradually as atomic weight increases, Mendeleyev predicted that a new element must exist wherever there was a gap in atomic weights between adjacent elements. His system was thus a research tool and not merely a system of classification. Mendeleyev's periodic table raised an important question, however, for future atomic theory to answer: Where does the pattern of atomic weights come from?
Daniel Bernoulli, a Swiss mathematician and scientist, worked out the first quantitative mathematical treatment of the kinetic theory in 1738 by picturing gases as consisting of an enormous number of particles in very fast, chaotic motion. He derived Boyle's law by assuming that gas pressure is caused by the direct impact of particles on the walls of their container. He understood the difference between heat and temperature, realizing that heat makes gas particles move faster and that temperature merely measures the propensity of heat to flow from one body to another. In spite of its accuracy, Bernoulli's theory remained virtually unknown during the 18th century and early 19th century for several reasons. First, chemistry was more popular than physics among scientists of the day, and Bernoulli's theory involved mathematics. Second, Newton's reputation ensured the success of his more-comprehensible theory that gas atoms repel one another. Finally, Joseph Black, another noted British scientist, developed the caloric theory of heat, which proposed that heat was an invisible substance permeating matter. At the time, the fact that heat could be transmitted by light seemed a persuasive argument that heat and motion had nothing to do with each other.
John Herapath, an English amateur physicist ignored by his contemporaries, published his version of the kinetic theory in 1821. He also derived an empirical relation akin to Boyle's law but did not correctly understand the role of heat and temperature in determining the pressure of a gas.
During the late 1850s, a decade after John James Waterston had formulated his law, the scientific community was finally ready to accept a kinetic theory of gases. The studies of heat undertaken by the English physicist James Prescott Joule during the 1840s had shown that heat is a form of energy. This work, together with the law of the conservation of energy that he helped to establish, had persuaded scientists to discard the caloric theory by the mid-1850s. The caloric theory had required that a substance contain a definite amount of caloric (i.e., a hypothetical weightless fluid) to be turned into heat; however, experiments showed that any amount of heat can be generated in a substance by putting enough energy into it. Thus, there was no point to hypothesizing such a special fluid as caloric.
At first, after the collapse of the caloric theory, physicists had nothing with which to replace it. Joule, however, discovered Herapath's kinetic theory and used it in 1851 to calculate the velocity of hydrogen molecules. Then the German physicist Rudolf Clausius developed the kinetic theory mathematically in 1857, and the scientific world took note. Clausius and two other physicists, the Scot James Clerk Maxwell and the Austrian Ludwig Eduard Boltzmann (who developed the kinetic theory of gases in the 1860s), introduced sophisticated mathematics into physics for the first time since Newton. In his 1860 paper Illustrations of the Dynamical Theory of Gases, Maxwell used probability theory to produce his famous distribution function for the velocities of gas molecules. Employing Newtonian laws of mechanics, he also provided a mathematical basis for Avogadro's theory. Maxwell, Clausius, and Boltzmann assumed that gas particles were in constant motion, that they were tiny compared with the space between them, and that their interactions were very brief. They then related the motion of the particles to pressure, volume, and temperature. Interestingly, none of the three committed himself on the nature of the particles.
Studies of the properties of atoms
Size of atoms
The first modern estimates of the size of atoms and the numbers of atoms in a given volume were made by the German chemist Joseph Loschmidt in 1865. Loschmidt used the results of kinetic theory and some rough estimates to do his calculation. The size of the atoms and the distance between them in the gaseous state are related both to the contraction of gas upon liquefaction and to the mean free path traveled by molecules in a gas. The mean free path, in turn, can be found from the thermal conductivity and diffusion rates in the gas. Loschmidt calculated the size of the atom and the spacing between atoms by finding a solution common to these relationships. His result for Avogadro's number is remarkably close to the present accepted value of about 6.022 × 10²³. The precise definition of Avogadro's number is the number of atoms in 12 grams of the carbon isotope C-12. Loschmidt's result for the diameter of an atom was approximately 10−8 cm.
Much later, in 1908, the French physicist Jean Perrin used Brownian motion to determine Avogadro's number. Brownian motion, first observed in 1827 by the Scottish botanist Robert Brown, is the continuous movement of tiny particles suspended in water. Their movement is caused by the thermal motion of water molecules bumping into the particles. Perrin's argument for determining Avogadro's number makes an analogy between particles in the liquid and molecules in the atmosphere. The thinning of air at high altitudes depends on the balance between the gravitational force pulling the molecules down and their thermal motion forcing them up. The relationship between the weight of the particles and the height of the atmosphere would be the same for Brownian particles suspended in water. Perrin counted particles of gum mastic at different heights in his water sample and inferred the mass of atoms from the rate of decrease. He then divided the result into the molar weight of atoms to determine Avogadro's number. After Perrin, few scientists could disbelieve the existence of atoms.
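Perrin's reasoning reduces to fitting an exponential. In the sketch below the particle counts and the buoyant mass are invented for illustration; the point is only that the decay rate of ln(count) with height yields Boltzmann's constant k, and Avogadro's number then follows as R/k.

```python
# At sedimentation equilibrium the count of suspended particles falls off as
# n(h) ~ exp(-m' g h / kT), with m' the buoyant mass of a grain.
import numpy as np

R_GAS = 8.314               # J/(mol K)
T = 293.0                   # K
m_buoyant = 2.0e-17         # kg, assumed buoyant mass of a gum-mastic grain
g = 9.81                    # m/s^2

heights = np.array([0.0, 10e-6, 20e-6, 30e-6])   # m; counting planes
counts = 1000 * np.exp(-m_buoyant * g * heights / (1.38e-23 * T))  # synthetic data

# The slope of ln(count) vs height is -m' g / kT, which gives k ...
slope = np.polyfit(heights, np.log(counts), 1)[0]
k = -m_buoyant * g / (slope * T)
print(f"N_A = {R_GAS / k:.3e} per mole")          # ... and hence N_A ~ 6e23
```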
Electric properties of atoms
While atomic theory was set back by the failure of scientists to accept simple physical ideas like the diatomic molecule and the kinetic theory of gases, it was also delayed by the preoccupation of physicists with mechanics for almost 200 years, from Newton to the 20th century. Nevertheless, several 19th-century investigators, working in the relatively ignored fields of electricity, magnetism, and optics, provided important clues about the interior of the atom. The studies in electrodynamics made by the English physicist Michael Faraday and those of Maxwell indicated for the first time that something existed apart from palpable matter, and data obtained by Gustav Robert Kirchhoff of Germany about elemental spectral lines raised questions that would be answered only in the 20th century by quantum mechanics.
More significantly, Faraday's work was the first to imply the electrical nature of matter and the existence of subatomic particles and a fundamental unit of charge. Faraday wrote:
Faraday did not, however, conclude that atoms cause electricity.
Light and spectral lines
In 1865 Maxwell unified the laws of electricity and magnetism in his publication A Dynamical Theory of the Electromagnetic Field. In this paper he concluded that light is an electromagnetic wave. His theory was confirmed by the German physicist Heinrich Hertz, who produced radio waves with sparks in 1887. With light understood as an electromagnetic wave, Maxwell's theory could be applied to the emission of light from atoms. The theory failed, however, to describe spectral lines and the fact that atoms do not lose all their energy when they radiate light. The problem was not with Maxwell's theory of light itself but rather with its description of the oscillating electron currents generating light. Only quantum mechanics could explain this behaviour (see below The laws of quantum mechanics).
By far the richest clues about the structure of the atom came from spectral line series. Mounting a particularly fine prism on a telescope, the German physicist and optician Joseph von Fraunhofer had discovered between 1814 and 1824 hundreds of dark lines in the spectrum of the Sun. He labeled the most prominent of these lines with the letters A through G. Together they are now called Fraunhofer lines. A generation later Kirchhoff heated different elements to incandescence in order to study the different coloured vapours emitted. Observing the vapours through a spectroscope, he discovered that each element has a unique and characteristic pattern of spectral lines. Each element produces the same set of identifying lines, even when it is combined chemically with other elements. In 1859 Kirchhoff and the German chemist Robert Wilhelm Bunsen discovered two new elements, cesium and rubidium, by first observing their spectral lines.
Johann Jakob Balmer, a Swiss secondary-school teacher with a penchant for numerology, studied hydrogen's spectral lines and found a constant relationship between the wavelengths of the element's four visible lines. In 1885 he published a generalized mathematical formula for all the lines of hydrogen. The Swedish physicist Johannes Rydberg extended Balmer's work in 1890 and found a general rule applicable to many elements. Soon more series were discovered elsewhere in the spectrum of hydrogen and in the spectra of other elements as well. Stated in terms of the frequency of the light rather than its wavelength, the formula may be expressed ν = cR(1/n₁² − 1/n₂²), where ν is the frequency, c is the speed of light, R is the Rydberg constant, and n₁ and n₂ are positive integers with n₂ > n₁.
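A quick numerical check of the rule, using the standard Rydberg constant for hydrogen, reproduces Balmer's four visible lines:

```python
# Rydberg rule for hydrogen: nu = cR(1/n1^2 - 1/n2^2), or equivalently
# 1/lambda = R(1/n1^2 - 1/n2^2). The Balmer (visible) series has n1 = 2.
R_H = 1.0967758e7   # Rydberg constant for hydrogen, per metre

def wavelength_nm(n1, n2):
    return 1e9 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

for n2 in (3, 4, 5, 6):   # the four visible lines Balmer started from
    print(f"{n2} -> 2: {wavelength_nm(2, n2):.1f} nm")
# ~656.5, 486.3, 434.2, 410.3 nm: red, blue-green, and two violet lines
```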
Discovery of electrons
During the 1880s and '90s scientists searched cathode rays for the carrier of the electrical properties in matter. Their work culminated in the discovery by the English physicist J.J. Thomson of the electron in 1897. The existence of the electron showed that the 2,000-year-old conception of the atom as a homogeneous particle was wrong and that in fact the atom has a complex structure.
Cathode-ray studies began in 1854 when Heinrich Geissler, a glassblower and technical assistant to the German physicist Julius Plücker, improved the vacuum tube. Plücker discovered cathode rays in 1858 by sealing two electrodes inside the tube, evacuating the air, and forcing electric current between the electrodes. He found a green glow on the wall of his glass tube and attributed it to rays emanating from the cathode. In 1869, with better vacuums, Plücker's pupil Johann W. Hittorf saw a shadow cast by an object placed in front of the cathode. The shadow proved that the cathode rays originated from the cathode. The English physicist and chemist William Crookes investigated cathode rays in 1879 and found that they were bent by a magnetic field; the direction of deflection suggested that they were negatively charged particles. As the luminescence did not depend on what gas had been in the vacuum or what metal the electrodes were made of, he surmised that the rays were a property of the electric current itself. As a result of Crookes's work, cathode rays were widely studied, and the tubes came to be called Crookes tubes.
Although Crookes believed that the particles were electrified particles, his work did not settle the issue of whether cathode rays were particles or radiation similar to light. By the late 1880s the controversy over the nature of cathode rays had divided the physics community into two camps. Most French and British physicists, influenced by Crookes, thought that cathode rays were electrically charged particles because they were affected by magnets. Most German physicists, on the other hand, believed that the rays were waves because they traveled in straight lines and were unaffected by gravity. A crucial test of the nature of the cathode rays was how they would be affected by electric fields. Heinrich Hertz, the aforementioned German physicist, reported that the cathode rays were not deflected when they passed between two oppositely charged plates in an 1892 experiment. In England J.J. Thomson thought Hertz's vacuum might have been faulty and that residual gas might have reduced the effect of the electric field on the cathode rays.
In 1909 the American physicist Robert Andrews Millikan greatly improved a method employed by Thomson for measuring the electron charge directly. In his oil-drop experiment, Millikan produced microscopic oil droplets and observed them falling in the space between two electrically charged plates. Some of the droplets became charged and could be suspended by a delicate adjustment of the electric field. Millikan knew the weight of the droplets from their rate of fall when the electric field was turned off. From the balance of the gravitational and electrical forces, he could determine the charge on the droplets. All the measured charges were integral multiples of a quantity that in contemporary units is 1.602 × 10−19 coulomb. Millikan's electron-charge experiment was the first to detect and measure the effect of an individual subatomic particle. Besides confirming the particulate nature of electricity, his experiment also supported previous determinations of Avogadro's number. Avogadro's number times the unit of charge gives Faraday's constant, the amount of charge required to electrolyze one mole of a chemical ion.
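Millikan's inference step, finding the unit that best divides all the measured charges, can be sketched as a small search. The simulated charges and the search grid below are assumptions for illustration, not Millikan's data.

```python
# Given measured droplet charges, find the unit charge of which all are
# (nearly) integer multiples. Charges are simulated as small multiples of e
# with a little measurement noise.
import numpy as np

E_TRUE = 1.602e-19
rng = np.random.default_rng(0)
charges = np.array([rng.integers(1, 6) * E_TRUE * (1 + rng.normal(0, 0.005))
                    for _ in range(20)])

def misfit(e):
    # How far the charges are from integer multiples of the candidate unit e.
    ratios = charges / e
    return np.sum((ratios - np.round(ratios)) ** 2)

candidates = np.linspace(0.5e-19, 2.5e-19, 20001)
best = candidates[np.argmin([misfit(e) for e in candidates])]
print(f"estimated unit charge: {best:.3e} C")   # close to 1.602e-19
```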
Identification of positive ions
Francis William Aston, an English physicist, improved Thomson's technique when he developed the mass spectrograph in 1919. This device spread out the beam of positive ions into a “mass spectrum” of lines similar to the way light is separated into a spectrum. Aston analyzed about 50 elements over the next six years and discovered that most have isotopes.
Discovery of radioactivity
Like Thomson's discovery of the electron, the discovery of radioactivity in uranium by the French physicist Henri Becquerel in 1896 forced scientists to radically change their ideas about atomic structure. Radioactivity demonstrated that the atom was neither indivisible nor immutable. Instead of serving merely as an inert matrix for electrons, the atom could change form and emit an enormous amount of energy. Furthermore, radioactivity itself became an important tool for revealing the interior of the atom.
The German physicist Wilhelm Conrad Röntgen had discovered X-rays in 1895, and Becquerel thought they might be related to fluorescence and phosphorescence, processes in which substances absorb and emit energy as light. In the course of his investigations, Becquerel stored some photographic plates and uranium salts in a desk drawer. Expecting to find the plates only lightly fogged, he developed them and was surprised to find sharp images of the salts. He then began experiments that showed that uranium salts emit a penetrating radiation independent of external influences. Becquerel also demonstrated that the radiation could discharge electrified bodies. In this case discharge means the removal of electric charge, and it is now understood that the radiation, by ionizing molecules of air, allows the air to conduct an electric current. Early studies of radioactivity relied on measuring ionization power or on observing the effects of radiation on photographic plates.
In 1898 the French physicists Pierre and Marie Curie discovered the strongly radioactive elements polonium and radium, which occur naturally in uranium minerals. Marie coined the term radioactivity for the spontaneous emission of ionizing, penetrating rays by certain atoms.
Experiments conducted by the British physicist Ernest Rutherford in 1899 showed that radioactive substances emit more than one kind of radiation. It was determined that part of the radiation is 100 times more penetrating than the rest and can pass through aluminum foil one-fiftieth of a millimetre thick. Rutherford named the less-penetrating emanations alpha rays and the more-powerful ones beta rays, after the first two letters of the Greek alphabet. Investigators who in 1899 found that beta rays were deflected by a magnetic field concluded that they are negatively charged particles similar to cathode rays. In 1903 Rutherford found that alpha rays were deflected slightly in the opposite direction, showing that they are massive, positively charged particles. Much later Rutherford proved that alpha rays are nuclei of helium atoms by collecting the rays in an evacuated tube and detecting the buildup of helium gas over several days.
A third kind of radiation was identified by the French chemist Paul Villard in 1900. Designated as the gamma ray, it is not deflected by magnets and is much more penetrating than alpha particles. Gamma rays were later shown to be a form of electromagnetic radiation, like light or X-rays, but with much shorter wavelengths. Because of these shorter wavelengths, gamma rays have higher frequencies and are even more penetrating than X-rays.
In 1902, while studying the radioactivity of thorium, Rutherford and the English chemist Frederick Soddy discovered that radioactivity was associated with changes inside the atom that transformed thorium into a different element. They found that thorium continually generates a chemically different substance that is intensely radioactive. The radioactivity eventually makes the new element disappear. Watching the process, Rutherford and Soddy formulated the exponential decay law, which states that a fixed fraction of the element will decay in each unit of time. For example, half of the thorium product decays in four days, half the remaining sample in the next four days, and so on.
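The decay law in symbols is N(t) = N₀e^(−λt), with λ = ln 2 divided by the half-life. A short sketch using the four-day figure quoted above:

```python
# Exponential decay: N(t) = N0 * exp(-lambda * t), half-life = ln(2) / lambda.
import math

half_life = 4.0                          # days, the figure quoted in the text
decay_const = math.log(2) / half_life    # the "fixed fraction" per unit time

n0 = 1.0
for t in (0, 4, 8, 12):
    print(f"day {t:2d}: {n0 * math.exp(-decay_const * t):.3f} of the sample left")
# 1.000, 0.500, 0.250, 0.125 -- half remains after each four-day interval
```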
Models of atomic structure
Many physicists distrusted the Rutherford atomic model because it was difficult to reconcile with the chemical behaviour of atoms. The model suggested that the charge on the nucleus was the most important characteristic of the atom, determining its structure. On the other hand, Mendeleyev's periodic table of the elements had been organized according to the atomic masses of the elements, implying that the mass was responsible for the structure and chemical behaviour of atoms.
Moseley's X-ray studies
Henry Gwyn Jeffreys Moseley, a young English physicist killed in World War I, confirmed that the positive charge on the nucleus revealed more about the fundamental structure of the atom than Mendeleyev's atomic mass. Moseley studied the spectral lines emitted by heavy elements in the X-ray region of the electromagnetic spectrum. He built on the work done by several other British physicists—Charles Glover Barkla, who had studied X-rays produced by the impact of electrons on metal plates, and William Bragg and his son Lawrence, who had developed a precise method of using crystals to reflect X-rays and measure their wavelength by diffraction. Moseley applied their method systematically to measure the spectra of X-rays produced by many elements.
Moseley found that each element radiates X-rays of a different and characteristic wavelength. The wavelength and frequency vary in a regular pattern according to the charge on the nucleus. He called this charge the atomic number. In his first experiments, conducted in 1913, Moseley used what was called the K series of X-rays to study the elements up to zinc. The following year he extended this work using another series of X-rays, the L series. Moseley was conducting his research at the same time that the Danish theoretical physicist Niels Bohr was developing his quantum shell model of the atom. The two conferred and shared data as their work progressed, and Moseley framed his equation in terms of Bohr's theory by identifying the K series of X-rays with the most-bound shell in Bohr's theory, the N = 1 shell, and identifying the L series of X-rays with the next shell, N = 2.
Bohr's shell model
Bohr's starting point was to realize that classical mechanics by itself could never explain the atom's stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants—namely, the charges and the masses of the electron and the nucleus—cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck's constant in searching for a theory of the atom.
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (Latin for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck's hypothesis, however, the radiation can be emitted only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck's formula correctly describes radiation from heated bodies. Planck's constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck's constant can be written as h = 6.6 × 10−34 joule∙seconds.
In 1905 Einstein extended Planck's hypothesis by proposing that the radiation itself can carry energy only in quanta. According to Einstein, the energy (E) of the quantum is related to the frequency (ν) of the light by Planck's constant in the formula E = hν. Using Planck's constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized—i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n. In Bohr's model, the radius aₙ of the orbit n is given by the formula aₙ = n²h²ε₀/(πme²), where ε₀ is the electric constant and m and e are the mass and charge of the electron. As Bohr had noticed, the radius of the n = 1 orbit is approximately the same size as an atom.
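For a concrete check, the sketch below evaluates the Bohr radii and the corresponding energies Eₙ = −me⁴/(8ε₀²h²n²) with standard SI constants; the energy formula is the textbook companion of the radius formula above, not spelled out in the text.

```python
# Bohr-model radii a_n = n^2 h^2 eps0 / (pi m e^2) and hydrogen energies
# E_n = -m e^4 / (8 eps0^2 h^2 n^2), evaluated in SI units.
import math

H = 6.62607015e-34       # Planck's constant, J s
EPS0 = 8.8541878128e-12  # electric constant, F/m
M_E = 9.1093837015e-31   # electron mass, kg
E_CH = 1.602176634e-19   # elementary charge, C

def radius(n):
    return n**2 * H**2 * EPS0 / (math.pi * M_E * E_CH**2)

def energy_ev(n):
    return -M_E * E_CH**4 / (8 * EPS0**2 * H**2) / E_CH / n**2

for n in (1, 2, 3):
    print(f"n={n}: a = {radius(n):.3e} m, E = {energy_ev(n):+.2f} eV")
# n=1 gives a ~ 5.3e-11 m, roughly the size of an atom, as Bohr noticed,
# and E = -13.6 eV, the hydrogen ground-state energy.
```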
The usefulness of Bohr's theory extends beyond the hydrogen atom. Bohr himself noted that the formula also applies to the singly ionized helium atom, which, like hydrogen, has a single electron. The nucleus of the helium atom has twice the charge of the hydrogen nucleus, however. In Bohr's formula the charge of the electron is raised to the fourth power. Two of those powers stem from the charge on the nucleus; the other two come from the charge on the electron itself. Bohr modified his formula for the hydrogen atom to fit the helium atom by doubling the charge on the nucleus. Moseley applied Bohr's formula with an arbitrary atomic charge Z to explain the K- and L-series X-ray spectra of heavier atoms. The German physicists James Franck and Gustav Hertz confirmed the existence of quantum states in atoms in experiments reported in 1914. They made atoms absorb energy by bombarding them with electrons. The atoms would only absorb discrete amounts of energy from the electron beam. When the energy of an electron was below the threshold for producing an excited state, the atom would not absorb any energy.
Bohr's theory had major drawbacks, however. Except for the spectra of X-rays in the K and L series, it could not explain properties of atoms having more than one electron. The binding energy of the helium atom, which has two electrons, was not understood until the development of quantum mechanics. Several features of the spectrum were inexplicable even in the hydrogen atom. High-resolution spectroscopy shows that the individual spectral lines of hydrogen are divided into several closely spaced fine lines. In a magnetic field the lines split even farther apart. The German physicist Arnold Sommerfeld modified Bohr's theory by quantizing the shapes and orientations of orbits to introduce additional energy levels corresponding to the fine spectral lines.
Spectra in magnetic fields displayed additional splittings that showed that the description of the electrons in atoms was still incomplete. In 1925 Samuel Abraham Goudsmit and George Eugene Uhlenbeck, two graduate students in physics at the University of Leiden in The Netherlands, added a quantum number to account for the division of some spectral lines into more subsidiary lines than can be explained with the original quantum numbers. Goudsmit and Uhlenbeck postulated that an electron has an internal spinning motion and that the corresponding angular momentum is one-half of the orbital angular momentum quantum. Independently, the Austrian-born physicist Wolfgang Pauli also suggested adding a two-valued quantum number for electrons, but for different reasons. He needed this additional quantum number to formulate his exclusion principle, which serves as the atomic basis of the periodic table and the chemical behaviour of the elements. According to the Pauli exclusion principle, one electron at most can occupy an orbit, taking into account all the quantum numbers. Pauli was led to this principle by the observation that an alkali metal atom in a magnetic field has a number of orbits in the shell equal to the number of electrons that must be added to make the next noble gas. These numbers are twice the number of orbits available if the angular momentum and its orientation are considered alone.
The laws of quantum mechanics
The duality between the wave and particle nature of light was highlighted by the American physicist Arthur Holly Compton in an X-ray scattering experiment conducted in 1922. Compton sent a beam of X-rays through a target material and observed that a small part of the beam was deflected off to the sides at various angles. He found that the scattered X-rays had longer wavelengths than the original beam; the change could be explained only by assuming that the X-rays scattered from the electrons in the target as if the X-rays were particles with discrete amounts of energy and momentum. When X-rays are scattered, their momentum is partially transferred to the electrons. The recoil electron takes some energy from an X-ray, and as a result the X-ray frequency is shifted. Both the discrete amount of momentum and the frequency shift of the light scattering are completely at variance with classical electromagnetic theory, but they are explained by Einstein's quantum formula.
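Compton's frequency shift is usually quoted as a wavelength shift, Δλ = (h/mₑc)(1 − cos θ). That standard formula is not spelled out in the text above, so the sketch below should be read as the textbook form of his result:

```python
# Compton scattering: wavelength shift of an X-ray photon scattered through
# angle theta is d_lambda = (h / m_e c) (1 - cos theta).
import math

H = 6.62607015e-34       # Planck's constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
C = 2.99792458e8         # speed of light, m/s

compton_wavelength = H / (M_E * C)   # ~2.43e-12 m

for theta_deg in (45, 90, 180):
    shift = compton_wavelength * (1 - math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg:3d} deg: shift = {shift:.3e} m")
# the shift is largest (two Compton wavelengths) for backscattering
```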
De Broglie's conception was an inspired one, but at the time it had no empirical or theoretical foundation. The Austrian physicist Erwin Schrödinger had to supply the theory.
Schrödinger's wave equation
Schrödinger postulated that the electrons in an atom should be treated like the waves on a drumhead. The different energy levels of atoms are identified with the simple vibrational modes of the wave equation. The equation is solved to find these modes, and then the energy of an electron is obtained from the frequency of the mode and from Einstein's quantum formula, E = hν. Schrödinger's wave equation gives the same energies as Bohr's original formula but with a much more precise description of an electron in an atom. The lowest energy level of the hydrogen atom, called the ground state, is analogous to the motion in the lowest vibrational mode of the drumhead. In the atom the electron wave is uniform in all directions from the nucleus, is peaked at the centre of the atom, and has the same phase everywhere. Higher energy levels in the atom have waves that are peaked at greater distances from the nucleus. Like the vibrations in the drumhead, the waves have peaks and nodes that may form a complex shape. The different shapes of the wave pattern are related to the quantum numbers of the energy levels, including the quantum numbers for angular momentum and its orientation.
The year before Schrödinger produced his wave theory, the German physicist Werner Heisenberg published a mathematically equivalent system to describe energy levels and their transitions. In Heisenberg's method, properties of atoms are described by arrays of numbers called matrices, which are combined with special rules of multiplication. Today physicists use both wave functions and matrices, depending on the application. Schrödinger's picture is more useful for describing continuous electron distributions because the wave function can be more easily visualized. Matrix methods are more useful for numerical analysis calculations with computers and for systems that can be described in terms of a finite number of states, such as the spin states of the electron.
In 1929 the Norwegian physicist Egil Hylleraas applied the Schrödinger equation to the helium atom with its two electrons. He obtained only an approximate solution, but his energy calculation was quite accurate. With Hylleraas's explanation of the two-electron atom, physicists realized that the Schrödinger equation could be a powerful mathematical tool for describing nature on the atomic level, even if exact solutions could not be obtained.
Antiparticles and the electron's spin
The English physicist Paul Dirac introduced a new equation for the electron in 1928. Because the Schrödinger equation does not satisfy the principles of relativity, it can be used to describe only those phenomena in which the particles move much more slowly than the velocity of light. In order to satisfy the conditions of relativity, Dirac was forced to postulate that the electron would have a particular form of wave function with four independent components, some of which describe the electron's spin. Thus, from the very beginning, the Dirac theory incorporated the electron's spin properties. The remaining components allowed additional states of the electron that had not yet been observed. Dirac interpreted them as antiparticles, with a charge opposite to that of electrons. The discovery of the positron in 1932 by the American physicist Carl David Anderson proved the existence of antiparticles and was a triumph for Dirac's theory.
Advances in nuclear and subatomic physics
The 1920s witnessed further advances in nuclear physics with Rutherford's discovery of induced radioactivity. Bombardment of light nuclei by alpha particles produced new radioactive nuclei. In 1928 the Russian-born American physicist George Gamow explained the lifetimes in alpha radioactivity using the Schrödinger equation. His explanation used a property of quantum mechanics that allows particles to “tunnel” through regions where classical physics would forbid them to be.
Structure of the nucleus
The constitution of the nucleus was poorly understood at the time because the only known particles were the electron and the proton. It had been established that nuclei are typically about twice as heavy as can be accounted for by protons alone. A consistent theory was impossible until the English physicist James Chadwick discovered the neutron in 1932. He found that alpha particles reacted with beryllium nuclei to eject neutral particles with nearly the same mass as protons. Almost all nuclear phenomena can be understood in terms of a nucleus composed of neutrons and protons. Surprisingly, the neutrons and protons in the nucleus move to a large extent in orbitals as though their wave functions were independent of one another. Each neutron or proton orbital is described by a stationary wave pattern with peaks and nodes and angular momentum quantum numbers. The theory of the nucleus based on these orbitals is called the shell nuclear model. It was introduced independently in 1948 by Maria Goeppert Mayer of the United States and Johannes Hans Daniel Jensen of West Germany, and it developed in succeeding decades into a comprehensive theory of the nucleus.
Nuclear fission was discovered by the German chemists Otto Hahn and Fritz Strassmann in 1938 during the course of experiments initiated and explained by Austrian physicist Lise Meitner. In fission a uranium nucleus captures a neutron and gains enough energy to trigger the inherent instability of the nucleus, which splits into two lighter nuclei of roughly equal size. The fission process releases more neutrons, which can be used to produce further fissions. The first nuclear reactor, a device designed to permit controlled fission chain reactions, was constructed at the University of Chicago under Fermi's direction, and the first self-sustaining chain reaction was achieved in this reactor in 1942. In 1945 American scientists produced the first fission bomb, also called an atomic bomb, which used uncontrolled fission reactions in either uranium or the artificial element plutonium. In 1952 American scientists used a fission explosion to ignite a fusion reaction in which isotopes of hydrogen combined thermally into heavier helium nuclei. This was the first thermonuclear bomb, also called an H-bomb, a weapon that can release hundreds or thousands of times more energy than a fission bomb.
Quantum field theory and the standard model
The theoretical impasse was broken as a result of a measurement carried out in 1946 and 1947 by the American physicist Willis Eugene Lamb, Jr. Using microwave techniques developed during World War II, he showed that the hydrogen spectrum is actually about one-tenth of one percent different from Dirac's theoretical picture. Later the German-born American physicist Polykarp Kusch found a similar anomaly in the size of the magnetic moment of the electron. Lamb's results were announced at a famous Shelter Island Conference held in the United States in 1947; the German-born American physicist Hans Bethe and others realized that the so-called Lamb shift was probably caused by electrons and field quanta that may be created from the vacuum. The previous mathematical difficulties were overcome by Richard Feynman, Julian Schwinger, and Tomonaga Shin'ichirō, who shared the 1965 Nobel Prize for Physics, and Freeman Dyson, who showed that their various approaches were mathematically identical. The new theory, called quantum electrodynamics, was found to explain all the measurements to very high precision. Apparently, quantum electrodynamics provides a complete theory of how electrons behave under electromagnetism.
Beginning in the 1960s, similarities were found between the weak force and electromagnetism. Sheldon Glashow, Abdus Salam, and Steven Weinberg combined the two forces in the electroweak theory, for which they shared the Nobel Prize for Physics in 1979. In addition to the photon, three field quanta were predicted as additional carriers of the force—the W particle, the Z particle, and the Higgs particle. The discoveries of the W and Z particles in 1983, with correctly predicted masses, established the validity of the electroweak theory. Physicists are still searching for the much heavier Higgs particle, whose exact mass is not specified by the theory.
In all, hundreds of subatomic particles have been discovered since the first unstable particle, the muon, was identified in cosmic rays in the 1930s. By the 1960s patterns emerged in the properties and relationships among subatomic particles that led to the quark theory. Combining the electroweak theory and the quark theory, a theoretical framework called the standard model was constructed; it includes all known particles and field quanta. In the Standard Model there are two broad categories of particles, the leptons and the quarks. Leptons include electrons, muons, and neutrinos, and, aside from gravity, they interact only with the electroweak force.
George F. Bertsch, Sharon Bertsch McGrayne
Additional Reading
General history
Hans Christian von Baeyer, Taming the Atom: The Emergence of the Visible Microworld (1992, reissued 2000), is an engaging and clearly written history of the atom, from the Greeks to modern laboratories. James Trefil, From Atoms to Quarks (1980, reissued 1994), is a history of the quest for the ultimate nature of matter. Andrew G. van Melsen, From Atomos to Atoms: The History of the Concept Atom, trans. from the Dutch by Henry J. Koren (1952, reissued 2004; originally published 1949), is an exhaustive study of the history of the atom from a philosophical point of view. Steven Weinberg, The Discovery of Subatomic Particles, rev. ed. (2003), is a concise historical exposition emphasizing 19th- and early 20th-century discoveries. Helge Kragh, Quantum Generations: A History of Physics in the Twentieth Century (1999, reissued 2002), is a detailed one-volume history of physics in the 20th century. Henry A. Boorse and Lloyd Motz (eds.), The World of the Atom, 2 vol. (1966), containing reprints of many original papers influential in the development of thought on the atom, is highly recommended for its lively and thorough commentary.
Atomic components and properties
Raymond A. Serway, Clement J. Moses, and Curt A. Moyer, Modern Physics, 3rd ed. (2005), is a standard introductory textbook. Linus Pauling, The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, 3rd ed. (1960, reissued 1993), gives a classic account of the author's valence bond theory. Roger L. DeKock and Harry B. Gray, Chemical Structure and Bonding, 2nd ed. (1989), is an excellent introductory textbook for chemistry undergraduates. Robert Eisberg and Robert Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, 2nd ed. (1985), is for readers with a calculus background but no previous quantum mechanics. Bogdan Povh et al., Particles and Nuclei: An Introduction to the Physical Concepts, trans. from the German by Martin Lavelle, 4th ed. (2004), covers nuclear properties, their reactions, and the basics of the Standard Model in more detail, with a minimum of mathematical equations.
Sharon Bertsch McGrayne, George F. Bertsch, James Trefil
Chemistry LibreTexts
5.6: The Harmonic-Oscillator Wavefunctions involve Hermite Polynomials
Learning Objectives
For a diatomic molecule, there is only one vibrational mode, so there will be only a single set of vibrational wavefunctions with associated energies for this system. For polyatomic molecules, there will be a set of wavefunctions and associated energies for each vibrational mode.
The Hamiltonian operator, the general quantum mechanical operator for energy, includes both a kinetic energy term, \(\hat {T}\), and a potential energy term, \(\hat {V}\).
\[ \hat {H} = \hat {T} + \hat {V} \label {5.6.2}\]
For the free particle and the particle in a box, the potential energy term used in the Hamiltonian was zero. The classical expression for the energy of a harmonic oscillator, \(E = \frac{1}{2 \mu} p_q^2 + \frac{k}{2} q^2\), includes both a kinetic energy term and the harmonic potential energy term. Transforming this expression into the corresponding Hamiltonian operator gives,
\[\hat {H} (q) = \dfrac {1}{2 \mu} \hat {P}^2_q + \dfrac {k}{2} \hat {q}^2 \label {5.6.3}\]
where \(\hat {q}\) is the operator for the length of the normal coordinate, and \(\hat {P}_q\) is the momentum operator associated with the normal coordinate. \(\mu\) is an effective (reduced) mass, and \(k\) is an effective force constant, and these quantities will be different for each of the normal modes (vibrations).
Substituting the definitions for the operators yields
\[\hat {H} (q) = -\dfrac {\hbar ^2}{2\mu} \dfrac {d^2}{dq^2} + \dfrac {k}{2} q^2 \label {5.6.4}\]
since the operator for position or displacement is just multiplication by the position or displacement. The time-independent Schrödinger equation then becomes
\[- \dfrac {\hbar ^2}{2\mu} \dfrac {d^2 \psi _v (q)}{dq^2} + \dfrac {k}{2} q^2 \psi _v (q) = E_v \psi _v (q) \label {15.6.5}\]
or upon rearranging
\[ \dfrac {d^2 \psi _v (q)}{dq^2} + \dfrac {2 \mu}{\hbar ^2} \left ( E_v - \dfrac {k}{2} q^2 \right ) \psi _v (q) = 0 \label {15.6.6}\]
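Although the analytic solution below takes some work, the eigenvalues of Equation \(\ref{15.6.5}\) can be previewed numerically. Here is a rough finite-difference sketch in dimensionless units \(\hbar = \mu = k = 1\); the grid size and box length are illustrative choices, not part of the text:

```python
# A rough finite-difference check (dimensionless units hbar = mu = k = 1;
# grid size and box length are illustrative choices) that the lowest
# eigenvalues of  -1/2 psi'' + q^2/2 psi = E psi  come out near v + 1/2.
import numpy as np

L, npts = 10.0, 1000
q = np.linspace(-L, L, npts)
h = q[1] - q[0]

# three-point stencil for -1/2 d^2/dq^2, plus the harmonic potential on the diagonal
main = 1.0 / h**2 + q**2 / 2.0
off = -0.5 / h**2 * np.ones(npts - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(np.round(E[:5], 4))   # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```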
This differential equation is not straightforward to solve. Rather than fully develop the details of the solution, we will outline the method used because it represents a common strategy for solving differential equations. The steps taken to solve Equation \(\ref{15.6.6}\) are to simplify the equation by collecting constants in the parameter \(\beta\)
\[ \beta ^2 = \dfrac {\hbar}{\sqrt { \mu k}} \label {15.6.7} \]
and then changing the variable from \(q\) to \(x\) where
\[x = \dfrac {q}{\beta} \label{scale}\]
so that
\[ \dfrac {d^2}{dq^2} = \dfrac {1}{\beta^2} \dfrac {d^2}{dx^2} \label {15.6.8}\]
After substituting Equations \(\ref{15.6.7}\) and \(\ref{15.6.8}\) into Equation \(\ref{15.6.6}\) the differential equation for the harmonic oscillator becomes
\[ \dfrac {d^2 \psi _v (x)}{dx^2} + \left ( \dfrac {2 \mu \beta ^2 E_v}{\hbar ^2} - x^2 \right ) \psi _v (x) = 0 \label {15.6.9}\]
Exercise \(\PageIndex{1}\)
Make the substitutions given in Equations \(\ref{15.6.7}\) and \(\ref{15.6.8}\) into Equation \(\ref{15.6.6}\) to get Equation \(\ref{15.6.9}\).
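For readers who want a machine check of this substitution, a minimal sympy sketch (the symbol names are my own):

```python
# A machine check of Exercise 1 with sympy; symbol names are my own.
import sympy as sp

x, beta, mu, k, E, hbar = sp.symbols('x beta mu k E hbar', positive=True)
phi = sp.Function('phi')

# Equation (15.6.6) evaluated at q = beta*x, using Equation (15.6.8) for the
# second derivative, then multiplied through by beta**2:
expr = phi(x).diff(x, 2) + beta**2 * (2*mu/hbar**2) * (E - k*(beta*x)**2/2) * phi(x)

# Impose beta**2 = hbar/sqrt(mu*k), Equation (15.6.7)
expr = expr.subs(beta, sp.sqrt(hbar / sp.sqrt(mu*k)))
print(sp.simplify(sp.expand(expr)))
# The coefficient of x**2 collapses to exactly -1, reproducing Equation (15.6.9).
```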
A common strategy for solving differential equations, which is employed here, is to find a solution that is valid for large values of the variable and then develop the complete solution as a product of this asymptotic solution and a power series. Since the potential energy approaches infinity as \(x\) and the coordinate \(q\) approach infinity, the wavefunction must approach zero. The function that has this property and satisfies the differential equation for large values of \(x\) is the exponential function
\[ \psi (x) \rightarrow \exp \left ( \dfrac {-x^2}{2} \right ) \quad \text{as} \quad x \rightarrow \pm \infty \label {15.6.10}\]
The general expression for a power series is
\[ \sum _{n=0}^\infty c_n x^n \label {15.6.11}\]
which can be truncated after the first term, after the second term, after the third term, etc. to produce a set of polynomials. There is one polynomial for each value of \(v\), where \(v\) can be any non-negative integer, including zero.
\[ \sum _{n=0}^v c_n x^n \label {15.6.12}\]
Each of the truncations of the power series in Equation \(\ref{15.6.12}\) can be multiplied by the exponential function in Equation \(\ref{15.6.10}\) to create a family of valid solutions to the differential equation.
\[\psi _v (x) = \left ( \sum _{n=0}^v c_n x^n \right ) \exp \left ( \dfrac {-x^2}{2} \right ) \label {15.6.13}\]
Exercise \(\PageIndex{2}\)
Write the first four polynomials, \(v=0\), \(v=1\), \(v=2\), and \(v=3\), for Equation \(\ref{15.6.12}\) and use suitable software to prepare plots of these polynomials. Identify the curves in the plots.
While polynomials in general approach \(∞\) (or \(-∞\)) as \(x\) approaches \(∞\), the decreasing exponential term overpowers the polynomial term so that the overall wavefunction exhibits the desired approach to zero at large values of \(x\) or \(-x\). The exact forms of polynomials that solve Equation \(\ref{15.6.9}\) are the Hermite polynomials, which are standard mathematical functions known from the work of Charles Hermite. The first eight Hermite polynomials, \(H_v(x)\), are given below.
• \(H_0 = 1\)
• \(H_1 = 2x\)
• \(H_2 = -2 + 4x^2\)
• \(H_3 = -12x + 8x^3\)
• \(H_4 = 12 - 48x^2 +16x^4\)
• \(H_5 = 120x - 160x^3 + 32x^5\)
• \(H_6 = -120 + 720x^2 - 480 x^4 + 64x^6\)
• \(H_7 = -1680x + 3360 x^3 - 1344 x^5 + 128 x^7\)
The Hermite polynomials listed above can be produced by using the following generating function
\[ H_v (x) = (-1)^v \exp (x^2) \dfrac {d^v}{dx^v} \exp (-x^2) \label {5.6.14}\]
Generating functions provide a more economical way to obtain sets of functions compared to purchasing books of tables, and they are often more convenient to use in mathematical derivations.
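As an illustration, a short sketch (assuming sympy is available) that implements the generating function of Equation \(\ref{5.6.14}\) and reproduces the list above, including the \(H_8\) asked for in the next exercise:

```python
# A short sketch implementing the generating function (5.6.14) with sympy;
# it reproduces the list above and the H_8 part of Exercise 3.
import sympy as sp

x = sp.symbols('x')

def hermite(v):
    """H_v(x) = (-1)^v exp(x^2) d^v/dx^v exp(-x^2)."""
    return sp.expand((-1)**v * sp.exp(x**2) * sp.diff(sp.exp(-x**2), x, v))

for v in range(9):
    print(f"H_{v} =", hermite(v))
# H_3 = 8*x**3 - 12*x, matching the list above;
# H_8 = 256*x**8 - 3584*x**6 + 13440*x**4 - 13440*x**2 + 1680
```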
Exercise \(\PageIndex{3}\)
Use the generating formula, Equation \(\ref{5.6.14}\), to verify \(H_3\) in the list above. Use the generating formula to produce \(H_8\).
Exercise \(\PageIndex{4}\)
Determine the units of \(β\) and the units of \(x\) in the Hermite polynomials.
Because of the association of the wavefunction with a probability density, it is necessary for the wavefunction to include a normalization constant, \(N_v\).
\[N_v = \dfrac {1}{(2^v v! \sqrt {\pi} )^{1/2}} \label {5.6.15}\]
The final form of the harmonic oscillator wavefunctions is thus
\[ \psi _v (x) = N_v H_v (x) e^{-x^2/2} \label {5.6.16}\]
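A quick numerical spot-check of this form (a sketch assuming numpy and scipy are available; numpy's `hermval` evaluates the physicists' Hermite polynomials used in this section):

```python
# Verify that the wavefunctions of Equation (5.6.16) are normalized.
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi, sqrt

def psi(v, x):
    Nv = 1.0 / sqrt(2**v * factorial(v) * sqrt(pi))   # Equation (5.6.15)
    c = np.zeros(v + 1); c[v] = 1.0                   # coefficients selecting H_v
    return Nv * hermval(x, c) * np.exp(-x**2 / 2)

for v in (0, 1, 4):
    norm, _ = quad(lambda x: psi(v, x)**2, -np.inf, np.inf)
    print(v, round(norm, 10))   # each integral evaluates to 1
```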
Alternative and More Common Formulation of Harmonic Oscillator Wavefunctions
The harmonic oscillator wavefunctions are often written in terms of \(Q\), the unscaled displacement coordinate (called \(q\) in Equation \(\ref{scale}\)), and a different constant \(\alpha\):

\[ \alpha = \dfrac{1}{\beta^2} = \sqrt{\dfrac{k \mu}{\hbar ^2}} \]
so Equation \(\ref{5.6.16}\) becomes
\[ \psi _v (Q) = N_v'' H_v (\sqrt{\alpha} Q) e^{-\alpha Q^2/ 2} \]
with a slightly different normalization constant
\[ N_v'' = \sqrt {\dfrac {1}{2^v v!}} \left(\dfrac{\alpha}{\pi}\right)^{1/4} \]
Exercise \(\PageIndex{5}\)
Compute the normalization factor for \(\psi_v(x)\) where \(v = 0\) and \(v = 4\). What is the purpose of \(N_v\)?
The energy eigenvalues for a quantum mechanical oscillator also are obtained by solving the Schrödinger equation. The energies are restricted to discrete values
\[E_v = \left ( v + \dfrac {1}{2} \right ) \hbar \omega \label {5.6.17}\]
with \(v = 0, 1, 2, 3, \cdots \).
The energies depend both on the quantum number, \(v\), and the oscillator frequency
\[ \omega = \sqrt {\dfrac {k}{\mu}}\]
which in turn depends on the spring constant \(k\) and the reduced mass of the vibration \(\mu\).
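As a quick illustration (plain Python; the loop just tabulates Equation \(\ref{5.6.17}\)), readers may prefer to attempt the exercise below before looking at the output:

```python
# Tabulate Equation (5.6.17) in units of hbar*omega for the first ten levels.
for v in range(10):
    print(f"v = {v}:  E_v = {v + 0.5} (in units of hbar*omega)")
# Successive levels are spaced by exactly one unit of hbar*omega, so the
# v = 9 -> 10 transition of a strictly harmonic oscillator appears at the
# same frequency as the v = 0 -> 1 transition.
```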
Exercise \(\PageIndex{6}\)
Determine the energy for the first ten harmonic oscillator energy levels in terms of \(\hbar \omega\). Sketch an energy level diagram of these energies.
1. What insights do you gain from Equation \(\ref{5.6.17}\), your calculations, and your diagram?
2. Is it possible to have a molecule that is not vibrating?
3. In terms of \(\hbar \omega\), what is the energy of the photon required to cause a transition from one vibrational state to the next higher one?
4. If a transition from energy level \(v = 9\) to \(v = 10\) were observed in a spectrum, where would that spectral line appear relative to the one for the transition from level \(v = 0\) to \(v = 1\)?
5. If a vibrational transition is observed at 3000 cm\(^{-1}\) in an infrared spectrum, what is the value of \(\hbar \omega\) for the normal mode?
6. Identify all the possible meanings of \(ΔE = hν\) and the definition of the frequency, \(ν\), in each case.
The normalized wavefunctions for the first four states of the harmonic oscillator are shown in Figure \(\PageIndex{1}\), and the corresponding probability densities are shown in Figure \(\PageIndex{2}\). You should remember the mathematical and graphical forms of the first few harmonic oscillator wavefunctions, and the correlation of \(v\) with \(E_v\). The number of nodes in the wavefunction will help you to remember these characteristics. Also note that the functions fall off like a Gaussian at large displacements and that the symmetry alternates: for \(v\) equal to an even number, \(\psi_v\) is gerade; for \(v\) equal to an odd number, \(\psi_v\) is ungerade.
Figure \(\PageIndex{1}\): The harmonic oscillator wavefunctions describing the four lowest energy states.
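A plotting sketch (assuming numpy and matplotlib) that reproduces the content of Figure \(\PageIndex{1}\), with each wavefunction offset vertically by its energy:

```python
# The four lowest wavefunctions, each offset by E_v in units of hbar*omega.
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

x = np.linspace(-5, 5, 400)
for v in range(4):
    Nv = 1.0 / sqrt(2**v * factorial(v) * sqrt(pi))
    c = np.zeros(v + 1); c[v] = 1.0
    psi = Nv * hermval(x, c) * np.exp(-x**2 / 2)
    plt.plot(x, psi + v + 0.5, label=f"v = {v}")   # offset by E_v / (hbar*omega)
plt.plot(x, x**2 / 2, "k--", lw=0.8)               # harmonic potential, for reference
plt.ylim(0, 5); plt.xlabel("x"); plt.legend(); plt.show()
```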
Exercise \(\PageIndex{7}\)
Write a few sentences describing and comparing the plots in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{2}\): The probability densities for the four lowest energy states of the harmonic oscillator.
Exercise \(\PageIndex{8}\)
Explain how Figure \(\PageIndex{2}\) is related to Figure \(\PageIndex{1}\). Explain the physical significance of the plots in Figure \(\PageIndex{2}\) in terms of the magnitude of the normal coordinate \(Q\). Couch your discussion in terms of the \(\ce{HCl}\) molecule. How would you describe the location of the atoms in each of the states? How does the oscillator position correspond to the energy of a particular level?
Figure \(\PageIndex{2}\) is simply the wavefunction in Figure \(\PageIndex{1}\) squared. The normal coordinate \(Q\) is a linear combination of the atomic Cartesian coordinates; for \(\ce{HCl}\) it measures the displacement of the bond from its equilibrium length. As the energy of a level increases, the accessible range of \(Q\) widens and the number of nodes grows: in the lowest (\(v = 0\)) state the probability density is concentrated roughly between \(-2\) and \(2\) (on a plotted range of \(-4\) to \(4\)), in the next state between about \(-2.5\) and \(2.5\), then \(-3\) to \(3\), and \(-4\) to \(4\) for the fourth level. Higher-energy states therefore correspond to larger excursions of the atoms about their equilibrium separation.
Exercise \(\PageIndex{9}\)
Plot the probability density for energy level 10 of the harmonic oscillator. How many nodes are present? Plot the probability density for energy level 20. Compare the plot for level 20 with that of level 10 and level 1. Compare these quantum mechanical probability distributions to those expected for a classical oscillator. What conclusion can you draw about the probability of the location of the oscillator and the length of a chemical bond in a vibrating molecule? Extend your analysis to include a very high level, like level 50.
In completing Exercise \(\PageIndex{9}\), you should have noticed that as the quantum number increases and becomes very large, the probability distribution approaches that of a classical oscillator. This observation is very general. It was first noticed by Bohr, and is called the Bohr Correspondence Principle. This principle states that classical behavior is approached in the limit of large values for a quantum number. A classical oscillator is most likely to be found in the region of space where its velocity is the smallest. This situation is similar to walking through one room and running through another. In which room do you spend more time? Where is it more likely that you will be found? |
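A hedged sketch of the comparison suggested in Exercise \(\PageIndex{9}\): it plots \(|\psi_v|^2\) for \(v = 50\) against the classical position distribution \(P(x) = 1/(\pi \sqrt{A^2 - x^2})\), where \(A = \sqrt{2v+1}\) is the dimensionless classical turning point (the grid and the choice of level are mine):

```python
# Quantum vs. classical probability densities for a high level (v = 50),
# in the dimensionless units of this section.
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

v = 50
A = sqrt(2 * v + 1)                 # classical turning point for E_v = v + 1/2
x = np.linspace(-A - 2, A + 2, 4000)

Nv = 1.0 / sqrt(2**v * factorial(v) * sqrt(pi))
c = np.zeros(v + 1); c[v] = 1.0
prob_qm = (Nv * hermval(x, c) * np.exp(-x**2 / 2))**2

xc = x[np.abs(x) < A]               # classical density exists only inside the turning points
prob_cl = 1.0 / (pi * np.sqrt(A**2 - xc**2))

plt.plot(x, prob_qm, label=f"quantum, v = {v}")
plt.plot(xc, prob_cl, "--", label="classical")
plt.xlabel("x"); plt.ylabel("probability density"); plt.legend(); plt.show()
# The rapid quantum oscillations average to the classical curve, which is
# largest at the turning points, where the oscillator moves most slowly.
```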
Course Description
MATH 511 Real Analysis (Gerçel Analiz) (3+0+0)3
Sets, countable sets, topological concepts of the set R, continuous functions, metric spaces, Lebesgue integration, Lp spaces, Riesz-Fischer theorem and Hilbert space. Normed linear spaces, Minkowski inequality, completeness theorem, L∞ space, Egoroff's theorem, Radon-Nikodym theorem, Carathéodory and Hahn-Banach theorems. Lebesgue and Lebesgue-Stieltjes measure, Riesz lemma, Fubini and Tonelli theorems.
MATH 512 Complex Analysis (Karmaşık Analiz) (3+0+0)3
The complex number system, metric spaces and topology of C, elementary properties and examples of analytic functions, complex integration, maximum modulus theorem, Cauchy's integral formula, properties of path integrals, conformal mapping. Schwarz-Christoffel transformation.
MATH 513 Functional Analysis (Fonksiyonel Analiz ) (3+0+0)3
Linear spaces, norms, completeness. Linear mappings, continuity. Hahn-Banach theorem, normed linear spaces, Hilbert spaces. Dual spaces. Bounded linear functionals, weak convergence, uniform boundedness, weak and weak* topologies, Stone-Weierstrass theorem. Bounded linear operators: boundedness and continuity, weak and strong convergence. Open-mapping theorem. Closed graph theorem.
MATH 516 Probability (Olasılık) (3+0+0)3
Monotone functions, distribution functions, absolutely continuous and singular distributions. Measure theory, classes of sets, probability measures and their distribution functions. Random variables, expected value, variance, properties of mathematical expectation. Independence, type of convergence, almost sure convergence. Borel-Cantelli lemma, vague convergence, uniform integrability, convergence of moments, law of large numbers, random series, weak law of large numbers, strong law of large numbers. Characteristic function, convolution, uniqueness and inversion, convergence theorems, central limit theorem, basic properties of conditional expectation, conditional independence and Markov property.
MATH 521 Ordinary Differential Equations (Adi Diferansiyel Denklemler) (3+0+0)3
Ordinary differential equations, linear systems, nonlinear systems, existence and uniqueness theorems, continuous dependence on parameters, stability. Boundary value problems, periodic solutions. Operators on Banach spaces, contraction of mappings.
MATH 522 Partial Differential Equations I (Kısmi Diferansiyel Denklemler I) (3+0+0)3
Classification of first order partial differential equations. Linear and nonlinear equations. Geometrical meaning. Monge cones and Monge equation. Characteristics. Cauchy problem. Caustics. Envelopes. Hamilton-Jacobi theory. Elements of symplectic geometry. Some equations of the second order.
MATH 523 Partial Differential Equations II (Kısmi Diferansiyel Denklemler II) (3+0+0)3
Types of second order partial differential equations. Reduction to canonical form: hyperbolic, parabolic and elliptic cases. Equations of the hyperbolic type. d'Alembert formula. Well-posedness. The Riemann method and Riemann function. Three dimensional wave equation. The Poisson formula. Well-posedness of the classical problems. Non-homogeneous equations. Cylindrical waves. Point source. The Fourier method. Elliptic equations and classical problems. Harmonic functions and properties. Mean value theorem for a ball. Kelvin theorem. Uniqueness theorems for Dirichlet and Neumann problems. Dirichlet problem for a ball. Green function for Dirichlet and Neumann problem. Spherical functions and their applications.
MATH 527 Numerical Analysis (Sayısal Analiz) (3+0+0)3
Polynomial approximation, Lagrange interpolation, least squares polynomial approximation, spline approximation and interpolation, the fast Fourier transform. Numerical quadrature, Richardson extrapolation, Romberg integration, Gaussian quadrature, adaptive quadrature, Monte Carlo methods for higher dimensional integrals. Direct methods of numerical linear algebra; triangular systems, Gaussian elimination and LU decomposition, pivoting, backward error analysis. Numerical solution of nonlinear systems and optimization; one-point iteration, Newton's method, unconstrained minimization, conjugate gradients.
MATH 528 Numerical Solution of Partial Differential Equations (Kısmi Diferansiyel Denklemlerin Sayısal Çözümü) (3+0+0)3
Finite difference method for parabolic, elliptic and hyperbolic partial differential equations. Constructing the finite difference scheme for the model problems. Convergence, consistency and stability analysis of the numerical scheme. An introduction to the spectral methods: Fourier collocation and Fourier Galerkin methods.
MATH 541 Algebra (Cebir) (3+0+0)3
Groups; isomorphism theorems, group action, simplicity of alternating groups, solvability of p-groups, Sylow theorems, Jordan-Hölder theorem, nilpotent and solvable groups. Rings; ring homomorphisms, Euclidean domains, PIDs, unique factorization, Gauss lemma, irreducibility criteria.
MATH 551 Nonlinear Continuum Mechanics I (Doğrusal Olmayan Sürekli Ortamlar Mekaniği I) (3+0+0)3
Mathematical foundations of continuum mechanics, vectors and tensors, kinematics of deformation, conservation laws.
MATH 552 Nonlinear Continuum Mechanics II (Doğrusal Olmayan Sürekli Ortamlar Mekaniği II) (3+0+0)3
Thermodynamics, constitutive equations of elastic, viscous, and viscoelastic materials, electromagnetic solids.
MATH 554 Perturbation Methods (Tedirgeme Yöntemleri) (3+0+0)3
Matched asymptotic expansions, multiple scales, WKB and homogenization. Applications in ODEs, PDEs, difference equations, and integral equations: boundary or shock layers, nonlinear wave propagation, bifurcation and stability, and resonance.
MATH 561 Topology (Topoloji) (3+0+0)3
Topological structures, open and closed sets, neighborhoods, product, order and subspace topologies, metric topology, accumulation and limit points, convergence, continuous mappings, connectedness and paths, compactness and local compactness, embeddings, separation axioms, normal spaces. Urysohn, Tychonoff and Stone-Čech theorems.
MATH 564 Differential Geometry (Diferansiyel Geometri) (3+0+0)3
Differentiable manifolds, tangent and cotangent spaces, vector fields, Lie bracket, diffeomorphism, the inverse function theorem, submanifolds, hypersurfaces, standard connection of Euclidean spaces, Weingarten and Gauss maps, tensors and differential forms, Lie derivative, Riemannian connection, Riemannian manifolds, Riemannian curvature tensor.
MATH 571 Mathematical Methods in Physics and Engineering (Fizik ve Mühendislikte Matematiksel Yöntemler) (3+0+0)3
Linear operators on finite dimensional vector spaces, canonical forms and functions of matrices, multilinear functions on vector spaces, tensor analysis in R3 and its applications to theory of elasticity, calculus of variations, quasi-linear partial differential equations, separation of variables, well-posed problems. Analytic functions, contour integration, conformal mapping, Banach and Hilbert spaces, expansions in orthogonal functions, classical orthogonal polynomials, integral transforms, applications to partial differential equations, Green's functions, distributions. (For students with a Bachelor of Science degree other than mathematics).
MATH 581-589 Special Topics in Mathematics I-IX (Matematikte Özel Konular I-IX) (3+0+0)3
Study of special topics chosen among the recent technological or theoretical developments in mathematics.
MATH 611 Harmonic Analysis I (Harmonik Analiz I) (3+0+0)3
L^p and weak-L^p spaces, convolution and approximate identities, interpolation, maximal functions. Schwartz class and the Fourier transform, classes of tempered distributions, convolution operators on L^p spaces and multipliers. Fourier coefficients, decay property of Fourier coefficients, pointwise convergence of Fourier series. The conjugate function and convergence in norm, singular integrals, the Hilbert transform and the Riesz transform, homogeneous singular integrals and the method of rotations, the Calderon–Zygmund decomposition and singular integrals, sufficient conditions for L^p boundedness.
MATH 612 Harmonic Analysis II (Harmonik Analiz II) (3+0+0)3
Littlewood-Paley theory, multiplier theorems, applications of Littlewood-Paley theory, the Haar system, conditional expectation, martingales, Riesz potentials, Bessel potentials and fractional integrals, Sobolev spaces, Lipschitz spaces, Hardy spaces, singular integral on function spaces, functions of bounded mean oscillation (BMO), duality between H^1 and BMO, nontangential maximal functions, Carleson measures and the sharp maximal function.
MATH 613 Conformal Mappings (Açı Koruyan Dönüşümler) (3+0+0)3
Harmonic functions, Green's formula, analytic functions, conformal mappings of simply connected domains, Riemann mapping theorem, elliptic functions, conformal mappings of multiple-connected domains, geometrical and analytical approach to conformal mappings. Quasi-conformal mappings and Teichmüller spaces.
MATH 614 Advanced Functional Analysis ( İleri Fonksiyonel Analiz) (3+0+0)3
Banach algebras, elementary spectral theory in Banach spaces. Commutative Banach algebras and Gelfand theory. Integral operators. Compact operators and spectral theory. Examples of compact operators, positive compact operators. Compact symmetric operators in Hilbert spaces. Spectral theory of symmetric, normal, unitary and self-adjoint operators.
MATH 615 Functional Analysis and Applications (Fonksiyonel Analiz ve Uygulamaları) (3+0+0)3
Normed spaces. Linear operators, the contraction mapping. Fixed point theorems, spectral theory. Sturm-Liouville systems. Variational methods, applications to differential equations. Linear and nonlinear elliptic partial differential equations.
MATH 617 Theory of Stochastic Processes I (Stokastik Süreçler Kuramı I) (3+0+0)3
Brownian motion, its definition and basic properties, martingale, Doob's inequality, stopping times, the optional stopping theorem, convergence and regularity, some martingale applications, Markov properties of Brownian motion, Poisson process, path properties of Brownian motion, continuous semimartingales, square integrable martingales, quadratic variation, Doob-Meyer decomposition, stochastic integrals and extensions, Ito formula, Levy theorem, time change of martingales, martingale representation, Burkholder-Davis-Gundy inequalities and Stratonovich integrals.
MATH 618 Theory of Stochastic Processes II (Stokastik Süreçler Kuramı II) (3+0+0)3
Ito's formula, the Girsanov theorem, Markov processes, transition properties of Markov processes, examples of Markov processes, canonical process and shift operator, extension of filtration, strong Markov property, transience and recurrence. Additive functions, continuity, harmonic functions, theory of general processes, predictable and optional processes, hitting times, processes with jumps, martingale decomposition, stochastic integrals. Ito's formula for jump processes, reduction theorem, semimartingales, the Girsanov theorem for jump processes. Stochastic differential equations, pathwise solutions and one-dimensional stochastic differential equations.
MATH 619 Advanced Differential Geometry (İleri Diferansiyel Geometri) (3+0+0)3
Tensor fields, differential forms and exterior derivative. Connections. Riemannian metric, Riemannian manifold, covariant derivative, parallel translation, geodesics, exponential mapping and normal coordinates. Curvature tensors, sectional curvature, Ricci curvature and scalar curvature. Space forms. Conformal changes of Riemannian metric. Riemannian submanifolds, induced connection, second fundamental form. Equations of Gauss, Codazzi and Ricci. Cartan structure equations.
MATH 653 Nonlinear Elasticity (Doğrusal Olmayan Esneklik) (3+0+0)3
Review of governing equations, linearization and stability, constitutive inequalities, large elastic deformations, exact solution of special problems, controllable deformations of incompressible materials, initial stress problems, elastic stability, nonlinear string and rod theories, membrane theory, fiber-reinforced materials, second-order elasticity, phase transformations and crystal defects.
MATH 655 Direct and Inverse Scattering of Waves (Dalgaların Doğrudan ve Ters Saçılması) (3+0+0)3
Vector and scalar waves. Electromagnetic waves. Wave equation. Helmholtz equation. Method of stationary phase. Geometrical optics approximation. Elements of diffraction. Huygens-Fresnel principle. Riesz-Fredholm theory for scattering. Potential theory. Weakly singular integral operators. Boundary value problems for Helmholtz equation. Boundary value problems for Maxwell equations and vector Helmholtz equation.
MATH 656 Nonlinear Waves (Doğrusal Olmayan Dalgalar) (3+0+0)3
Euler's equations. Dispersion, dissipation and nonlinearity. Korteweg-de Vries equation: derivation, solitary wave solutions and conserved quantities. Nonlinear Schrödinger equation (derivation, solitary wave solutions and conserved quantities).
MATH 680 Guided Research in Mathematics (Matematikte Yönlendirilmiş Araştırmalar) (3+0+0)3
Guided research in a selected area of mathematics. Guidance of a doctoral student by a faculty member towards the preparation and presentation of a research proposal.
MATH 681-689 Special Studies in Mathematics I-IX (Matematikte Özel Çalışmalar I-IX) (3+0+0)3
Study of current research topics in mathematics by Ph.D. students under the guidance of a faculty member and presentation of the chosen topic.
MATH 690 Ph.D. Thesis (Doktora Tezi) Non-Credit
Preparation of a Ph.D. thesis under the guidance of an academic advisor. |
I am currently learning about the Dirac formalism in quantum mechanics, but don't quite understand how we derive the expression for the quantum Hamiltonian, given the expression for the energy in classical mechanics.
The specific example that came up in class was that of the harmonic oscillator, for which the classical energy is $$E = \frac{p^2}{2m} + \frac{1}{2}m\omega^2x^2$$
My teacher then concluded that
$$\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2\hat{x}^2$$
Why is that? The only way I see to show this is by looking at a stationary wave function $\psi (x)$ and using the associated Schrödinger equation. Writing $V(x) = \frac{1}{2}m\omega^2x^2$, we get
$$E\psi(x) = \frac{-\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V(x)\psi(x) = \hat{H}\psi(x)$$
By identifying known expressions for $\hat{p}$ and $\hat{x}$, we can find the desired expression for the Hamiltonian. However, I do not feel like this method is very satisfying, as it requires to return to wave functions, and doesn't use the Schrödinger equation in the Dirac formalism.
I get the feeling that teachers will eagerly replace $x$ by $\hat{x}$ and $p$ by $\hat{p}$ when going from classical mechanics to quantum mechanics.
Is there a more general result? Can it be said that if in classical mechanics $E = f(x_1, \dots, x_n)$ where $x_1, \dots, x_n$ are observables, then $\hat{H} = f(\hat{x_1},\dots,\hat{x_n})$? I cannot see why that would be true, so is it only a coincidence that it is true in the case of the harmonic oscillator?
To summarize, is there a rule for when such replacements are valid, and if so, for which observables and how can it be proven?
Your teacher is being a bit sloppy in saying that you get the Hamiltonian for quantum mechanics from classical energy. You get the Hamiltonian for quantum mechanics by "quantizing" the classical Hamiltonian. OK, so what is this "quantizing"?
As you point out, Dirac came up with a fairly general scheme for constructing quantum theories which correspond to a given classical theory in (one of) its classical limit(s). Now, keep in mind that we're guessing a quantum theory that we hope will reduce to the classical theory at hand in some classical limit. Given that the quantum theory is the more basic theory, we cannot derive it generically from its classical limit. Anyway, the idea is that a quantum system that respects the same symmetries as the classical system would be a good guess for the quantum version of the said classical system. In Hamiltonian mechanics, the Poisson brackets capture the symmetries of the system, whereas in quantum mechanics, commutators do the same job. Thus, it makes sense to have the commutators of quantum operators obey the same relations as the Poisson brackets of the corresponding classical observables. I'm not aware of whether Dirac explicitly used the symmetry argument, but he did realize that Poisson brackets are the central objects of the Hamiltonian formalism and thus set out to find their quantum analog, which he found in commutators. See the chapter titled "Quantum Conditions" in his excellent book Principles of Quantum Mechanics. Once we have done this for the canonical coordinates and momenta, since all observables are functions of them, we can ensure the desired commutation relations for their quantum analogs by putting hats on the canonical coordinates and momenta in their classical expressions, barring unforeseen ordering ambiguities.
This caricature description of replacing every classical canonical variable (for example, $x$ and $p$) with a hat to get to the corresponding quantum operator is not idiot-proof. There are many subtleties involved. For example, the ordering ambiguities I mentioned. Classically, you have an observable $xp$. If you put on hats, you get an operator $\hat{x}\hat{p}$ which cannot be an observable because it's not Hermitian (as you can check). There is an issue with it to start with. Classically, $xp$ is the same as $px$, so which one do you choose to put on the hats? In quantum mechanics, since $\hat{x}$ and $\hat{p}$ don't commute, the two would give very different operators (and none of them will be Hermitian anyway, so none of them can be observables). We have adopted ordering procedures to deal with such issues, for example, if you say your classical observable is actually $\frac{1}{2}(xp+px)$ which is the same as $xp$ in classical mechanics, you get a Hermitian operator when you put on hats. See, for example, Weyl ordering. However, there can be multiple such ordering schemes. This comes back to the point that "quantization is not a functor" as the saying goes, the classical limit of a quantum theory doesn't uniquely determine the full quantum theory. Ultimately, we have to guess as to which quantum theory we think would reduce to the classical theory we're interested in in one of its limits.
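To make the ordering issue concrete, here is a minimal numerical illustration (my own sketch, not from Dirac's book): it builds $\hat{x}$ and $\hat{p}$ in a truncated harmonic-oscillator (Fock) basis with $\hbar = m = \omega = 1$, checks the canonical commutator on low-lying states, and shows that $\hat{x}\hat{p}$ alone is not Hermitian while the symmetrized (Weyl-ordered) product is:

```python
# Ordering in a truncated Fock basis, with hbar = m = omega = 1.
import numpy as np

N = 40                                        # basis truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
x = (a + a.T) / np.sqrt(2)                    # position operator
p = 1j * (a.T - a) / np.sqrt(2)               # momentum operator

comm = x @ p - p @ x                          # canonical commutator [x, p]
print(np.round(comm[:4, :4], 10))             # = i * identity on low-lying states

xp = x @ p                                    # naive "put hats on x p"
weyl = 0.5 * (x @ p + p @ x)                  # symmetrized (Weyl) ordering
print(np.allclose(xp, xp.conj().T))           # False: hat-x hat-p is not Hermitian
print(np.allclose(weyl, weyl.conj().T))       # True: the symmetrized product is
```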
Dvij D. C. is correct. In a nutshell, the relationship between classical mechanics and quantum mechanics is that the former gives a lot of insight into the latter, but quantum cannot be derived from classical. Rather, classical mechanics gives hints as to what to try, and it gives insight into what quantum formulae are saying and what kind of behaviours will result in certain limits.
So every time we say "here is something classical" and "here is something quantum" the move from classical to quantum is never a derivation. It might be clearer to say "here is something quantum" first, and then add "look, it has a similar overall structure to this classical equation, so the classical equation helps us on our journey into understanding the quantum one, and it can act as a mnemonic too."
Your suspicions, then, were largely right, but it is not quite right to call the success of $x \rightarrow \hat{x},\; p \rightarrow \hat{p}$ for a harmonic oscillator a mere coincidence. There is a bit more to it than that.
Quantum Physics
0912 Submissions
[4] viXra:0912.0029 [pdf] submitted on 11 Dec 2009
λ=h/p is Universal?
Authors: Z.Y. Wang
Comments: 6 pages.
The de Broglie relations for photons in an unbounded space are E=hν and λ=h/p. According to electrodynamics, however, we prove that the ratio E/p in a waveguide is greater than the product νλ, which implies that E=hν and p=h/λ cannot both be tenable at the same time. The Casimir effect is then applied to confirm E=hν and p<h/λ. This is helpful for studying quantum tunnelling and superluminality [1]~[2], cavity QED, the origin of mass, etc. A microwave experiment to test this is also presented.
Category: Quantum Physics
[3] viXra:0912.0006 [pdf] replaced on 8 Aug 2010
Getting Path Integrals Physically and Technically Right
Authors: Steven Kenneth Kauffmann
Comments: 18 pages, Also archived as arXiv:0910.2490 [physics.gen-ph].
Feynman's Lagrangian path integral was an outgrowth of Dirac's vague surmise that Lagrangians have a role in quantum mechanics. Lagrangians implicitly incorporate Hamilton's first equation of motion, so their use contravenes the uncertainty principle, but they are relevant to semiclassical approximations and relatedly to the ubiquitous case that the Hamiltonian is quadratic in the canonical momenta, which accounts for the Lagrangian path integral's "success". Feynman also invented the Hamiltonian phase-space path integral, which is fully compatible with the uncertainty principle. We recast this as an ordinary functional integral by changing direct integration over subpaths constrained to all have the same two endpoints into an equivalent integration over those subpaths' unconstrained second derivatives. Function expansion with generalized Legendre polynomials of time then enables the functional integral to be unambiguously evaluated through first order in the elapsed time, yielding the Schrödinger equation with a unique quantization of the classical Hamiltonian. Widespread disbelief in that uniqueness stemmed from the mistaken notion that no subpath can have its two endpoints arbitrarily far separated when its nonzero elapsed time is made arbitrarily short. We also obtain the quantum amplitude for any specified configuration or momentum path, which turns out to be an ordinary functional integral over, respectively, all momentum or all configuration paths. The first of these results is directly compared with Feynman's mistaken Lagrangian-action hypothesis for such a configuration path amplitude, with special heed to the case that the Hamiltonian is quadratic in the canonical momenta.
Category: Quantum Physics
[2] viXra:0912.0005 [pdf] replaced on 15 Mar 2011
Unambiguous Quantization from the Maximum Classical Correspondence that is Self-Consistent: the Slightly Stronger Canonical Commutation Rule Dirac Missed
Authors: Steven Kenneth Kauffmann
Comments: 11 pages, Final publication in Foundations of Physics; available online at http://www.springerlink.com/content/k827666834140322/
Dirac's identification of the quantum analog of the Poisson bracket with the commutator is reviewed, as is the threat of self-inconsistent overdetermination of the quantization of classical dynamical variables which drove him to restrict the assumption of correspondence between quantum and classical Poisson brackets to embrace only the Cartesian components of the phase space vector. Dirac's canonical commutation rule fails to determine the order of noncommuting factors within quantized classical dynamical variables, but does imply the quantum/classical correspondence of Poisson brackets between any linear function of phase space and the sum of an arbitrary function of only configuration space with one of only momentum space. Since every linear function of phase space is itself such a sum, it is worth checking whether the assumption of quantum/classical correspondence of Poisson brackets for all such sums is still self-consistent. Not only is that so, but this slightly stronger canonical commutation rule also unambiguously determines the order of noncommuting factors within quantized dynamical variables in accord with the 1925 Born-Jordan quantization surmise, thus replicating the results of the Hamiltonian path integral, a fact first realized by E. H. Kerner. Born-Jordan quantization validates the generalized Ehrenfest theorem, but has no inverse, which disallows the disturbing features of the poorly physically motivated invertible Weyl quantization, i.e., its unique deterministic classical "shadow world" which can manifest negative densities in phase space.
Category: Quantum Physics
[1] viXra:0912.0004 [pdf] submitted on 2 Dec 2009
Sound Relativistic Quantum Mechanics for a Strictly Solitary Nonzero-Mass Particle, and Its Quantum-Field Reverberations
Authors: Steven Kenneth Kauffmann
Comments: 9 pages, Also archived as arXiv:0909.4025 [physics.gen-ph].
It is generally acknowledged that neither the Klein-Gordon equation nor the Dirac Hamiltonian can produce sound solitary-particle relativistic quantum mechanics due to the ill effects of their negative-energy solutions; instead their field-quantized wavefunctions are reinterpreted as dealing with particle and antiparticle simultaneously - despite the clear physical distinguishability of antiparticle from particle and the empirically known slight breaking of the underlying CP invariance. The natural square-root Hamiltonian of the free relativistic solitary particle is iterated to obtain the Klein-Gordon equation and linearized to obtain the Dirac Hamiltonian, steps that have calculational but not physical motivation, and which generate the above-mentioned problematic negative-energy solutions as extraneous artifacts. Since the natural square-root Hamiltonian for the free relativistic solitary particle contrariwise produces physically unexceptionable quantum mechanics, this article focuses on extending that Hamiltonian to describe a solitary particle (of either spin 0 or spin ½) in relativistic interaction with an external electromagnetic field. That is achieved by use of Lorentz-covariant solitary-particle four-momentum techniques together with the assumption that well-known nonrelativistic dynamics applies in the particle's rest frame. Lorentz-invariant solitary-particle actions, whose formal Hamiltonization is an equivalent alternative approach, are as well explicitly displayed. It is proposed that two separate solitary-particle wavefunctions, one for a particle and the other for its antiparticle, be independently quantized in lieu of "reinterpreting" negative-energy solutions - which indeed don't even afflict proper solitary particles.
Category: Quantum Physics |
Saturday, September 03, 2005
Why Singaporeans are so Lucky
This is the shit. Seen in a Singabloodypore comments box:
"You guys have to be fools not to realise how LUCKY Singaporeans are
read about the experience of this Singaporean who went on tour to USA and come back fully understanding how LUCKY a country Singapore is.
Singaporeans are lucky people. If you open up the newspapers in Singapore to read , there is nothing but good news and good ideas from the ruling government. There is daily hope that things will get better. Once a week someone in Singapore achieve something big - like urine power batteries. Everyone live in harmony with the government - even opposition in Singapore is nice and quiet. The government is virtually perfect based on what the Singapore news papers say and are always thinking of ways to improve the lives of Singaporeans. The govt of Singapore is always helping its people - recently to help the poor buy flats. It looks like Singaporeans are relatively problem free and happy - not much suffering in Singapore.
When I went on tour in the US for one month and when I read the newspaper, I was horrified this country has so much problems. People going on strike because they think their pay is too low. Politicians arguing with each other on various issues. Unheard of in Singapore. The newspapers is full of criticism of the govt - obviously the Leaders in America are not as smart as Singapore leaders - nobody in Singapore can find anything wrong with the PAP. The people I met in America are generally unhappy with their lives and always talking about trying to change it - they dream of becoming rich, becoming movie stars, becoming singers, becoming this on that. I even met a 40 year old guy who was attending medical school - obviously he is unhappy with his job as a waiter. Contrast that with Singaporeans who are very happy with their current existence many are happy with their current jobs and don't dream much about the future.
When I was on tour, I found out America has poor public transport system - no MRT, few public bus. Can't imagine how they get around.
Their service sector is behind us by decades, I went the hotel restaurant to eat, the first day they were very nice to me. The second day I went back, they were not nice. I was offended, so like very good Singaporean I complained. They told me I did not tip them the first day. Waliao, in America must 'tip' to get service. In Singapore, I never have to tip!
Another day I wandered onto the streets and found a man distributing anti-government material. I read it and it accused the US govt of causing the suffering in 3rd world countries. Why does the US govt allow the people to accuse it? In singapore, this person would have been arrested. Another day, a Hispanic union group gathered in a rally accusing employers of discrimination - how can they do that? don't they know that businesses if not given a free hand to do what they want will just invest in china or india. How come their union is not working with the employer to harmonise worker relations and persuade workers to accept their current conditions.
One day when I was on the "free and easy" part of my tour I found a way to take the bus around in Los Angeles. At my 3rd stop, a cyclist wanted to take the bus, the bus driver got down the bus and help him to mount his bicycle on a structure at the front of the bus. I was disgusted, how can he do that, it caused delay to every one on the bus. I'm surprised Americans put up with that. Half way through the ride, the bus got a bit crowded, just like Singapore people don't move to the rear. At the bus stop people trying to get up shouted loudly at the people in the bus to "move to the rear guys!! we can't get up!!". In Singapore, people would have kept quiet and waited for the bus driver to do something - if nothing is done they would have just waited for the next bus. This shows that Americans are impatient lot - unlike Singapore they don't trust the person in the leadership position(bus driver) to do his job.
As I move around in America, I realise the problem of older workers getting jobs is far worse than in Singapore. I saw very few old folks working in America - they are mostly sitting in parks or playing with their grand children. I guess they do that because they are unable to find a job. In contrary, we see many old folks in Singapore working as cleaners and at MacDonalds. We should be thankful that in Singapore even old folks can find jobs. In America, old folks are jobless and sitting around doing nothing. It is obvious the American government is unable to solve the structural unemployment problem and are unable to redesign jobs for older workers. These old folks in America must be suffering without an income and spending so much time in the park when they should be working.
I went to one place that was having an election for a Mayor. So much money wasted on posters and hundreds of people wasting time campaigning for the candidates. Why do they waste time doing that? In Singapore, instead of having so many people choose a president, we simply have a panel of 3 people to do it - it is cheaper and waste less energy. Obviously America has alot to learn from Singapore.
The only place I like alot is Las Vegas. Wow the bright lights and the wonderful casinos. Yes, I like it alot although one night at the jackpot machine, I don't know what happened to me and I lost $500. For some reason, I couldn't stop myself. But later when I watch the show with the pirate ship and girls in bikini, I felt much better and forgot my loss. Certainly I'll come back to Las Vegas. You can see that the Singapore govt also realise that most Singaporeans will like Las Vegas, so they are building 2 casinos in Singapore to bring Las Vegas to Singaporeans. See, how much the PAP govt cares about you, to make sure you're well entertained and have some excitement in your life.
You can see for yourself how lucky singapore is as a country. People above 60 are still able to find jobs and continue working. Everyone living in harmony happily. No time wasted on debates and protests. No confusing concepts to deal with. After my tour in America, I'm so glad that we have such a good government that put the smile on the faces of Singaporeans everyday...and when I open up the newspapers, I can see everything is fine and dandy. Our citizens are obedient and trust their leaders.
Addendum: This was written by Lucky Tan from Diary of a Singaporean Mind
Someone: hahahaha
i just turned my voice female using garageband, and it sound just like the real thing!
i'm going to have so much fun with this
Me: wah
Someone: give me something to read
i'll record and send to you
Someone: cb..
"Experimental studies have shown that placebos, as well as being particularly effective for the relief of pain and inflammation, can for example speed wound healing, boost immune responses to infection, cure angina, prevent asthma, lift depression, and even help fight cancer. Robert Buckman, a clinical oncologist and professor of medicine, concludes that “Placebos are extraordinary drugs. They seem to have some effect on almost every symptom known to mankind, and work in at least a third of patients and sometimes in up to 60%. They have no serious side-effects and cannot be given in overdose. In short they hold the prize for the most adaptable, protean, effective, safe and cheap drugs in the world’s pharmacopoeia.” Likewise, another medical authority, quoted in a recent review in the British Medical Journal, dubs placebos “the most effective medication known to science, subjected to more clinical trials than any other medicament yet nearly always doing better than anticipated. The range of susceptible conditions appears to be limitless...
My view is this. The human capacity for responding to placebos is in fact not necessarily adaptive in its own right (indeed it can sometimes even be maladaptive). Instead, this capacity is an emergent property of something else that is genuinely adaptive: namely, a specially designed procedure for “economic resource management” that is, I believe, one of the key features of the “natural health-care service” which has evolved in ourselves and other animals to help us deal throughout our lives with repeated bouts of sickness, injury, and other threats to our well-being."
Gloriously jargon-free and easy to follow, the author makes a prima facie case for the evolution of our favourable response to placebos, though the paper isn't concluded as well as it could have been, especially since there's no restatement and summation of the thesis (heh heh).
The Belousov-Zhabotinsky Reaction
"While other disciplines of science explored the periodic -- physicists with their pendulums, biologists with circadian rhythms, and mathematicians with sinusoidal waves -- chemistry, until recently, was bereft of this study. Although there had long been evidence that the rate of some reactions changed repeatedly, many chemistry luminaries thought it would be contrary to the Second Law of Thermodynamics for a chemical reaction to oscillate. However, applying the concepts equilibrium thermodynamics to non-equilibrium systems proved erroneous.
Yet this thinking so held the day that when Boris P. Belousov, director of the Institute of Biophysics in the Soviet Union, submitted a paper to a scientific journal purporting to have discovered an oscillating chemical reaction in 1951, it was roundly rejected with a critical note from the editor that it was clearly impossible. His confidence in its impossibility was such that even though the paper was accompanied by the relatively simple procedure for performing the reaction, he could not be troubled. Arthur C. Clarke best captured the spirit of this folly with Clarke's First Law: "When a distinguished but elderly scientist states that something is possible he is almost certainly right. When he states that something is impossible, he is very probably wrong.""
Economics and Physics Envy
"The discipline of economics had gone the route of many social sciences. They had contracted "Physics Envy." That's the disease that gets you thinking that the only way to be respectably scientific is to do things the way the physical sciences, especially physics, do them.
As Jared Diamond has pointed out in his magnificent book, Guns, Germs, and Steel, social sciences cannot use the same experimental methodology that physical sciences use. Social sciences deal with human systems which are messy in the extreme and often not susceptible to double-blind, controlled experiments.
Diamond suggests several ways that social sciences can be scientific without succumbing to Physics Envy. We might apply them to economics, except that economics is not a science at all. Instead, it's philosophy, using mathematics to dress itself in the clothes of science.
Today's economics either regurgitates the obvious with a few equations thrown in or falls back on reasoning to replace experiment. Also with a few equations thrown in."
Gene Expression: Physics Envy
"In my previous post on this subject, I asserted the non-applicability of higher mathematics to economic analysis, arguing that true functions (in the mathematical sense) are missing from all economic relationships.
The root of the problem lies in the belief, held by academic economists, that deep down and in some mysterious way -- maybe only statistically -- the laws of supply and demand are like the laws of physics -- as, for example, the laws governing the attraction and repulsion of electrons and protons. But consider:
In physics, the law which describes the inverse relation between distance and force between charges (or masses) is not just a rough approximation, qualitative description, or statistical generalization. Rather, it is an extremely precise description, to roughly 20 decimal places of significance, in which measurement error plays a very small part... physicists will be the first to admit that even the most powerful mathematical machinery they are able to bring to bear on a problem can deal successfully with only the very simplest situations, beyond which their equations are useless. Thus, for example, their equations can be solved for the two body problem but not the three body problem in Newtonian mechanics; they can solve the Schrödinger equation when there is only one proton and one electron interacting, but not when there are even two protons and two electrons, let alone anything more complicated than that.
Furthermore, on those occasions when physicists do make complex predictions -- such as that nuclear fission would occur en masse, before the first atom bomb was tested (to choose an historical example) -- they do so with caution, double-checking all their calculations, and hoping that they haven't overlooked something, or might accidentally set the atmosphere on fire."
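The two-body/three-body contrast is easy to make concrete: with three bodies there is no closed-form solution to fall back on, so one integrates the equations of motion numerically. A sketch (G = 1, arbitrary units; these particular initial conditions happen to trace the well-known "figure-eight" orbit):

# Illustrative sketch: numerically integrating a planar three-body problem.
# No closed-form solution exists, unlike the two-body (Kepler) case.
import numpy as np

m = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.97000436, -0.24308753],
                [-0.97000436, 0.24308753],
                [0.0, 0.0]])
vel = np.array([[0.46620368, 0.43236573],
                [0.46620368, 0.43236573],
                [-0.93240737, -0.86473146]])

def accel(pos):
    a = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                a[i] += m[j] * r / np.linalg.norm(r) ** 3
    return a

dt = 1e-3
for _ in range(10000):  # leapfrog (kick-drift-kick) integration to t = 10
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)
print(pos)  # generic initial conditions would wander chaotically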
"Household tasks are easier and quicker when they are done by somebody else." - James Thorpe
A question I just asked on Young Republic. Replies here are equally welcome.
I'm sure everyone knows what's happening in New Orleans now.
What do you think would happen if something similar happened to/in Singapore? Would we see parang-wielding mobs looting the CPF building as the Merlion was toppled by drunken breakdancers? Or would decades of social engineering kick in and ensure that, adhering to our Asian Values (TM), everyone would help everyone else, regardless of race, language or religion, and we would all pull through, ending up as a stronger country ever more ready to cut wages to be competitive in the face of global trends?
For the benefit of those on the RSS feed:
New blog picture
Friday, September 02, 2005
Overheard about "ASPIRE 2005" (ASIA PACIFIC STUDENT LEARDERSHIP [sic] WORKSHOP 2005), an international conference organised by the University of Malaya with participants and speakers from all over the Asia-Pacific:
"got e key to go to our room
that's where e shock came
e room sucked to e core
e whole hostel smelled like a zoo(due to stray cats' peeing in e hall...e cats could get to e 6th floor--where we stayed...to quote a hongkong fren...e smell was like elephants' poo)
e whole place was dirty(surprising since UM invited international students and it's pretty obvious that not much of cleaning has been done to e hall)
i had to share my room with someone (who was quite a nice person)
but we only had 1 key among ourselves
and upon opening the door
it was another big shock
e room was dirty
with no tiles(ie cemented floor)
e mattress was thin like a sheet of paper(ok la..not that thin...but it's thin enuf for me to feel e BED itself when i slept on it)
no blanket was given(later was given a bedsheet to be used as blanket---e v thin n rough kind)
there was no lan point or phone line in e room
there was no wireless internet access either
we left our room to go join some icebreakers conducted in e hall(and since we were already v late...we were quite extra)
started next day having quite a lousy breakfast(everyth was v sweet...this was e prelude to e rest of e meals provided by UM) On later days, they told us to come for breakfast at 7am and knocked on our doors at 7 in e morning cos they think we oversleep, but when we went to e canteen no one was there and e caterer only came at 8am.
had e 1st dialogue which was titled “China’s Economic Overheat & Its Impact on the Asia Pacific”.
e scheduled speakers were ppl like Dr. Mahathir Mohammad and Mr. Zhou Xiaochuan (Governor of People's Bank of China)
but e speakers that turned up was ASEAN secretary-general(not too bad) and manager of the central bank
this is quite a disappointment cos basically e panelist has been "downgraded" drastically...
i have looked forward to Mahathir and e governor of central bank speaking cos it's not like everyday u get to meet such bigshots and listen to them talk
this is followed by a screwed-up discussion(participants were split into grps to discuss e issues raised and to present our findings on e 3rd day) moderated by a student from malaysia(UPSI=teaching college)
he was dominating e whole discussion and making nonsensical deductions
in e end he got reprimanded by someone in e grp for his gibberish...which made him shut up and everyone else v happy
then we had second dialogue(Kyoto Protocal – It’s role and the Mechanism in Achieving it.) and again none of e scheduled speakers came
2 malaysian meteorologists came
e talk was pretty boring but not as bad as e 3rd one which was to come e next day
had opening dinner at night
reached e hotel at abt 7 plus but dinner only commenced at 9,30pm
their guest-of-honour(education minister of malaysia) was so late
and e whole dinner transformed into a pre-celebration of e national day of malaysia
e national anthem of malaysia and UM's anthem were played
numerous speeches were delivered before we could tuck in
and after e dinner
ppl started singing community songs of malaysia and e whole dinner turned into someth like a pseudo-pre-celebration of e national day
e malaysians felt very high after singing i think
but it was v weird for them to do such things since there were so many international students around
shouldn't e focus of e dinner be on students and e workshop(and not malaysia itself???)
reached hostel at abt 1am
went to shower
and e bathroom sucked to e core again
there was no showerhead(jus a pipe sticking out from e wall)
no heater
no toilet roll(on e 1st day of e workshop...was provided later)
and e toilets did not smell too nice
u can imagine bathing at 1am without heater
it was freezing cold and e water jus gushed onto ppl who are showering cos there was no showerhead
we showered everyday at 1-2 am cos we ended like that everyday
it's mad
next day had e 3rd dialogue(The Significance of Cooperation Amongst Asia Pacific Universities in Moulding Quality Graduates for a Globalize World)which was crappy to e limit
2 deans from UM spoke cos none of e scheduled speakers came again
e speakers were unprepared and sounded more like promoting ppl to go n study in UM than to assert any opinion for e topic matter
then we had to prepare for presentation
which happened to be e time in which ppl from my grp found out wad we have done thus far was pretty much nonsense
i got shoved/forced/coerced to presenting e 1st qn
and someone else was forced to present e 2nd qn
e moderator guy volunteered to present e 3rd qn
presentation started at 9pm and ended at 12,30pm
this was e 1st time in my life i did a presentation at 12 midnight
had to greet my audience good morning
i think there's someth drastically/awfully/totally wrong with e way e programs have been scheduled
we shldn't end every day at 1 am plus
it's crazy considering we r supposed to wake up and have breakfast by 8am
e next day i went to kampong lonek(a kampong in negeri smebilan)
spent e whole morning on e bus..reached e destination at 12 plus in e afternoon and we were again shoved/forced/coerced to walk under e sun to take a walk ard e village
e trip turned out to be pretty ok except that it was v hot and e food wasn't v gd
but e non-muslims had to wait for e muslims twice cos they had to pray and we sat ard doing nothing much
i dun think muslims from other countries pray that much,do they?
we had dinner at some R&R station(e pitstops for ppl driving along north-south highway..and obviously good food was impossible to get there)
in e end i skipped dinner and jus drank ice blended chocolate at this pirated starbucks called Shalala cafe
apparently they had nothing to sell except nasi lemak and ice blended chocolate
dunno how ppl do business like that
it was at this time when 4 of us from e same bus got together and started complaining v loudly in e cafe(all 4 of us found nothing that was palatable and jus kept ourselves occupied by complaining non-stop)
ppl from UM heard us
e crew in e cafe were staring at us
but who cares
this was when we formed this grp to rebel against e UM ppl
e working attitude of e crew at Shalala was exactly e same as e ppl from UM(specifically e malays in UM...e chineses in UM were quite kind to us...i dun understand this racist behavior of e UM ppl...if they wan to organise an event like this...no point thinking along racial lines...it shows how narrow-minded you are and it reflects badly on e uni,e country and everyth else)
anyways we then carried on with e bus journey at abt 7 plus and reached putrajaya at 12 midnight(amazing...this is e time ppl visit night safari i think..not for phototaking at e administrative capital)
on e way we were busy complaining loudly on e "ill-treatment" of participants
throughout e whole journey we were treated to Siti Nurhaliza's(famous malaysian singer) MTV
not that many of e ppl on e bus could understand
only e 2 or 3 malays sitting in front wanted to watch that lousy vcd(which had prob playing after being played continuously for many times throughout out journey..started skipping and in e end was turned off--good riddance)
reached putrajaya and had a super-unbelievable experience
was shoved from places to places for photo-taking but there was nothing much to take actually
cos it was too dark and we were too far away from e buildings(roads were sealed for their national day celebration)
in e end we left putrajaya at 1 plus and arrived at UM at 2am(fell asleep on e way)
had KLCC tour on tues
it was better than previous days although it was also v rush
got to have 1.5 pathetic hours to shop at petronas towers
in e end grabbed a top and a jacket without half an hr cos e 1st hr was spent on eating(i had to buy clothes for e closing dinner--din bring enuf clothes cos i though there would be enuf time for me to shop around..according to e original schedule...but by now everyone reading this shld have realised nothing went according to e original plan...i wished i could sue them for deceiving the participants...we were drastically misled)
went to some handicraft centre which no one showed much interest(and we spent more time there then KLCC...we were complaining loudly abt lack of shopping time)
and went to KL tower after that(someth i dreaded cos i have been there twice before just recently)
my 3rd visit was spent on oogling at ppl in swimming pools near e KL tower using e binoculars in e tower..anyways i din start this oogling..forgot who it was but many ppl were doing e same thing... i guess e 5 days of torture so far had turned everyone more pervertic..haha
then returned to UM for closing dinner
e food sucked
i took one mouthful of veg and puked cos it was strong in pesticides smell
din continue eating
anyways i found myself feeling full all e time when i was in KL
i think e food couldn't whet my appetite
took many pics with e other participants cos most ppl were in traditional costumes
and then went out for supper at chinatown area
watched fireworks(released to celebrate national day)
and then returned to UM
e last day in KL was spent shopping
but i could only get a wallet cos we were stuck in e jam for a very long(traffic jam due to high traffic in KL...as it was a public holiday)
then rushed our way back to UM
got out luggage and took a limo back to KLIA
... hereby i conclude my report on my KL tour
all in all it sucked
to quote my fellow participants
"you get what u pay..paying 20USD for 5 nights of accommodation and food means u get 20USD worth of treatrment"
"aspire is totally bullshit"
"aspire sucks"
but someth good came out of it as well
made friends with many ppl cos we bonded together as we braved e bardship
... i felt like i was a POW when i was in KL
no words can describe how glad i am to be back"
Ma-laysia boleh!
A sad tale of the parochialism that often afflicts Singaporean Chinese and Malaysians: assuming proficiency in a language even when it is obvious none exists, and refusing to converse in a language both parties understand: yax-471 - Here, the first thing people register is your race
"2 pm at Han's diner in the basement of Park Mall. The lunch crowd had not quite dissipated and the staff were still busy. A couple walked in and moved to find their own table, discussing between themselves, in English, whether to take the one on the left or another one further in.
From the guy's accent, I figured he was an American-born Chinese (often acronymed as ABC). There are an increasing number of them in Singapore as we draw more and more professionals from all parts of the world.
The woman seemed to be a Singapore-born Indian or of mixed parentage. Her accent was identifiably Singaporean.
Just as they had decided on the table to their left and were about to sit down, a waitress went up to them.
"Nimen shi liang wei, shi ma?" the server said to the guy. She ignored the woman companion.
"I'm sorry?" ABC said.
"Nimen shi liang wei, zhuo nabien." The waitress pointed to a smaller table with two seats. The table the couple preferred had four.
By her gesture, ABC could guess that she wanted them to sit at the smaller table. "This table is taken?" he asked in his unmistakeable accent.
"Zhuo nabien; nimen shi liang wei."
The couple got up and walked out.
The waitress didn't seem to care. She then turned to another customer, a silver-haired Caucasian man who had been gesturing for service for some time.
"Order already?" she asked him. She’s evidently able to speak English."
Hackstadt.com - Exploding Whale
"You have reached the definitive Exploding Whale website on the Internet!"
They have the original video! *throws confetti*
Thursday, September 01, 2005
"My favorite animal is steak." - Fran Lebowitz
Random Playlist Song: Brahms - Symphony No. 4 in e minor, Op. 98 - III. Allegro giocoso
More random stuff that lands up in my mailbox:
"asia terror threat
we take seriously french anti-terrorism magistrate jean-louis bruguière's claims that al Qaeda is preparing an attack on an asian financial center. french counterterrorist sources tend to publicly reveal such threats only when their information is particularly well developed and when available countermeasures to prevent the attack and capture the attackers are considered insufficient. there's little use in speculating on the timing of such an attack (other than the obvious 9/11, which would coincide with japan's snap elections), and both the targeting of financial centers and of countries which support the united states are consistent with broader al qaeda strategy. additionally, it is worth noting that the cities explicitly targeted were sydney, singapore, and tokyo--there's no desire on the part of al qaeda leaders to target china (hong kong or the mainland) and to broaden international opposition to their terrorist movement accordingly.
having said that, it is worth reiterating that al qaeda as a broad movement remains considerably less capable of marshalling global resources in a coordinated way than it was prior to the september 11 attacks, the result of a coordinated focus on known and suspected leaders and financial channels. to the extent that there's an exception to this diminution of capacity, it would be in east asia. counterterrorist efforts there are clearly weaker, but the domestic groups that directly aid (or at least help conceal) terrorist cells are also far less in evidence."
"Singapore is a dream for every public planner :) It's especially easy to impose e.g. a gas tax, which would require a long and nationwide political process in other countries."
Vegetarian furore as Gandhi is used to promote eggs - "Devotees of Mahatma Gandhi, a vegetarian to the point of neurosis, have been engaged in a furious row after the Father of the Indian Nation was chosen as a "brand ambassador" for eggs - a food he never ate... The leaflet quotes from a 1948 article by Gandhi entitled Key to Health in which he challenges the received wisdom among India's strict Brahmins that eggs are "flesh food" and not to be eaten. "In reality, they are not," Gandhi wrote. "Nowadays sterile eggs are also produced. The hen is not allowed to see the cock and yet it lays eggs. A sterile egg never develops into a chick. Therefore, he who can take milk should have no objection to taking sterile eggs.""
This all smacks of a religious dispute.
The Onion | Evangelical Scientists Refute Gravity With New 'Intelligent Falling' Theory - ""Closed-minded gravitists cannot find a way to make Einstein's general relativity match up with the subatomic quantum world," said Dr. Ellen Carson, a leading Intelligent Falling expert known for her work with the Kansan Youth Ministry. "They've been trying to do it for the better part of a century now, and despite all their empirical observation and carefully compiled data, they still don't know how." "Traditional scientists admit that they cannot explain how gravitation is supposed to work," Carson said. "What the gravity-agenda scientists need to realize is that 'gravity waves' and 'gravitons' are just secular words for 'God can do whatever He wants.'""
Curiosities from Japan's porno shops. - "As everyone is well aware, Japan is absolutely brimming with bizarre shit, particularly when it comes to adult material. Tentacle rape, bestiality, people shitting on each other... They've got it all. So when I stumbled upon a seven-floor adult superstore, I knew I was going to walk out with some amazingly weird stuff. First, though, there's plenty of pervasive material available right out on the street, before you even make it into a porno store. For example, these delicious-looking treats I found at a market - "Yokohama Bust Pudding""
Church: God Punishing GIs Over Gays - "Members of a church say God is punishing American soldiers for defending a country that harbors gays, and they brought their anti-gay message to the funerals Saturday of two Tennessee soldiers killed in Iraq. The church members were met with scorn from local residents. They chased the church members' cars down a highway, waving flags and screaming "God bless America.""
With all the links on alternate sexuality I post, I wonder if people think I'm gay. Hmm...
Photo Gallery (Tube Stake) - "Claim: Photograph shows bulletin warning London Underground travelers not to run on the platforms or concourses. Status: False... A Customer Service Advisor for the Central line confirmed for us that the bulletin shown in the image was not one actually posted by Transport for London ("I'm pleased to tell you that it's a hoax. Somebody has a very strange sense of humour."). It appears to be a digitally altered version of a genuine photograph of a bulletin which was displayed in a photo gallery on the BBC's web site."
I knew something was up with that picture. Good ole Snopes.
Wednesday, August 31, 2005
Why Men Can't Win
If you put a woman on a pedestal and try to protect her from the rat race, you're a male chauvinist. If you stay home and do the housework, you're a pansy.
If you work too hard, there is never any time for her. If you don't work enough, you're a good-for-nothing bum.
If she has a boring repetitive job with low pay, this is exploitation. If you have a boring repetitive job with low pay, you should get off your ass and find something better.
If you get a promotion ahead of her, that is favoritism. If she gets a job ahead of you, it's equal opportunity.
If you mention how nice she looks, it's sexual harassment. If you keep quiet, it's male indifference.
If you cry, you're a wimp. If you don't, you're an insensitive bastard.
If you make a decision without consulting her, you're a chauvinist. If she makes a decision without consulting you, she's a liberated woman.
If you ask her to do something she doesn't enjoy, that's domination. If she asks you, it's a favor.
If you appreciate the female form and frilly underwear, you're a pervert. If you don't, you're a fag.
If you like a woman to shave her legs and keep in shape, you're a sexist pig. If you don't, you're unromantic.
If you try to keep yourself in shape, you're vain. If you don't, you're a slob.
If you buy her flowers, you're after something. If you don't, you're not thoughtful.
If you're proud of your achievements, you're up on yourself. If you're not, you're not ambitious.
If you're totally beat after a hard day, you don't give a damn about other people's needs. If she's totally beat after a hard day, she's tired.
If you want it too often, you're oversexed. If you don't, there must be "someone else".
ketsugi informs me that wiser minds have obviated the need for me to conceptualise an iNothing advertisement campaign with iProduct:
"Announcing the Apple iProduct.
"I buy Apple products. It just makes me feel special." - Joan M'Benga, ethnic looking clip-art model
Apple iProduct. You'll buy it. And you'll like it.
Do you like Apple products? Do you live for every product announcement, every incremental upgrade, every rumor and fake screenshot? Do you wank and blare and drone and fucking gurgle about Apple products morning, noon, and night? Then get ready for iProduct. You'll be blown away. No matter what it is.
The power to buy anything — and feel good about it.
What is it?
We're not saying yet. But we know that won't stop you. Post at length about it on every message board you have access to. Come up with fake product photos and post them, too. Start rumors or deny them. Compare it with existing products, even though you don't know what you're comparing them to. With Apple products, rampant, fruitless speculation is easy and fun.
When can I get it?
Relax, hipster, we'll tell you when it's ready. And you'll tell everybody else. Whether they care or not. You'll clog every blog, forum, and message board in the known universe with product photos, testimonials, and praise for Apple. And the complaints and insults you receive are just proof that you're right.
How much will it cost?
Like you care. As you already know, it'll be twice as expensive as other companies' products with comparable features. But that doesn't matter, does it? No matter how much it costs, you'll feel special because you've bought an Apple product. If you forget how special you are, just look at your credit card statement.
Apple iProduct.
Your life. In a small, shiny, plastic case.
Also see: get an (i)Life
I'd also add something about including few features (because users can't deal with them), and the slogan: "Think different. Don't buy an iProduct."
There's a rebuttal, but of course I don't think it's even half as funny as the original.
I just watched March of the Penguins. It was terrible.
Too long. Too draggy (the two are not the same). Too anthropomorphic. Too much French existentialism. Too much of the 7-year-old French girl singing trite lyrics which don't match the music. Too much trance/New Age music (I bet they want to sell their OST).
You come out of the cinema hall knowing very little more about the Emperor Penguins than when you first entered. "National Geographic for idiots", as my sister pronounced it. No wonder it's so popular in the USA.
The cinematography wasn't bad, but it's nothing you can't find in your run-of-the-mill nature documentaries. All in all, it'd have been better if they'd wiped the audio track, made many judicious cuts and got David Attenborough in to do the narration.
(Then again, watching the trailer for the English version narrated by Morgan Freeman on Apple.com, devoid of the improbable soliloquies of French-speaking penguins, with a different and less annoying soundtrack and produced by National Geographic films, you'd think it was for a totally different show. And so perhaps it is.)
Tuesday, August 30, 2005
IPI's 2004 World Press Freedom Review
""Singapore has become as rich as it is because it has a strong rule of law," Backman argues. "The rule of law requires that laws be written down, that they are precise and that they are gazetted." Such vague guidelines on what can and cannot be discussed contradicts this commitment."
It was once argued to me that having OB markers is good because they allow for flexibility. Ignoring the question of whether we need OB markers in the first place, I suppose then that abolishing all our laws is also good because that too allows flexibility. We know that the wise ones will make the right decisions in the end, after all.
"Participating States will respect human rights and fundamental freedoms, including the freedom of thought, conscience, religion or belief, for all without distinction as to race, sex, language or religion." - Extract from the Helsinki Agreement, 1975, signed by the USSR
"In conformity with the interests of the working people, and in order to strengthen the socialist system, the citizens of the USSR are guaranteed by law:
a) freedom of speech;
b) freedom of the press;
c) freedom of assembly, including the holding of mass meetings;
d) freedom of street processions and demonstrations.
These civil rights are ensured by placing at the disposal of the working people and their organizations printing presses, stocks of paper, public buildings, the streets, communications facilities and other material requisites for the exercise of these rights." - Article 125 of the 1936 Constitution of the USSR
Rhetoric is all well and good, but if nothing changes except allowing a bastardised form of bartop-dancing, it's not worth the paper it's written on.
"Fallen heroes do not have children. If self-sacrifice results in fewer descendants, the genes that allow heroes to be created can be expected to disappear gradually from the population." - E.O. Wilson
Random Playlist Song: Raffi - Bananaphone
If only more textbooks read like Varian (or if only he put more of this sort of thing in):
"Some tedious algebra shows that *long mathematical expression*
(Don't worry, this formula won't be on the final exam.)"
"In general, the firm faces two sorts of constraints on its actions. Frst, it faces the technological constraints summarized by the production function. There are only certain feasible combinations of inputs and outputs, and even the most profit-hungry firm has to respect the realities of the physical world."
This must be the most biting Economist editorial I've ever read:
[On the Uzbekistan massacre] "The European Union and America have expressed their horror at the worst massacre of demonstrators since Tiananmen Square by imposing the following sanctions on Uzbekistan:
Ooh, nasty. And there's a whole page of letters on ""Intelligent" Design", so the Bohemian Bunnie, who is saddened by the lack of Creationist bashing this term, can blow herself off.
"Dear students,
Ever wondered about the work of the Internal Security Department (ISD)? Like to know more about Singapore’s counter-terrorism efforts?
Come to the ISD Heritage Centre cum Counter Terrorism Mobile Exhibitions:
Date: 9 - 13 September 2005
Time: 10am – 10pm daily
Venue: Gek Poh Ville Community Club
Jointly organized by the ISD, South West CDC, Hong Kah North CCC, Gek Poh Ville Community Club MC and Home Team (Southern & Western Sector), the ISD Heritage Centre Mobile Exhibition gives an insight into the work of the ISD and security threats posed by communal/religious extremism/international terrorism to Singapore.
The Counter Terrorism Exhibition features latest technology/equipment in the fight against terrorism, information on staying vigilant and updates on terrorism trends. What’s more, there will be interesting fringe events like films, talks, skits as well as online games and quizzes to excite you."
Hahahahahahaha. If I join the ISD I can go work for the Straits Times after that.
Someone on Bastiat's broken window fallacy: "Many economists and economic journalists do advocate "broken window economics" without making it explicit that that's what they're doing. It happens because in order to avoid these errors you need a theory of value, and a workable theory of value can't be derived mathematically or obtained empirically; it requires philosophical investigation into normative (rather than positive) claims, something that economists are generally loath to do or explicitly say it isn't their business to do."
I saw a weird fixture on the wall of LT11. It was pointed at me and a red light was shining. As I stared at it, the red light suddenly switched off. During the break, I went down and it appeared to be a camera, with the words "3 CCD" engraved on it. If they're going to spy on us, they might as well webcast the lectures while they're at it! Aside: Someone asked me why the RJC staff room needed a fingerprint scanner. It's the same reason why we have CCTV cameras in NUS LTs and in Geylang! Big Brother is watching.
It's hard to quote people when you don't even know what they're supposed to be saying in the first place.
[Female student:] If you could make out with someone famous, who would it be? [Male student: A lot {of people}.]
If you woke up tomorrow and you were a guy, what would you do? [Tutor: That's a question all of us would've been asked once in our lives.] [Student: I would check out myself in the mirror.]
[Student to a guy: If you were shipwrecked on a desert island with another female, what would you do?] Come on, we know what he'd be doing... Play chess maybe.
I am what you would call a Submarine Catholic. I surface when I'm in trouble.
Suppose your parents couldn't swim. Both of them fell into the water, and you could only save one. Which would you save? [Student: I wouldn't save either... I'm in this phase - hating my parents]
What you learn in University: never believe course descriptions.
[Tudung girl:] I am a Muslim - obviously.
My name is ***. I'm a life science major and I don't know which year I'm in.
I'm passionate about flerms. All sort of flerms... Hollywood flerms (films)
[Student on someone else having no one to ask a stupid question: Why don't you ask him?] I'm the tutor. I'm exempted!... It's okay, it's okay [I'll answer].
As far as possible, I won't try to burden you. I won't make you write a lot of things, do little projects. If you want to we can do that! [Students: Noo...] ***, give me more work!
[On Tylor's 'Primitive Culture'] It's quite fat. It's big. Very concise.
[On rousing my wrath] I know all your buttons already. Just display extreme stupidity. [Me: Then you just debase yourself.] I don't mind.
[On a picture of a caveman and Dubya side by side] I don't want people to think that I'm comparing the mental capacity of the leader of the United States and that of our friend Fred.
[On 'Intelligent' Design] We can propose 'intelligent attraction'. Why do things fall from a height? Gravity is just a theory.
The question I want to ask is that who was on top? Not in that way [presumably sexually] but evolutionarily.
central limit theory (theorem)
We have to tran'fer a little bit. (transfer)
[On p-values] If you really don't understand what I say, you can just memorise this... It still work. (works)
As a statistician or an economist you are just a consultant. You do not make the decisions.
[On choosing the null hypothesis] Just like OJ Simpson case right. We believe he is innocent, then gather evidence to show he is guilty. (Simpson's)
[On the t-statistic] You tell your boss: Sample average must be greater than C. He won't be happy. What is C? You will get fired.
[On student unhappiness and apathy] If students are unhappy, the reputation of NUS goes down. If the reputation of NUS goes down, the value of your degree goes down the drain.
A lot of students are very radical. If NUSSU were to lead a protest march up Kent Ridge Crescent, I think a lot of them would come.
[On NUSSU positions] I'm so tired of hearing Engineers. I believe Arts students are superior, because of the modules you take.
[Me: When I smile, I look like a mad man.] If you don't laugh hysterically, then you look alright.
[On a pay as you go pensions system] A lot of European countries have something similar to this, which is why they are all going bankrupt.
[To me] I like it when you correct my English. Reminds me of my RGS days.
At the same tair'm (time)
Both kin be better off (can)
this cree'tea'ria (criteria)
I think we are agreed (we'll have a break)
compee'tive market (competitive)
the grim lance (green lines)
We reach a general equi'br'erm (equilibrium)
pass through the original de'mer'n (endowment)
parting through this black point (passing, blue)
p'rare'der'tore optimal (pareto)
total enrolment (endowment)
the re'sh'you of the prices (issue)
Well russ law (Walras')
Monday, August 29, 2005
Joke is on religion as Christians laugh at themselves
"Religious jokes will be told to hundreds of Christians today in an attempt to determine whether they would fall foul of the Government’s religious hatred legislation. In “The Laugh Judgment” competition, more than 4,000 people voted on 700 religious jokes sent in to the satirical Christian website ShipofFools... Some of the jokes were so offensive that they do not bear reproduction. One of the worst, a masturbation joke about Jesus, so upset the Church in Denmark, where it was first told, that religious leaders raised money to send the comic responsible to Israel to educate him. He gave the money to charity."
Ship of Fools: The Laugh Judgment
"Heard the (banned ) joke about the Rabbi, the Priest and the Imam? While the UK plans to outlaw the vilification of religion, we launch a serious search for the funniest, and potentially most offensive, religious joke ever – while there's still time.
In the garden of Eden lay Adam
Complacently stroking his madam,
And loud was his mirth
For he knew that on earth
There were only two – and he had 'em.
We are not told, in this fragment of early tradition, how the Lord responded to Adam's merriment – a shame, because it might have shed some light on the eternally-funny relationship between religion and humour. Ever since then, they have had a complex and controversial partnership.
Admittedly, the limerick above may not be the most offensive religious joke you have ever heard. Of course, it brings the Bible and breasts dangerously close together – but then so does the Song of Solomon...
Ridiculing religious beliefs, criticising religious practices and offending religious people is surely a mission from God. Not in all cases, necessarily, but certainly in some. It's not a freedom so much as a responsibility.
Ship of Fools has never had much interest in mocking other religions. If truth be told, we're a bit blinkered and don't know enough about them. But to mock the excesses of our own, that's what we were put on earth to do. When Christianity gets dangerous, irrational, nasty, or just plain nuts, then insulting and abusing it is not just a pleasure, but more a profound calling.
The really confusing thing about the proposed law is that if "material that is threatening, abusive or insulting" to religious groups is outlawed, then both the Bible and the Qu'ran will be technically illegal. Both denounce false, wicked and foolish religion in the strongest terms, and have proved only too capable of stirring up religious hatred. The law that is supposed to protect religion could make it criminal."
At least some people out there have a sense of humour. Most of the jokes submitted aren't offensive at all, but a few are really biting.
On Thursday I attended a CORS bitch session organised by the Arts faculty administration. Apparently the Arts Club had heard about the online petition and had notified the faculty administration, which then organised the session. So despite nary a word coming from its collective lips in public, it was working behind the scenes. Be that as it may, there wasn't (and still hasn't been) any word from the other faculties or indeed the University Administration in general about the CORS cockup, so even if they are working behind the scenes to solve the issue, students (NUS's customers, after all) are not being placated and are just becoming ever more disenchanted and apathetic.
The email about the bitch session had gone out to 5,000 Arts students, but only 5 normal students turned up (plus 2 representatives from NUSSU and 1 from the Arts Club): a miserable response rate, which I theorised was due to several factors:
- The bitch session being held at 10am, some people had classes.
- Those who didn't have classes didn't feel like waking up earlier to come to school for it. It would've been better to hold it in the afternoon or evening, since people would rather stay back in school than come to school earlier.
- Those for whom Thursday was a free day didn't care to sacrifice it to come back, not being as bo liao as me.
- The vast majority didn't care, module and tutorial allocation being over already. Either that or they thought that "No one cases (sic). It's just a cover up session offering lame excuses." or "u have too much time man, most pple dont give 2 hoots abt it"
The minutes of the session are supposed to be emailed to all Arts students, but some excerpts follow, with personal comments.
- NUS already rents many more servers during the CORS period. So in a sense it's already outsourced. However there's a minimum duration you can rent them for, and renting servers costs money. After a certain point you get diminishing returns, and spending more money on servers isn't feasible, unless school fees go up. Again.
NTU has a 'fastest fingers first' system, which invariably requires a lot of servers. Yet they seem to be able to cope, at least hardware/bandwidth-wise. I still believe outsourcing bidding to eBay/Yahoo Auctions is a good idea because, if nothing else, they have more servers worldwide to act as buffers in case of high traffic.
- The first method of allocating modules NUS had was going to the lecturers personally to get them to sign on your forms. Consequently, people camped in school overnight. The next system tried was balloting, but what would happen was that people balloted for modules which were hard to get, rather than those they wanted to do. They would reason that they'd be able to ballot for the unpopular modules later, and would surely be able to get them. Some people would get 8 modules, and some 3 (or 1, or none), and have to run all over the school in the process too. The whole system also suffered from a lack of transparency.
Personally, I feel that CORS is preferable. Human nature is perverse, and humans are irrational. People don't like bidding for what they perceive as their birthright or due as customers; they would rather queue for NDP tickets than bid for them, for example. I've no information on how much students bitched about the previous two systems, though. Perhaps their perceived democratic and/or fair natures satisfied students. Or perhaps the Internet lubricates information flow and makes complaints more likely to be heard.
In any case it seems other universities manage to allocate modules in a way more acceptable to students than NUS can. Cornell allocates modules first by seniority, then on a first-come-first-served basis. Perhaps they have smaller class sizes and a better faculty:student ratio, so they can afford to be more flexible in increasing the enrollment of popular modules. Furthermore, students don't mind the seniority factor because they know that one day it will be their turn to be favoured for modules.
- CORS usually only functions during office hours because of the hours technicians keep. However, it is planned that CORS will keep longer hours in future, even if at the expense of the technicians.
If you want to go high-tech, you must go all the way. Maybe the technicians can get more vacation days in exchange.
- I suggested that a University-wide module preference exercise be conducted so resources can be better allocated; right now only Arts conducts it, and only for Arts modules in Semester 2, and core major modules in Semester 1. Currently fewer than 50% of Arts students participate in the exercise, but the faculty cannot reward participation or punish non-participation with bidding points (as with the current Module Feedback exercise). However, apparently the timetable for Semester 1 cannot come out so early, and other faculties don't want to or cannot carry out such an exercise, so implementing this project on a University-wide basis is unfeasible, especially since Arts was the one mooting CORS in the first place (due to its large number and variety of modules and flexible curriculum structure).
- There is a limit to how big class sizes of popular classes can be so teaching quality is not compromised. Fire safety regulations and a limited number of class/exam venues also limit class sizes.
A variety of modules also has to be offered: we can't only have a small handful of large, popular modules. Psychology is actively recruiting to solve their staff shortage. Apparently Economics used to have the same problem, but now we have recruited much foreign talent.
- I suggested an ERP-like tax on late bidders, graduated according to how late they were. eg Those who bid in the last 10 minutes pay a 100 point premium, those who bid in the last 30 minutes pay a 50 point premium and those who bid in the last hour pay a 25 point premium. This will enable us to do away with the ridiculous close (sic) bidding, which is in the first place antithetical to the supposed aims of CORS - unless it's supposed to train us in the finer arts of gambling.
- The purpose of having many rounds of bidding is to protect first major students then faculty students. Yet sometimes you can get a module for less in later rounds. I suggested having a rebate - if a module goes for less in future rounds you get refunded the difference, but was told that by that time the student would have gotten other modules and the points would just roll over to the next semester, and he'd be able to mow other people down. The solution, I was told, was better planning of module quotas by departments.
It still smacks too much of central planning to me. I still suspect that the market could do a better job, if only it was liberated. Oh well. Possible ISM/Thesis: The Economics of CORS.
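For concreteness, here is a toy sketch of the two proposals above, the graduated late-bid premium and the later-round rebate (entirely hypothetical -- this is not how CORS works, just the mechanism I have in mind):

# Hypothetical sketch of a graduated late-bidding premium and a
# later-round rebate; not CORS's actual logic.
def late_premium(minutes_before_close):
    if minutes_before_close <= 10:
        return 100
    if minutes_before_close <= 30:
        return 50
    if minutes_before_close <= 60:
        return 25
    return 0

def rebate(points_paid, later_round_price):
    # Refund the difference if the same module clears for less later on.
    return max(0, points_paid - later_round_price)

print(late_premium(5), rebate(800, 650))  # -> 100 150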
- The reason for the downtime during tutorial bidding was that since some Science/Engineering tutorials had places allocated on a first-come-first-served basis, word got around among the freshmen that all tutorials were allocated that way; hence the rush, and the downtime.
I was unable to login at 9+, 11+ and 1+, and it was still a little slow at 7+. I find it difficult to believe that everyone was bashing at their keyboards for more than 4 hours. Besides which, CORS has been around since the 2003/2004 academic year. Surely such a serious issue should have been foreseen and anticipated after so long. And apparently 2 years ago CORS was quite stable.
- It was suggested that a real time system on another server showing minimum bid points be set up, like a stock market counter. People would be able to view bids without logging in.
- SMU's system of closed bidding where you pay what you bid was brought up.
Paying what you bid is a VERY scary prospect.
- One aim of CORS is to have as many modules go for 1 point as possible.
- Staggering tutorial registration times throughout the day was considered, but that would be a pain for, and would disadvantage, those having lessons at the time.
- There is room for flexibility: when students appeal, priority is given to help graduating students graduate in time, then major students and so on.
There was also talk of better avenues for communicating with students. Apparently some people (mostly freshmen) don't know they have NUS email accounts. So SMS/corporate blogging/an IVLE 'Arts' module were suggested.
All in all, it's good that the Arts faculty, at least, is reaching out to students. However, it remains to be seen what will be done about the issue. An apology would be the very least they could do; blithely blaming students for the system's faults without acknowledging failures in planning is unhelpful at best. It also remains to be seen if the University in general will display similar enthusiasm in engaging with its future alumni (and presumed support base).
(Captain Intrepid has written an excellent summary of CORS and its advantages and disadvantages, though I disagree with his endorsement of close (sic) bidding.)
Sunday, August 28, 2005
Varian makes 280 pages of prescribed reading (for 1 module alone, and not all of it relevant) in a span of 2 weeks less onerous:
On consumer theory: "The second axiom, reflexivity, is trivial. Any bundle is certainly at least as good as an identical bundle. [Ed: ie 2 apples and 2 oranges is considered at least as good as 2 oranges and 2 apples] Parents of small children may occasionally observe behavior that violates this assumption, but it seems plausible for most adult behavior."
On budget lines and kinked indifference curves: "This case doesn't have much economic significance - it is more of a nuisance than anything else."
On normal goods: "We would normally think that the demand for each good would increase when income increases, as shown in Figure 6.1. Economists, with a singular lack of imagination, call such goods normal goods."
The bak kwa from Kuala Lumpur and Macau tastes different from that from Singapore. They both have a funny, unpleasant taste and smell. So though it's cheaper to buy bak kwa from there, it is not a better deal.
I saw 18 "CWS Ladycare" bins at a lift landing one day. I've always wanted to have one at home: they're funky because they "contain antimicrobal gel", so I can throw used tissues into them without fear. A pity they weren't automated with a motor and sensor like the one I saw in Tasmania. That'd have been worthy of a place in a museum.
[Professor: Luqman. Do you prefer to be called Abdullah?] [Student: Most people call me Abdullah, but my parents call me Luqman.] I have no quick answer to that.
[On the Nicoll highway foreman] Remember his name? Remember his name? Mr Ho. I can't remember the rest of his name.
[On not catching what a student was saying] If you exhale and do not inhale, you can prevent a sneeze. This will be very useful if you are in ambush, enemy soldiers... I'm sorry. I had to distract myself while trying not to sneeze.
[On someone volunteering to do 5-6 people's laundry and ironing daily] I can think of only 2 explanations. The first is that he has some sort of eccentric fetish for ladies' underwear, and the only way he can conceal it is to wash everyone's clothes.
[On a documentary about elephants digging graves for old females and standing around while the old female stands in it, then covering the grave with leaves and bamboo] The biologist in me is screaming and shouting and crying out... This is Nobel Prize Zoology... Which channel did you see this on? There's lots of stuff on the Discovery Channel about the supernatural. I don't believe any of it.
This is our first tutorial, and our objective for today is to get to know each other. That's the most important thing. If we have time, maybe - just maybe, we'll talk about Muller.
Tell me what you want to be called. If your boyfriend calls you 'Pookie', and you want to be called 'Pookie', that's okay.
[On icebreakers] Then you come up with a stupid question for the next person. If you give a stupid answer, it's ok because it's a stupid question.
Psychoanalysis... You can make stupid claims. Penis envy... When you read about it, it sounds sort of true, sort of not true.
"In our own day and country, the notion of souls of beasts is to be seen dying out. Animism, indeed, seems to be drawing in its outposts, and concentrating itself on its first and main position, the doctrine of the human soul. This doctrine has undergone extreme modification in the course of culture. It has outlived the almost total loss of one great argument attached to it - the objective reality of apparitional souls or ghosts seen in dreams and visions. The soul has given up its ethereal substance, and become an immaterial entity, "the shadow of a shade." Its theory is becoming separated from the investigations of biology and mental science, which now discuss the phenomena of life and thought, the senses and the intellect, the emotions and the will, on a groundwork of pure experience. There has arisen an intellectual product whose very existence is of the deepest significance, a "psychology" which has no longer anything to do with "soul." The soul's place in modern thought is in the metaphysics of religion, and its especial office there is that of furnishing an intellectual side to the religious doctrine of the future life.
Such are the alterations which have differenced the fundamental animistic belief in its course through successive periods of the world's culture. Yet it is evident that, notwithstanding all this profound change, the conception of the human soul is, as to its most essential nature, continuous from the philosophy of the savage thinker to that of the modern professor of theology. Its definition has remained from the first that of an animating, separable, surviving entity, the vehicle of individual personal existence. The theory of the soul is one principal part of a system of religious philosophy which unites, in an unbroken line of mental connexion, the savage fetish-worshipper and the civilized Christian. The divisions which have separated the great religions of the world into intolerant and hostile sects are for the most part superficial in comparison with the deepest of all religious schisms, that which divides Animism from Materialism."
- Primitive Culture, Sir Edward Burnett Tylor
Since I don't know of any religion which subscribes to materialism (as opposed to animism), I'm not sure how this is considered a religious schism, unless it is one between the religious and the irreligious, the latter of whom, historically speaking, have been few in number and many of whom have paid at least lip service to the prevailing animistic mindset.
(See also: Philosophical Perspectives on Behavior: From Animism to Materialism, a chapter from "The Things We Do: Using the Lessons of Bernard and Darwin to Understand the What, How, and Why of Our Behavior" by Gary Cziko, which concludes:
"The world today is divided along many lines. One of the most obvious is the line dividing the wealthy, industrialized countries of Europe, North America, and Oceania from the poorer, less industrialized countries of much of the rest of the world. Perhaps less obvious, but just as striking, is the line separating materialist (physical, natural) methodologies and beliefs of science and scientists from overwhelmingly psychic (spiritual, supernatural) or dualist methodologies and beliefs of the rest of the world’s human population. While science is now thoroughly materialistic in orientation and methodology, most individuals doubt that life, its origin, its meaning, and its experiences can be accounted for by physical properties of matter, energy, and their interaction, and hence believe in a God or gods, spirits, angels, paranormal happenings, and other supernatural entities and phenomena."
As a side note, it seems this was put online by the author of the book, who teaches at UIUC. How charitable of him.)
Someone tried to send me the following joke. Though Mrs Sng had told us a variant of it 5 years ago, I still find it funny, and so will paste it below:
Pope vs. Jews
The Pope decreed that all Jews had to leave Rome unless one of them could defeat him in a religious debate. The Jewish community met to choose a champion, but no scholar was willing to face the Pope. It was too risky. So they finally picked as their representative an old man named Moishe who spent his life sweeping up after people. Being old and poor, he had less to lose, so he agreed. He asked only for one addition to the debate: that neither side be allowed to talk. The Pope agreed.
On the day of the debate, the Pope held up three fingers; Moishe held up one. The Pope waved his finger in a circle around his head; Moishe pointed firmly at the ground. The Pope pulled out a wafer and a glass of wine; Moishe pulled out an apple. With that, the Pope stood up, declared himself beaten, and announced that the Jews could stay.
An hour later, the cardinals were all around the Pope asking him what happened. The Pope said: “First I held up three fingers to represent the Trinity. He responded by holding up one finger to remind me that there was still one God common to both our religions. Then I waved my finger around me to show him that God was all around us. He responded by pointing to the ground to show that God was right here with us. I pulled out the wine and wafer to show that God absolves us of all our sins. He pulled out an apple to remind me of original sin. He had an answer for everything. What could I do?”
Meanwhile, the Jewish community had crowded around Moishe, amazed that this old, almost feeble-minded man had done what all their scholars had insisted was impossible! “What happened?” they asked.
“Well,” said Moishe, “first he told me that we had three days to get out of Rome, so I gave him the finger. Then he tells me the whole city would be cleared of Jews, so I let him know that we were staying right here.” “And then?” asked a woman. “I don’t know,” said Moishe. “He took out his lunch and I took out mine.”
"Channeling is just bad ventriloquism. You use another voice, but people can see your lips moving." - Penn Jillette
Random Playlist Song: Schubert - Impromptu in G-flat Major Op 90 No 3 (Paul Badura-Skoda)
I feel so gratified:
"You wrote about faghagism (aka tehcoolestlatestthingsincebeinglesbo)! You are so tuned in to the nines. I'm proud of you *HUG*"
OTOH we have crap like this:
"Name: sasaa
Email: spikeemhard@yahoo.com
Where are you from?: Heaven
I mean, I don't mind getting flamed, but most flames are short, unintelligent, incomprehensible, unspecific, grammatically suspect or all of the above.
Someone: i heard from ppl in hall ex co
selection of hall residents
basically they take your appl form, which has your photo
the girls will choose the male residents
the guys will choose the girls
so if u're dying for a space in hall, put a slutty picture hahaha
Army on parade for gay recruits - "The army came out in style this weekend when it launched a recruitment drive aimed at tempting more gays, lesbians, transvestites and even transsexuals into the ranks."
e11bac3b018090ca | Quantum mechanics
Einstein and the photoelectric effect
The maximum kinetic energy of an emitted electron is Emax = hν − W, where W is the work function of the metal (the minimum energy needed to free an electron). If ν is less than ν0, where hν0 = W, no electrons are emitted. Not all of these experimental results were known in 1905, but all Einstein’s predictions have been verified since.
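As a quick numerical illustration of the threshold condition (the work function of sodium, roughly 2.28 eV, is used purely as an example value):

# Threshold frequency from h * nu0 = W, for sodium (W ~ 2.28 eV).
h = 6.626e-34   # Planck's constant, J s
eV = 1.602e-19  # joules per electronvolt
W = 2.28 * eV   # work function of sodium
nu0 = W / h
print(f"threshold frequency = {nu0:.2e} Hz")  # ~5.5e14 Hz; bluer light ejects electrons, redder light does not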
Bohr’s theory of the atom
A major contribution to the subject was made by Niels Bohr of Denmark, who applied the quantum hypothesis to atomic spectra in 1913. The spectra of light emitted by gaseous atoms had been studied extensively since the mid-19th century. It was found that radiation from gaseous atoms at low pressure consists of a set of discrete wavelengths. This is quite unlike the radiation from a solid, which is distributed over a continuous range of wavelengths. The set of discrete wavelengths from gaseous atoms is known as a line spectrum, because the radiation (light) emitted consists of a series of sharp lines. The wavelengths of the lines are characteristic of the element and may form extremely complex patterns. The simplest spectra are those of atomic hydrogen and the alkali atoms (e.g., lithium, sodium, and potassium). For hydrogen, the wavelengths λ are given by the empirical formula
1/λ = R(1/m² − 1/n²), where m and n are positive integers with n > m and R, known as the Rydberg constant, has the value 1.097373157 × 10⁷ per metre. For a given value of m, the lines for varying n form a series. The lines for m = 1, the Lyman series, lie in the ultraviolet part of the spectrum; those for m = 2, the Balmer series, lie in the visible spectrum; and those for m = 3, the Paschen series, lie in the infrared.
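The formula is easy to check numerically; for m = 2 it reproduces the visible Balmer lines, beginning with the red H-alpha line near 656 nm:

# Balmer series (m = 2) wavelengths from the Rydberg formula.
R = 1.097373157e7  # Rydberg constant, per metre
m = 2
for n in range(3, 7):
    lam = 1 / (R * (1 / m**2 - 1 / n**2))
    print(f"n = {n}: {lam * 1e9:.1f} nm")
# -> 656.1, 486.0, 433.9, 410.1 nm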
Bohr started with a model suggested by the New Zealand-born British physicist Ernest Rutherford. The model was based on the experiments of Hans Geiger and Ernest Marsden, who in 1909 bombarded gold atoms with massive, fast-moving alpha particles; when some of these particles were deflected backward, Rutherford concluded that the atom has a massive, charged nucleus. In Rutherford’s model, the atom resembles a miniature solar system with the nucleus acting as the Sun and the electrons as the circulating planets. Bohr made three assumptions. First, he postulated that, in contrast to classical mechanics, where an infinite number of orbits is possible, an electron can be in only one of a discrete set of orbits, which he termed stationary states. Second, he postulated that the only orbits allowed are those for which the angular momentum of the electron is a whole number n times ℏ (ℏ = h/2π). Third, Bohr assumed that Newton’s laws of motion, so successful in calculating the paths of the planets around the Sun, also applied to electrons orbiting the nucleus. The force on the electron (the analogue of the gravitational force between the Sun and a planet) is the electrostatic attraction between the positively charged nucleus and the negatively charged electron. With these simple assumptions, he showed that the energy of the orbit has the form
$E_n = -\frac{E_0}{n^2}$
where E0 is a constant that may be expressed by a combination of the known constants e, me, and ℏ. While in a stationary state, the atom does not give off energy as light; however, when an electron makes a transition from a state with energy En to one with lower energy Em, a quantum of energy is radiated with frequency ν, given by the equation
$h\nu = E_n - E_m$
Inserting the expression for En into this equation and using the relation λν = c, where c is the speed of light, Bohr derived the formula for the wavelengths of the lines in the hydrogen spectrum, with the correct value of the Rydberg constant.
Bohr’s theory was a brilliant step forward. Its two most important features have survived in present-day quantum mechanics. They are (1) the existence of stationary, nonradiating states and (2) the relationship of radiation frequency to the energy difference between the initial and final states in a transition. Prior to Bohr, physicists had thought that the radiation frequency would be the same as the electron’s frequency of rotation in an orbit.
Scattering of X-rays
Soon scientists were faced with the fact that another form of radiation, X-rays, also exhibits both wave and particle properties. Max von Laue of Germany had shown in 1912 that crystals can be used as three-dimensional diffraction gratings for X-rays; his technique constituted the fundamental evidence for the wavelike nature of X-rays. The atoms of a crystal, which are arranged in a regular lattice, scatter the X-rays. For certain directions of scattering, all the crests of the X-rays coincide. (The scattered X-rays are said to be in phase and to give constructive interference.) For these directions, the scattered X-ray beam is very intense. Clearly, this phenomenon demonstrates wave behaviour. In fact, given the interatomic distances in the crystal and the directions of constructive interference, the wavelength of the waves can be calculated.
In 1922 the American physicist Arthur Holly Compton showed that X-rays scatter from electrons as if they are particles. Compton performed a series of experiments on the scattering of monochromatic, high-energy X-rays by graphite. He found that part of the scattered radiation had the same wavelength λ0 as the incident X-rays but that there was an additional component with a longer wavelength λ. To interpret his results, Compton regarded the X-ray photon as a particle that collides and bounces off an electron in the graphite target as though the photon and the electron were a pair of (dissimilar) billiard balls. Application of the laws of conservation of energy and momentum to the collision leads to a specific relation between the amount of energy transferred to the electron and the angle of scattering. For X-rays scattered through an angle θ, the wavelengths λ and λ0 are related by the equation
$\lambda - \lambda_0 = \frac{h}{m_e c}(1 - \cos\theta)$
The experimental correctness of Compton’s formula is direct evidence for the corpuscular behaviour of radiation.
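A minimal sketch of the formula's magnitude, assuming standard CODATA values for h, me, and c:

```python
import math

# Compton shift: lambda - lambda0 = (h / (m_e c)) * (1 - cos(theta)).
h = 6.62607015e-34   # Planck constant, J s
m_e = 9.1093837e-31  # electron mass, kg
c = 2.99792458e8     # speed of light, m/s

compton_wavelength = h / (m_e * c)  # about 2.43e-12 m

for theta_deg in (45, 90, 180):
    shift = compton_wavelength * (1 - math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg:3d} deg: shift = {shift:.3e} m")
# At 90 degrees the shift equals one Compton wavelength (~2.43 pm);
# at 180 degrees (backscattering) it is twice that.
```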
Broglie’s wave hypothesis
Faced with evidence that electromagnetic radiation has both particle and wave characteristics, Louis-Victor de Broglie of France suggested a great unifying hypothesis in 1924. Broglie proposed that matter has wave as well as particle properties. He suggested that material particles can behave as waves and that their wavelength λ is related to the linear momentum p of the particle by λ = h/p.
In 1927 Clinton Davisson and Lester Germer of the United States confirmed Broglie’s hypothesis for electrons. Using a crystal of nickel, they diffracted a beam of monoenergetic electrons and showed that the wavelength of the waves is related to the momentum of the electrons by the Broglie equation. Since Davisson and Germer’s investigation, similar experiments have been performed with atoms, molecules, neutrons, protons, and many other particles. All behave like waves with the same wavelength-momentum relationship.
Basic concepts and methods
Bohr’s theory, which assumed that electrons moved in circular orbits, was extended by the German physicist Arnold Sommerfeld and others to include elliptic orbits and other refinements. Attempts were made to apply the theory to more complicated systems than the hydrogen atom. However, the ad hoc mixture of classical and quantum ideas made the theory and calculations increasingly unsatisfactory. Then, in the 12 months starting in July 1925, a period of creativity without parallel in the history of physics, there appeared a series of papers by German scientists that set the subject on a firm conceptual foundation. The papers took two approaches: (1) matrix mechanics, proposed by Werner Heisenberg, Max Born, and Pascual Jordan, and (2) wave mechanics, put forward by Erwin Schrödinger. The protagonists were not always polite to each other. Heisenberg found the physical ideas of Schrödinger’s theory “disgusting,” and Schrödinger was “discouraged and repelled” by the lack of visualization in Heisenberg’s method. However, Schrödinger, not allowing his emotions to interfere with his scientific endeavours, showed that, in spite of apparent dissimilarities, the two theories are equivalent mathematically. The present discussion follows Schrödinger’s wave mechanics because it is less abstract and easier to understand than Heisenberg’s matrix mechanics.
Schrödinger’s wave mechanics
Schrödinger expressed Broglie’s hypothesis concerning the wave behaviour of matter in a mathematical form that is adaptable to a variety of physical problems without additional arbitrary assumptions. He was guided by a mathematical formulation of optics, in which the straight-line propagation of light rays can be derived from wave motion when the wavelength is small compared to the dimensions of the apparatus employed. In the same way, Schrödinger set out to find a wave equation for matter that would give particle-like propagation when the wavelength becomes comparatively small. According to classical mechanics, if a particle of mass me is subjected to a force such that its potential energy is V(x, y, z) at position (x, y, z), then the sum of V(x, y, z) and the kinetic energy p²/2me is equal to a constant, the total energy E of the particle. Thus,
$\frac{p^2}{2m_e} + V(x, y, z) = E$
It is assumed that the particle is bound—i.e., confined by the potential to a certain region in space because its energy E is insufficient for it to escape. Since the potential varies with position, two other quantities do also: the momentum and, hence, by extension from the Broglie relation, the wavelength of the wave. Postulating a wave function Ψ(x, y, z) that varies with position, Schrödinger replaced p in the above energy equation with a differential operator that embodied the Broglie relation. He then showed that Ψ satisfies the partial differential equation
$-\frac{\hbar^2}{2m_e}\nabla^2\Psi + V(x, y, z)\,\Psi = E\Psi$
This is the (time-independent) Schrödinger wave equation, which established quantum mechanics in a widely applicable form. An important advantage of Schrödinger’s theory is that no further arbitrary quantum conditions need be postulated. The required quantum results follow from certain reasonable restrictions placed on the wave function—for example, that it should not become infinitely large at large distances from the centre of the potential.
Schrödinger applied his equation to the hydrogen atom, for which the potential function, given by classical electrostatics, is proportional to −e²/r, where −e is the charge on the electron. The nucleus (a proton of charge e) is situated at the origin, and r is the distance from the origin to the position of the electron. Schrödinger solved the equation for this particular potential with straightforward, though not elementary, mathematics. Only certain discrete values of E lead to acceptable functions Ψ. These functions are characterized by a trio of integers n, l, m, termed quantum numbers. The values of E depend only on the integers n (1, 2, 3, etc.) and are identical with those given by the Bohr theory. The quantum numbers l and m are related to the angular momentum of the electron; √(l(l + 1)) ℏ is the magnitude of the angular momentum, and mℏ is its component along some physical direction.
The square of the wave function, Ψ², has a physical interpretation. Schrödinger originally supposed that the electron was spread out in space and that its density at point (x, y, z) was given by the value of Ψ² at that point. Almost immediately Born proposed what is now the accepted interpretation—namely, that Ψ² gives the probability of finding the electron at (x, y, z). The distinction between the two interpretations is important. If Ψ² is small at a particular position, the original interpretation implies that a small fraction of an electron will always be detected there. In Born’s interpretation, nothing will be detected there most of the time, but, when something is observed, it will be a whole electron. Thus, the concept of the electron as a point particle moving in a well-defined path around the nucleus is replaced in wave mechanics by clouds that describe the probable locations of electrons in different states.
Electron spin and antiparticles
In 1928 the English physicist Paul A.M. Dirac produced a wave equation for the electron that combined relativity with quantum mechanics. Schrödinger’s wave equation does not satisfy the requirements of the special theory of relativity because it is based on a nonrelativistic expression for the kinetic energy (p²/2me). Dirac showed that an electron has an additional quantum number ms. Unlike the first three quantum numbers, ms is not a whole integer and can have only the values +1/2 and −1/2. It corresponds to an additional form of angular momentum ascribed to a spinning motion. (The angular momentum mentioned above is due to the orbital motion of the electron, not its spin.) The concept of spin angular momentum was introduced in 1925 by Samuel A. Goudsmit and George E. Uhlenbeck, two graduate students at the University of Leiden, Neth., to explain the magnetic moment measurements made by Otto Stern and Walther Gerlach of Germany several years earlier. The magnetic moment of a particle is closely related to its angular momentum; if the angular momentum is zero, so is the magnetic moment. Yet Stern and Gerlach had observed a magnetic moment for electrons in silver atoms, which were known to have zero orbital angular momentum. Goudsmit and Uhlenbeck proposed that the observed magnetic moment was attributable to spin angular momentum.
The electron-spin hypothesis not only provided an explanation for the observed magnetic moment but also accounted for many other effects in atomic spectroscopy, including changes in spectral lines in the presence of a magnetic field (Zeeman effect), doublet lines in alkali spectra, and fine structure (close doublets and triplets) in the hydrogen spectrum.
The Dirac equation also predicted additional states of the electron that had not yet been observed. Experimental confirmation was provided in 1932 by the discovery of the positron by the American physicist Carl David Anderson. Every particle described by the Dirac equation has to have a corresponding antiparticle, which differs only in charge. The positron is just such an antiparticle of the negatively charged electron, having the same mass as the latter but a positive charge.
Identical particles and multielectron atoms
Because electrons are identical to (i.e., indistinguishable from) each other, the wave function of an atom with more than one electron must satisfy special conditions. The problem of identical particles does not arise in classical physics, where the objects are large-scale and can always be distinguished, at least in principle. There is no way, however, to differentiate two electrons in the same atom, and the form of the wave function must reflect this fact. The overall wave function Ψ of a system of identical particles depends on the coordinates of all the particles. If the coordinates of two of the particles are interchanged, the wave function must remain unaltered or, at most, undergo a change of sign; the change of sign is permitted because it is Ψ² that occurs in the physical interpretation of the wave function. If the sign of Ψ remains unchanged, the wave function is said to be symmetric with respect to interchange; if the sign changes, the function is antisymmetric.
The symmetry of the wave function for identical particles is closely related to the spin of the particles. In quantum field theory (see below Quantum electrodynamics), it can be shown that particles with half-integral spin (1/2, 3/2, etc.) have antisymmetric wave functions. They are called fermions after the Italian-born physicist Enrico Fermi. Examples of fermions are electrons, protons, and neutrons, all of which have spin 1/2. Particles with zero or integral spin (e.g., mesons, photons) have symmetric wave functions and are called bosons after the Indian mathematician and physicist Satyendra Nath Bose, who first applied the ideas of symmetry to photons in 1924–25.
The requirement of antisymmetric wave functions for fermions leads to a fundamental result, known as the exclusion principle, first proposed in 1925 by the Austrian physicist Wolfgang Pauli. The exclusion principle states that two fermions in the same system cannot be in the same quantum state. If they were, interchanging the two sets of coordinates would not change the wave function at all, which contradicts the result that the wave function must change sign. Thus, two electrons in the same atom cannot have an identical set of values for the four quantum numbers n, l, m, ms. The exclusion principle forms the basis of many properties of matter, including the periodic classification of the elements, the nature of chemical bonds, and the behaviour of electrons in solids; the last determines in turn whether a solid is a metal, an insulator, or a semiconductor (see atom; matter).
The Schrödinger equation cannot be solved precisely for atoms with more than one electron. The principles of the calculation are well understood, but the problems are complicated by the number of particles and the variety of forces involved. The forces include the electrostatic forces between the nucleus and the electrons and between the electrons themselves, as well as weaker magnetic forces arising from the spin and orbital motions of the electrons. Despite these difficulties, approximation methods introduced by the English physicist Douglas R. Hartree, the Russian physicist Vladimir Fock, and others in the 1920s and 1930s have achieved considerable success. Such schemes start by assuming that each electron moves independently in an average electric field due to the nucleus and the other electrons; i.e., correlations between the positions of the electrons are ignored. Each electron has its own wave function, called an orbital. The overall wave function for all the electrons in the atom satisfies the exclusion principle. Corrections to the calculated energies are then made, which depend on the strengths of the electron-electron correlations and the magnetic forces.
Time-dependent Schrödinger equation
At the same time that Schrödinger proposed his time-independent equation to describe the stationary states, he also proposed a time-dependent equation to describe how a system changes from one state to another. By replacing the energy E in Schrödinger’s equation with a time-derivative operator, he generalized his wave equation to determine the time variation of the wave function as well as its spatial variation. The time-dependent Schrödinger equation reads
$i\hbar\,\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m_e}\nabla^2\Psi + V\Psi$
The quantity i is the square root of −1. The function Ψ varies with time t as well as with position x, y, z. For a system with constant energy, E, Ψ has the form
$\Psi = \psi(x, y, z)\exp(-iEt/\hbar)$
where exp stands for the exponential function, and the time-dependent Schrödinger equation reduces to the time-independent form.
The probability of a transition between one atomic stationary state and some other state can be calculated with the aid of the time-dependent Schrödinger equation. For example, an atom may change spontaneously from one state to another state with less energy, emitting the difference in energy as a photon with a frequency given by the Bohr relation. If electromagnetic radiation is applied to a set of atoms and if the frequency of the radiation matches the energy difference between two stationary states, transitions can be stimulated. In a stimulated transition, the energy of the atom may increase—i.e., the atom may absorb a photon from the radiation—or the energy of the atom may decrease, with the emission of a photon, which adds to the energy of the radiation. Such stimulated emission processes form the basic mechanism for the operation of lasers. The probability of a transition from one state to another depends on the values of the l, m, ms quantum numbers of the initial and final states. For most values, the transition probability is effectively zero. However, for certain changes in the quantum numbers, summarized as selection rules, there is a finite probability. For example, according to one important selection rule, the l value changes by unity because photons have a spin of 1. The selection rules for radiation relate to the angular momentum properties of the stationary states. The absorbed or emitted photon has its own angular momentum, and the selection rules reflect the conservation of angular momentum between the atoms and the radiation.
The phenomenon of tunneling, which has no counterpart in classical physics, is an important consequence of quantum mechanics. Consider a particle with energy E in the inner region of a one-dimensional potential well V(x), as shown in Figure 1. (A potential well is a potential that has a lower value in a certain region of space than in the neighbouring regions.) In classical mechanics, if E < V0 (the maximum height of the potential barrier), the particle remains in the well forever; if E > V0, the particle escapes. In quantum mechanics, the situation is not so simple. The particle can escape even if its energy E is below the height of the barrier V0, although the probability of escape is small unless E is close to V0. In that case, the particle may tunnel through the potential barrier and emerge with the same energy E.
The phenomenon of tunneling has many important applications. For example, it describes a type of radioactive decay in which a nucleus emits an alpha particle (a helium nucleus). According to the quantum explanation given independently by George Gamow and by Ronald W. Gurney and Edward Condon in 1928, the alpha particle is confined before the decay by a potential of the shape shown in Figure 1. For a given nuclear species, it is possible to measure the energy E of the emitted alpha particle and the average lifetime τ of the nucleus before decay. The lifetime of the nucleus is a measure of the probability of tunneling through the barrier—the shorter the lifetime, the higher the probability. With plausible assumptions about the general form of the potential function, it is possible to calculate a relationship between τ and E that is applicable to all alpha emitters. This theory, which is borne out by experiment, shows that the probability of tunneling, and hence the value of τ, is extremely sensitive to the value of E. For all known alpha-particle emitters, the value of E varies from about 2 to 8 million electron volts, or MeV (1 MeV = 10⁶ electron volts). Thus, the value of E varies only by a factor of 4, whereas the range of τ is from about 10¹¹ years down to about 10⁻⁶ second, a factor of 10²⁴. It would be difficult to account for this sensitivity of τ to the value of E by any theory other than quantum mechanical tunneling.
Axiomatic approach
Although the two Schrödinger equations form an important part of quantum mechanics, it is possible to present the subject in a more general way. Dirac gave an elegant exposition of an axiomatic approach based on observables and states in a classic textbook entitled The Principles of Quantum Mechanics. (The book, published in 1930, is still in print.) An observable is anything that can be measured—energy, position, a component of angular momentum, and so forth. Every observable has a set of states, each state being represented by an algebraic function. With each state is associated a number that gives the result of a measurement of the observable. Consider an observable with N states, denoted by ψ1, ψ2, . . . , ψN, and corresponding measurement values a1, a2, . . . , aN. A physical system—e.g., an atom in a particular state—is represented by a wave function Ψ, which can be expressed as a linear combination, or mixture, of the states of the observable. Thus, the Ψ may be written as
$\Psi = c_1\psi_1 + c_2\psi_2 + \cdots + c_N\psi_N$ (10)
For a given Ψ, the quantities c1, c2, etc., are a set of numbers that can be calculated. In general, the numbers are complex, but, in the present discussion, they are assumed to be real numbers.
The theory postulates, first, that the result of a measurement must be an a-value—i.e., a1, a2, or a3, etc. No other value is possible. Second, before the measurement is made, the probability of obtaining the value a1 is c1², and that of obtaining the value a2 is c2², and so on. If the value obtained is, say, a5, the theory asserts that after the measurement the state of the system is no longer the original Ψ but has changed to ψ5, the state corresponding to a5.
A number of consequences follow from these assertions. First, the result of a measurement cannot be predicted with certainty. Only the probability of a particular result can be predicted, even though the initial state (represented by the function Ψ) is known exactly. Second, identical measurements made on a large number of identical systems, all in the identical state Ψ, will produce different values for the measurements. This is, of course, quite contrary to classical physics and common sense, which say that the same measurement on the same object in the same state must produce the same result. Moreover, according to the theory, not only does the act of measurement change the state of the system, but it does so in an indeterminate way. Sometimes it changes the state to ψ1, sometimes to ψ2, and so forth.
There is an important exception to the above statements. Suppose that, before the measurement is made, the state Ψ happens to be one of the ψs—say, Ψ = ψ3. Then c3 = 1 and all the other cs are zero. This means that, before the measurement is made, the probability of obtaining the value a3 is unity and the probability of obtaining any other value of a is zero. In other words, in this particular case, the result of the measurement can be predicted with certainty. Moreover, after the measurement is made, the state will be ψ3, the same as it was before. Thus, in this particular case, measurement does not disturb the system. Whatever the initial state of the system, two measurements made in rapid succession (so that the change in the wave function given by the time-dependent Schrödinger equation is negligible) produce the same result.
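These postulates can be illustrated with a small simulation (a hedged Python sketch; the state labels and the helper function are illustrative, not standard notation): outcomes are drawn with probabilities ci², the state collapses to the corresponding ψi, and an immediate second measurement repeats the first result:

```python
import random

# Sketch of the measurement postulates: a state is a list of real
# coefficients c_i over the observable's states psi_i; a measurement
# returns outcome a_i with probability c_i**2 and collapses the state.
def measure(coeffs, values):
    probs = [c * c for c in coeffs]
    outcome = random.choices(range(len(values)), weights=probs)[0]
    collapsed = [1.0 if i == outcome else 0.0 for i in range(len(coeffs))]
    return values[outcome], collapsed

# Identical measurements on identically prepared systems give different
# results, with frequencies close to c_i**2:
results = [measure([0.6, 0.8], ["a1", "a2"])[0] for _ in range(10000)]
print(results.count("a1") / 10000)  # close to 0.36 = 0.6**2

# A second measurement made immediately on the collapsed state always
# repeats the first result, as described above:
value, state = measure([0.6, 0.8], ["a1", "a2"])
print(measure(state, ["a1", "a2"])[0] == value)  # True
```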
The value of one observable can be determined by a single measurement. The value of two observables for a given system may be known at the same time, provided that the two observables have the same set of state functions ψ1, ψ2, . . . , ψN. In this case, measuring the first observable results in a state function that is one of the ψs. Because this is also a state function of the second observable, the result of measuring the latter can be predicted with certainty. Thus the values of both observables are known. (Although the ψs are the same for the two observables, the two sets of a values are, in general, different.) The two observables can be measured repeatedly in any sequence. After the first measurement, none of the measurements disturbs the system, and a unique pair of values for the two observables is obtained.
Incompatible observables
The measurement of two observables with different sets of state functions is a quite different situation. Measurement of one observable gives a certain result. The state function after the measurement is, as always, one of the states of that observable; however, it is not a state function for the second observable. Measuring the second observable disturbs the system, and the state of the system is no longer one of the states of the first observable. In general, measuring the first observable again does not produce the same result as the first time. To sum up, both quantities cannot be known at the same time, and the two observables are said to be incompatible.
A specific example of this behaviour is the measurement of the component of angular momentum along two mutually perpendicular directions. The Stern-Gerlach experiment mentioned above involved measuring the angular momentum of a silver atom in the ground state. In reconstructing this experiment, a beam of silver atoms is passed between the poles of a magnet. The poles are shaped so that the magnetic field varies greatly in strength over a very small distance (Figure 2). The apparatus determines the ms quantum number, which can be +1/2 or −1/2. No other values are obtained. Thus in this case the observable has only two states—i.e., N = 2. The inhomogeneous magnetic field produces a force on the silver atoms in a direction that depends on the spin state of the atoms. The result is shown schematically in Figure 3. A beam of silver atoms is passed through magnet A. The atoms in the state with ms = +1/2 are deflected upward and emerge as beam 1, while those with ms = −1/2 are deflected downward and emerge as beam 2. If the direction of the magnetic field is the x-axis, the apparatus measures Sx, which is the x-component of spin angular momentum. The atoms in beam 1 have Sx = +ℏ/2 while those in beam 2 have Sx = −ℏ/2. In a classical picture, these two states represent atoms spinning about the direction of the x-axis with opposite senses of rotation.
The y-component of spin angular momentum Sy also can have only the values +ℏ/2 and −ℏ/2; however, the two states of Sy are not the same as for Sx. In fact, each of the states of Sx is an equal mixture of the states for Sy, and conversely. Again, the two Sy states may be pictured as representing atoms with opposite senses of rotation about the y-axis. These classical pictures of quantum states are helpful, but only up to a certain point. For example, quantum theory says that each of the states corresponding to spin about the x-axis is a superposition of the two states with spin about the y-axis. There is no way to visualize this; it has absolutely no classical counterpart. One simply has to accept the result as a consequence of the axioms of the theory. Suppose that, as in Figure 3, the atoms in beam 1 are passed into a second magnet B, which has a magnetic field along the y-axis perpendicular to x. The atoms emerge from B and go in equal numbers through its two output channels. Classical theory says that the two magnets together have measured both the x- and y-components of spin angular momentum and that the atoms in beam 3 have Sx = +ℏ/2, Sy = +ℏ/2, while those in beam 4 have Sx = +ℏ/2, Sy = −ℏ/2. However, classical theory is wrong, because if beam 3 is put through still another magnet C, with its magnetic field along x, the atoms divide equally into beams 5 and 6 instead of emerging as a single beam 5 (as they would if they had Sx = +ℏ/2). Thus, the correct statement is that the beam entering B has Sx = +ℏ/2 and is composed of an equal mixture of the states Sy = +ℏ/2 and Sy = −ℏ/2—i.e., the x-component of angular momentum is known but the y-component is not. Correspondingly, beam 3 leaving B has Sy = +ℏ/2 and is an equal mixture of the states Sx = +ℏ/2 and Sx = −ℏ/2; the y-component of angular momentum is known but the x-component is not. The information about Sx is lost because of the disturbance caused by magnet B in the measurement of Sy.
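The sequential Stern-Gerlach argument can be checked numerically with the standard spin-1/2 state vectors (a minimal Python/NumPy sketch; the variable names are illustrative):

```python
import numpy as np

# Spin-1/2 state vectors written in the usual Sz basis.
up_x = np.array([1, 1]) / np.sqrt(2)     # Sx = +hbar/2
up_y = np.array([1, 1j]) / np.sqrt(2)    # Sy = +hbar/2
down_x = np.array([1, -1]) / np.sqrt(2)  # Sx = -hbar/2

def prob(final, initial):
    """Probability that a system in `initial` is found in `final`."""
    return abs(np.vdot(final, initial)) ** 2

# Beam 1 (Sx = +hbar/2) entering magnet B splits equally over Sy states:
print(prob(up_y, up_x))  # 0.5
# Beam 3 (Sy = +hbar/2) entering magnet C splits equally over Sx states,
# showing that the Sx information was lost in the Sy measurement:
print(prob(up_x, up_y), prob(down_x, up_y))  # 0.5, 0.5
```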
Heisenberg uncertainty principle
The observables discussed so far have had discrete sets of experimental values. For example, the values of the energy of a bound system are always discrete, and angular momentum components have values that take the form mℏ, where m is either an integer or a half-integer, positive or negative. On the other hand, the position of a particle or the linear momentum of a free particle can take continuous values in both quantum and classical theory. The mathematics of observables with a continuous spectrum of measured values is somewhat more complicated than for the discrete case but presents no problems of principle. An observable with a continuous spectrum of measured values has an infinite number of state functions. The state function Ψ of the system is still regarded as a combination of the state functions of the observable, but the sum in equation (10) must be replaced by an integral.
Measurements can be made of position x of a particle and the x-component of its linear momentum, denoted by px. These two observables are incompatible because they have different state functions. The phenomenon of diffraction noted above illustrates the impossibility of measuring position and momentum simultaneously and precisely. If a parallel monochromatic light beam passes through a slit (Figure 4A), its intensity varies with direction, as shown in Figure 4B. The light has zero intensity in certain directions. Wave theory shows that the first zero occurs at an angle θ0, given by sin θ0 = λ/b, where λ is the wavelength of the light and b is the width of the slit. If the width of the slit is reduced, θ0 increases—i.e., the diffracted light is more spread out. Thus, θ0 measures the spread of the beam.
The experiment can be repeated with a stream of electrons instead of a beam of light. According to Broglie, electrons have wavelike properties; therefore, the beam of electrons emerging from the slit should widen and spread out like a beam of light waves. This has been observed in experiments. If the electrons have velocity u in the forward direction (i.e., the y-direction in Figure 4A), their (linear) momentum is p = meu. Consider px, the component of momentum in the x-direction. After the electrons have passed through the aperture, the spread in their directions results in an uncertainty in px by an amount
$\Delta p_x \approx p\sin\theta_0 = p\lambda/b$
where λ is the wavelength of the electrons and, according to the Broglie formula, equals h/p. Thus, Δpx ≈ h/b. Exactly where an electron passed through the slit is unknown; it is only certain that an electron went through somewhere. Therefore, immediately after an electron goes through, the uncertainty in its x-position is Δx ≈ b/2. Thus, the product of the uncertainties is of the order of ℏ. More exact analysis shows that the product has a lower limit, given by
$\Delta x\,\Delta p_x \geq \hbar/2$ (12)
This is the well-known Heisenberg uncertainty principle for position and momentum. It states that there is a limit to the precision with which the position and the momentum of an object can be measured at the same time. Depending on the experimental conditions, either quantity can be measured as precisely as desired (at least in principle), but the more precisely one of the quantities is measured, the less precisely the other is known.
The uncertainty principle is significant only on the atomic scale because of the small value of h in everyday units. If the position of a macroscopic object with a mass of, say, one gram is measured with a precision of 10⁻⁶ metre, the uncertainty principle states that its velocity cannot be measured to better than about 10⁻²⁵ metre per second. Such a limitation is hardly worrisome. However, if an electron is located in an atom about 10⁻¹⁰ metre across, the principle gives a minimum uncertainty in the velocity of about 10⁶ metre per second.
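Both magnitudes follow directly from equation (12) (a rough Python sketch, assuming the stated masses and position uncertainties):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J s

# Macroscopic object: mass 1 g, position known to 1e-6 m.
m, dx = 1e-3, 1e-6
print(hbar / (2 * m * dx))  # minimum velocity uncertainty ~ 5e-26 m/s

# Electron confined to an atom about 1e-10 m across.
m_e, dx = 9.109e-31, 1e-10
print(hbar / (2 * m_e * dx))  # ~ 6e5 m/s, of order 1e6 as stated
# The factor-of-two differences from the rough figures in the text come
# from using the exact hbar/2 lower bound rather than an estimate ~ hbar.
```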
The above reasoning leading to the uncertainty principle is based on the wave-particle duality of the electron. When Heisenberg first propounded the principle in 1927 his reasoning was based, however, on the wave-particle duality of the photon. He considered the process of measuring the position of an electron by observing it in a microscope. Diffraction effects due to the wave nature of light result in a blurring of the image; the resulting uncertainty in the position of the electron is approximately equal to the wavelength of the light. To reduce this uncertainty, it is necessary to use light of shorter wavelength—e.g., gamma rays. However, in producing an image of the electron, the gamma-ray photon bounces off the electron, giving the Compton effect (see above Early developments: Scattering of X-rays). As a result of the collision, the electron recoils in a statistically random way. The resulting uncertainty in the momentum of the electron is proportional to the momentum of the photon, which is inversely proportional to the wavelength of the photon. So it is again the case that increased precision in knowledge of the position of the electron is gained only at the expense of decreased precision in knowledge of its momentum. A detailed calculation of the process yields the same result as before (equation [12]). Heisenberg’s reasoning brings out clearly the fact that the smaller the particle being observed, the more significant is the uncertainty principle. When a large body is observed, photons still bounce off it and change its momentum, but, considered as a fraction of the initial momentum of the body, the change is insignificant.
The Schrödinger and Dirac theories give a precise value for the energy of each stationary state, but in reality the states do not have a precise energy; the only exception is the ground (lowest-energy) state. Instead, the energies of the states are spread over a small range. The spread arises from the fact that, because the electron can make a transition to another state, the initial state has a finite lifetime. The transition is a random process, and so different atoms in the same state have different lifetimes. If the mean lifetime is denoted as τ, the theory shows that the energy of the initial state has a spread of energy ΔE, given by
$\Delta E \approx \hbar/\tau$ (13)
This energy spread is manifested in a spread in the frequencies of emitted radiation. Therefore, the spectral lines are not infinitely sharp. (Some experimental factors can also broaden a line, but their effects can be reduced; however, the present effect, known as natural broadening, is fundamental and cannot be reduced.) Equation (13) is another type of Heisenberg uncertainty relation; generally, if a measurement with duration τ is made of the energy in a system, the measurement disturbs the system, causing the energy to be uncertain by an amount ΔE, the magnitude of which is given by the above equation.
Quantum electrodynamics
The application of quantum theory to the interaction between electrons and radiation requires a quantum treatment of Maxwell’s field equations, which are the foundations of electromagnetism, and the relativistic theory of the electron formulated by Dirac (see above Electron spin and antiparticles). The resulting quantum field theory is known as quantum electrodynamics, or QED.
QED accounts for the behaviour and interactions of electrons, positrons, and photons. It deals with processes involving the creation of material particles from electromagnetic energy and with the converse processes in which a material particle and its antiparticle annihilate each other and produce energy. Initially the theory was beset with formidable mathematical difficulties, because the calculated values of quantities such as the charge and mass of the electron proved to be infinite. However, an ingenious set of techniques developed (in the late 1940s) by Hans Bethe, Julian S. Schwinger, Tomonaga Shin’ichirō, Richard P. Feynman, and others dealt systematically with the infinities to obtain finite values of the physical quantities. Their method is known as renormalization. The theory has provided some remarkably accurate predictions.
According to the Dirac theory, two particular states in hydrogen with different quantum numbers have the same energy. QED, however, predicts a small difference in their energies; the difference may be determined by measuring the frequency of the electromagnetic radiation that produces transitions between the two states. This effect was first measured by Willis E. Lamb, Jr., and Robert Retherford in 1947. Its physical origin lies in the interaction of the electron with the random fluctuations in the surrounding electromagnetic field. These fluctuations, which exist even in the absence of an applied field, are a quantum phenomenon. The accuracy of experiment and theory in this area may be gauged by two recent values for the separation of the two states, expressed in terms of the frequency of the radiation that produces the transitions:
Comparison of the experimental and theoretical values for the separation of two states of hydrogen.
An even more spectacular example of the success of QED is provided by the value for μe, the magnetic dipole moment of the free electron. Because the electron is spinning and has electric charge, it behaves like a tiny magnet, the strength of which is expressed by the value of μe. According to the Dirac theory, μe is exactly equal to μB = eℏ/2me, a quantity known as the Bohr magneton; however, QED predicts that μe = (1 + a)μB, where a is a small number, approximately 1/860. Again, the physical origin of the QED correction is the interaction of the electron with random oscillations in the surrounding electromagnetic field. The best experimental determination of μe involves measuring not the quantity itself but the small correction term μe − μB. This greatly enhances the sensitivity of the experiment. The most recent results for the value of a are
Comparison of the experimental and theoretical values of the magnetic dipole moment.
Since a itself represents a small correction term, the magnetic dipole moment of the electron is measured with an accuracy of about one part in 10¹¹. One of the most precisely determined quantities in physics, the magnetic dipole moment of the electron can be calculated correctly from quantum theory to within about one part in 10¹⁰.
0c2eb817a171e9c5 | A Dark Derailment
• Atomic Vortex Theory - Kelvin

You can see in the quotes below from Wiki how the truth of chaos and fluid dynamics in particle vortices was evident in 1877, but was then buried repeatedly over the next 100 years – to the point where the wiki article states that Kelvin's Atomic Vortex Theory was 'wrong-headed' and was replaced by quantum physics, with its innate particle-wave duality paradox which no one can solve without recourse either to the aether (which was made taboo) or to Kelvin's chaos vortices. For 100 years we have been stuck with an unworkable particle physics model called QED, with a Central Paradox at its heart, instead of using a more realistic and natural general systems theory approach to the explanation of matter in terms of natural chaos forms. http://www.youtube.com/watch?v=QgNq3n508ec

Kelvin in 1902 may not have had the computers to work on chaos theory, but by 1992 the Santa Fe Institute had supercomputing and complexity modelling – though it seemed to toe the prescribed line by not articulating the newly discovered chaos law of emergence and its contradiction of the 2nd law of thermodynamics, predicting rewarming after the Big Bang and not heat death. http://www.andrewhennessey.co.uk/thenewphysics.pdf

At this time Nikola Tesla was attempting to print his Theory of Environmental Energy, which indicated that the aether would outpour free energy if disturbed by rotating magnets/magnetic fields. Empirical proof of that these days comes from the spinning NASA satellites that get more energy than they appear to be entitled to by the known (or allowed) physics when they engage a gravitational slingshot around a planet and its magnetosphere.

'Real Scientists' today are attempting to sell us the Higgs Boson or 'God Particle' as the final, ultimate, smallest building block – homogeneous, identical in every detail, reproducible in every detail, and as standard as a billiard ball. Real chaos theory, though, would suggest that every item in the Universe is as unique as a fingerprint, with no two identical items, all having variations to some degree, and that the aether and its array of particles is infinitely divisible, with no upper or lower limit on scale or function in any given context.

Here are the wiki notes on vortex dynamics, which give an indication of the reasonable steps in natural modelling that chaos and fluid dynamics were producing before Science with a big 'S', in 1938 in Copenhagen, decided that a physics paradigm with an inexplicable paradox at its heart was better than anything that natural events could teach us.

Vortex dynamics is a vibrant subfield of fluid dynamics, commanding attention at major scientific conferences and precipitating workshops and symposia that focus fully on the subject. A curious diversion in the history of vortex dynamics was the vortex atom theory of William Thomson, later Lord Kelvin. His basic idea was that atoms were to be represented as vortex motions in the ether. This theory predated the quantum theory by several decades and, because of the scientific standing of its originator, received considerable attention. Many profound insights into vortex dynamics were generated during the pursuit of this theory. Other interesting corollaries were the first counting of simple knots by P. G. Tait, today considered a pioneering effort in graph theory, topology and knot theory. Ultimately, Kelvin's vortex atom was seen to be wrong-headed, but the many results in vortex dynamics that it precipitated have stood the test of time.
Kelvin himself originated the notion of circulation and proved that in an inviscid fluid circulation around a material contour would be conserved. This result — singled out by Einstein as one of the most significant results of Kelvin's work[citation needed] — provided an early link between fluid dynamics and topology.

The history of vortex dynamics seems particularly rich in discoveries and re-discoveries of important results, because results obtained were entirely forgotten after their discovery and then were re-discovered decades later. Thus, the integrability of the problem of three point vortices on the plane was solved in the 1877 thesis of a young Swiss applied mathematician named Walter Gröbli. In spite of having been written in Göttingen in the general circle of scientists surrounding Helmholtz and Kirchhoff, and in spite of having been mentioned in Kirchhoff's well known lectures on theoretical physics and in other major texts such as Lamb's Hydrodynamics, this solution was largely forgotten. A 1949 paper by the noted applied mathematician J. L. Synge created a brief revival, but Synge's paper was in turn forgotten. A quarter century later a 1975 paper by E. A. Novikov and a 1979 paper by H. Aref on chaotic advection finally brought this important earlier work to light. The subsequent elucidation of chaos in the four-vortex problem, and in the advection of a passive particle by three vortices, made Gröbli's work part of "modern science".

Another example of this kind is the so-called "localized induction approximation" (LIA) for three-dimensional vortex filament motion, which gained favor in the mid-1960s through the work of Arms, Hama, Betchov and others, but turns out to date from the early years of the 20th century in the work of Da Rios, a gifted student of the noted Italian mathematician T. Levi-Civita. Da Rios published his results in several forms, but they were never assimilated into the fluid mechanics literature of his time. In 1972 H. Hasimoto used Da Rios' "intrinsic equations" (later re-discovered independently by R. Betchov) to show how the motion of a vortex filament under LIA could be related to the non-linear Schrödinger equation. This immediately made the problem part of "modern science", since it was then realized that vortex filaments can support solitary twist waves of large amplitude.

For thousands of years, knots have been used for basic purposes such as recording information, fastening and tying objects together. Over time people realized that different knots were better at different tasks, such as climbing or sailing. Knots were also regarded as having spiritual and religious symbolism in addition to their aesthetic qualities. The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often symbolizing unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork.

Knots were studied from a mathematical viewpoint by Carl Friedrich Gauss, who in 1833 developed the Gauss linking integral for computing the linking number of two knots. His student Johann Benedict Listing, after whom Listing's knot is named, furthered their study. The early, significant stimulus in knot theory would arrive later with Sir William Thomson (Lord Kelvin) and his theory of vortex atoms. (Sossinsky 2002, p. 1–3)
In 1867, after observing Scottish physicist Peter Tait's experiments involving smoke rings, Thomson came to the idea that atoms were knots of swirling vortices in the æther. Chemical elements would thus correspond to knots and links. Tait's experiments were inspired by a paper of Helmholtz's on vortex-rings in incompressible fluids. Thomson and Tait believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do. For example, Thomson thought that sodium could be the Hopf link due to its two lines of spectra. (Sossinsky 2002, p. 3–10)

Tait subsequently began listing unique knots in the belief that he was creating a table of elements. He formulated what are now known as the Tait conjectures on alternating knots. (The conjectures were proved in the 1990s.) Tait's knot tables were subsequently improved upon by C. N. Little and Thomas Kirkman. (Sossinsky 2002, p. 6)

James Clerk Maxwell, a colleague and friend of Thomson's and Tait's, also developed a strong interest in knots. Maxwell studied Listing's work on knots. He re-interpreted Gauss' linking integral in terms of electromagnetic theory. In his formulation, the integral represented the work done by a charged particle moving along one component of the link under the influence of the magnetic field generated by an electric current along the other component. Maxwell also continued the study of smoke rings by considering three interacting rings.
d1d70a94a43b4911 | Philosophy Lexicon of Arguments
L 159
Def Symmetry/Weyl: a thing is symmetrical, if it can be subjected to a certain operation and it then appears as exactly the same as before.
Symmetry/Physics/Laws/Feynman: For example, if we move a machine, it will still work.
I 726
Symmetry Operations/Physics/Feynman:
Translation in space - translation in time - rotation around a fixed angle - constant speed in a straight line (Lorentz transformation) - time reversal - reflection of space - exchange of the same atoms or particles - quantum mechanical phase
Matter antimatter (charge conjugation).
I 728
Asymmetry/Scale/Scale Change/Feynman: in the case of scale changes, the physical laws are not symmetrical!
Question: will an apparatus that is rebuilt five times larger work in the same way? - No!
E.g. the wavelength of the light emitted by sodium atoms in a container is the same when the volume quintuples; it is not made five times longer by that.
Consequently, the ratio of the wavelength to the size of the emitter changes.
E.g. Cathedral made of matches: if it were built on a real scale, it would collapse, because enlarged matches are not strong enough.
We might think that it is enough to take a larger earth (because of the same gravitation). But then it would become even worse!
I 730
Symmetry/Law/Conservation Law/Quantum Mechanics: in quantum mechanics there is a corresponding conservation law for every symmetry! This is a very profound fact.
The fact that the laws are symmetrical under translation in time means, in quantum mechanics, that energy is conserved.
Invariance in rotation corresponds to the conservation of the angular momentum. (In quantum mechanics).
I 731
Symmetry/Reflection of Space/Right/Left/Direction/Space Direction/Feynman: a clock rebuilt with every part mirror-reflected would run in the same way.
If this was correct, however, it would be impossible to distinguish between "right" and "left" by any physical phenomenon, just as it is impossible to define an absolute speed by a physical phenomenon.
The empirical world, of course, need not be symmetrical. We can define the direction in geography.
But it does not seem to violate the physical laws that everything is changed from right to left.
E.g. right/left: If you wanted to find out where "right" is, a good method would be to buy a screw in a hardware store. Most have right-handed threads. It's just much more likely. (Convention.)
I 732
E.g. right/left: next possibility: light rotates its plane of polarization when it passes through sugar water. So we can define "right-rotating".
But not with artificially made sugar, only with sugar from living creatures! (>Monod, molecular structure, right-rotating/left-rotating).
I 733
Feynman: it looks as if the phenomena of life (in which molecules of one handedness are far more frequent) allow the distinction between left and right.
But that is not the case!
The Schrödinger equation tells us that right- and left-handed molecules behave physically in exactly the same way. Nevertheless, there is only one direction in life!
I 734
Conservation Law: there is no conservation of the number of right-handed molecules. Once started, evolution has increased their number, and we can multiply them further.
We can assume that the phenomena of life do not violate symmetry but, on the contrary, demonstrate the universal nature and the ultimate origin of all living creatures.
I 737
Mirror Symmetry: is fulfilled by the laws of: electricity, gravitation, magnetism, nuclear forces.
They cannot be used to define right/left!
But there is a violation of symmetry in nature: the weak decay (beta decay). (1954): there is a particle (the τ meson) that decays into three π mesons, and another (the θ) that decays into two.
I 738
Def South Pole: can only be defined via cobalt isotopes: it is the pole from which the electrons in a beta decay are preferentially emitted.
This is the only way to explain right/left unambiguously to a Martian: he gets building instructions for a beta decay in a cooled system.
I 739
Parity/Law of Violation of Parity Conservation/Asymmetry/Symmetry/Feynman: the only unsymmetrical law in nature; the violation occurs only in these very slow reactions: the particles that carry a spin (electron, neutrino, etc.) come out with a left-handed spin. The law combines the polar vector of a velocity and the axial vector of an angular momentum, stating that the angular momentum is more likely to be opposite to the velocity than parallel to it.
I 742
Symmetry/Nature/Feynman: where does it come from? We don't know.
R. Feynman: Vom Wesen physikalischer Gesetze. München 1993
Fey I
R. Feynman: Vorlesungen über Physik I. München 2001
Ed. Martin Schulz, access date 2017-05-29 |
3c7de962ac1d5ed7 | News: Atomic orbital
Canon 1 a 2
xantox, 18 January 2009 in Gallery
1. Animation created in POV-Ray by Jos Leys. Music performed by xantox with Post Flemish Harpsichord, upper manual.
Atomic orbital
xantox, 20 April 2008 in Gallery
Time evolution of a hydrogenic (single-electron) atomic orbital with quantum numbers |3, 2, 1⟩ according to the Schrödinger equation (colors represent phase). In atomic matter, electrons orbiting the nucleus do not follow any determined classical path, but exist for each quantum state within an orbital, which can be visualized as a cloud of the probabilities of observing the electron at any given location and time.
1. © Dean E. Dauger
Reversible computation
xantox, 20 January 2008 in Computation
A computation (from Latin computare, “to count”, “to cut”) is the abstract representation of a physical process in terms of states and of transitions between states, or events.
The definition of possible states and events is formulated in a computation model, such as the Turing machine or the finite automaton. For example, a Turing machine state is the complete sequence of symbols on its tape plus the head’s position and internal symbol, and an event is the motion between two successive states, defined deterministically as a combination of read, write, move left and move right elementary motions.
In order to perform a computation, a robust mapping is first established between a computation model and a physical system, meaning that states and events in the model are used to label states and events observed in the system, and that the chosen correspondence is sufficiently stable with respect to various kinds of perturbations.
The system is then prepared in an initial state and is allowed to evolve through a path of events within the space of states, until it eventually reaches a state labeled as final. The discretized dynamics of the computational space may be represented with a directed graph, where nodes are possible states of the system and edges are events transforming a state into another.
Cellular automata state transition graph for n=3 rule 249, L=15, seed 0, displaying trees rooted in attractor cycles (© A. Wuensche, M. Lesser)
Cellular automata state transition graph for n=18 rule 110, displaying irreversible computational dynamics (© A. Wuensche, M. Lesser)
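The discretized dynamics can be made concrete with a minimal sketch (assuming the usual Wolfram rule-number convention, as in the captions above): every global state of an elementary cellular automaton on a ring has exactly one successor, and merging paths show that the global map is irreversible:

```python
from itertools import product

RULE = 110  # Wolfram rule number, as in the second caption above
L = 8       # ring of L cells, hence 2**L global states

def step(state):
    """One synchronous update of the elementary CA on a ring."""
    return tuple(
        (RULE >> (4 * state[(i - 1) % L] + 2 * state[i] + state[(i + 1) % L])) & 1
        for i in range(L)
    )

# Every global state has exactly one successor (out-degree 1), but many
# states can share a successor; the merging of paths is what makes the
# dynamics irreversible.
graph = {s: step(s) for s in product((0, 1), repeat=L)}
in_degree = {}
for succ in graph.values():
    in_degree[succ] = in_degree.get(succ, 0) + 1
print(max(in_degree.values()) > 1)  # True: the global map is not injective
```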
Logical reversibility
A function is said reversible (from latin revertere, ‘to turn back’) if, given its output, it is always possible to determine back its input, which is the case when there is a one-to-one relationship between input and output states. If the space of states is finite, such a function is a permutation. Logical reversibility implies conservation of information.
When several input states are mapped onto the same output state, then the function is irreversible, since it is impossible by only knowing the final state to find back the initial state. In boolean algebra, NOT is reversible, while SET TO ONE is irreversible. Two-argument boolean functions like AND, OR, XOR are also irreversible, since they map 2² input states into 2¹ output states, so that information is lost in the merging of paths, as shown in the following graph of a NAND computation, whose reverse evolution is no longer deterministic.
Irreversible NAND computation
The right side tries to depict the inverse mapping to the left side
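The one-to-one criterion is easy to test exhaustively (a minimal Python sketch; the helper name is illustrative):

```python
from itertools import product

# A boolean function is logically reversible exactly when it is
# one-to-one on its input states.
def is_reversible(fn, n_inputs):
    outputs = [fn(*bits) for bits in product((0, 1), repeat=n_inputs)]
    return len(set(outputs)) == len(outputs)

print(is_reversible(lambda a: 1 - a, 1))           # NOT: True
print(is_reversible(lambda a: 1, 1))               # SET TO ONE: False
print(is_reversible(lambda a, b: 1 - (a & b), 2))  # NAND: False
print(is_reversible(lambda a, b: a ^ b, 2))        # XOR: False
```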
Physical reversibility
Known laws of physics are reversible. This is the case both of classical mechanics, based on lagrangian/hamiltonian dynamics, and of standard quantum mechanics, where closed systems evolve by unitary transformations, which are bijective and invertible. As a consequence, when a physical system performs an irreversible computation, the computation model’s mapping indicates that the computing system cannot stay closed.
More precisely, since an irreversible computation reduces the space of physical information-bearing states, their entropy must decrease; this can only happen by increasing the entropy of the non-information-bearing states, which represent the thermal part of the system.
In 1961 Landauer studied this thermodynamical argument and proposed the following principle: if a physical system performs a logically irreversible classical computation, then it must increase the entropy of the environment, with an absolute minimum heat release of kT ln(2) per lost bit (where k is Boltzmann's constant and T the temperature, i.e. about 3 × 10⁻²¹ joules at room temperature) [2], which emphasizes two facts:
• the logical irreversibility of a computation implies the physical irreversibility of the system performing it ("information is physical");
• logically reversible computations may be, at least in principle, intrinsically nondissipative (which bears a relationship to Carnot's heat engine theorem, showing that the most efficient engines are reversible ones, and to Clausius' theorem, attributing zero entropy change to reversible processes).
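To fix the order of magnitude of Landauer's bound, a short numerical check (taking room temperature as 300 K, with the Boltzmann constant from SciPy):

from math import log
from scipy.constants import k      # Boltzmann constant, ~1.38e-23 J/K

T = 300.0                          # room temperature, in kelvin
print(k * T * log(2))              # ~2.87e-21 J per erased bit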
Reversible embedding of irreversible computations
Landauer further noticed that any irreversible computation may be transformed into a reversible one by embedding it into a larger computation where no information is lost, e.g. by replicating every output in the input ('sources') and every input in the output ('sinks').
For example, the irreversible NAND function seen above may be embedded in the following bijection, also known as the Toffoli gate [3] (the original function is indicated in red):
NAND embedding in a reversible Toffoli gate
The additional bits of information, like Ariadne’s threads, ensure that any computational path may be reversed: they are the garbage of the forward path and the program of the backwards path. Instead of losing them in the environment, they are kept in the controlled computational space.
Toffoli gates are universal reversible logic primitives, meaning that any reversible function may be constructed in terms of Toffoli gates. The Fredkin gate is another example of a universal reversible logic primitive. It exchanges its two inputs depending on the state of a third control input, thus allowing any computation to be embedded into a conditional routing of paths carrying conserved signals.
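Here is a small Python sketch of the Toffoli gate as a permutation of three-bit states, checking both the NAND embedding and reversibility (illustrative code, not taken from the cited papers):

from itertools import product

def toffoli(a, b, c):
    # flip the target bit c iff both control bits a, b are set
    return (a, b, c ^ (a & b))

# with the target prepared as c = 1, the third output bit is NAND(a, b)
for a, b in product((0, 1), repeat=2):
    print((a, b), '->', toffoli(a, b, 1)[2])            # 1, 1, 1, 0

states = list(product((0, 1), repeat=3))
assert sorted(toffoli(*s) for s in states) == states    # a bijection
assert all(toffoli(*toffoli(*s)) == s for s in states)  # its own inverse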
Some railroad switches are reversible
Reversible computation models
The billiard-ball model, invented by Fredkin and Toffoli [4], was one of the first computation models focusing on implementation with reversible physical components. Based on the laws of classical mechanics, it is equivalent to the formalism of the kinetic theory of perfect gases. The presence of a moving rigid sphere at a specified point is defined as 1, its absence as 0. Interactions by means of right-angle collisions allow the construction of various logic primitives, like for example the following 2-input, 3-output universal gate due to Feynman [5], who also proposed with Ressler a billiard-ball version of the Fredkin gate.
Feynman gate (© CJ. Vieri, MIT)
Feynman switch gate
B detects A without affecting its path
In practice, these computing spheres would be very unreliable, as instabilities arising from arbitrarily small perturbations would quickly generate chaotic deviations, producing an output saturated with errors. The errors may be corrected (for example, by adding potentials to stabilize the paths); however, the error correction process is itself irreversible and dissipative, since it has to erase the erroneous information. Hence, error correction appears to be the only aspect of computation setting a lower bound on energy dissipation.
A more stable approach is that of the Brownian computation model [6], where thermal noise is, on the contrary, allowed to interact freely with a computing system near equilibrium. Potential energy barriers define the paths of a computational space, where the system walks randomly until it eventually reaches a final state. RNA polymerase, the enzyme involved in DNA transcription, is an example of a Brownian, logically reversible tape-copying machine. The DNA replication process also follows a similar mechanism, but adds a logically irreversible error-correcting step.
Lecerf-Bennett reversal
The embedding method is however insufficient to build a physically reversible universal computer, since the growing amount of information needing to be replicated at each event would saturate any finite memory. Computation would then come to an end, unless the memory were irreversibly erased; but then dissipation would merely have been postponed, not avoided.
This seemed to rule out the possibility of useful reversible computing machines, until a remarkable solution was found by Bennett [7] (earlier work by Lecerf [8] anticipated its formal method), showing that it is possible, at least in principle, to perform an unlimited amount of computation without any energy dissipation.
The reversible system computes the embedding function twice: the first time "forwards", to obtain and save the computation result, and the second time "backwards", as a mirror-image computation of the inverse function, de-computing the first step and returning the closed system to its initial state.
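The pattern can be mimicked with a toy reversible computation in Python: run forward while keeping the history trail, copy out the result, then run the mirror image to restore the initial state (the XOR steps are illustrative placeholders for any reversible primitive):

def forward(state, steps):
    history = []                  # the Ariadne's thread of the forward path
    for s in steps:
        history.append(s)
        state ^= s                # XOR with a constant is reversible
    return state, history

def backward(state, history):
    for s in reversed(history):   # mirror-image de-computation
        state ^= s
    return state

initial = 0b1010
result, history = forward(initial, [0b0011, 0b0110, 0b1100])
saved = result                    # save the result before reversing
assert backward(result, history) == initial   # back to the initial state
print(bin(saved))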
M. C. Escher, Swans (1956).
All M.C. Escher works (c) 2007 The M.C. Escher Company - the Netherlands.
All rights reserved. Used by permission.
Logical irreversibility and Maxwell’s demon
In 1867 Maxwell devised a thought experiment involving a finite microscopic "demon" capable of observing the motion of individual molecules. This demon guards a small hole separating two containers, filled with gas at the same temperature. When a molecule approaches, the demon checks its speed and then opens or closes a shutter, so that slower molecules always go into one container (cooling it) and faster molecules into the other (heating it), in apparent violation of the 2nd law of thermodynamics.
A first important step toward the solution of this controversial paradox was taken in 1929 by Szilard [9] who, after avoiding dualistic traps by substituting a simple machine for the intelligent demon, suggested that proper accounting of entropy is restored in the process of measuring the molecule's position. This explanation remained the standard one until 1981, when Bennett showed [6] that the fundamentally dissipative step is, surprisingly, not the measurement (which can be done reversibly) but the logically irreversible erasure of the demon's memory, to make room for new measurements.
Reversibility in quantum computation
Quantum computation takes advantage of the physical effects of superposition and entanglement, leading to a qualitatively new computation paradigm [10]. In quantum mechanical computation models, all events occur by unitary transformations, so that all quantum gates are reversible.
Quantum systems are less susceptible to certain kinds of errors affecting classical computations, since their discrete spectrum prevents trajectories from becoming chaotic, so that, for example, a quantum "billiard-ball model" is more reliable than its classical counterpart.
However, quantum systems are also affected by new sources of error arising from interactions with the environment, such as the loss of quantum coherence. It is possible to correct generic quantum errors up to a limit [11], so as to reconstruct an error-free quantum state, at the price of performing an irreversible quantum erasure of the erroneous quantum information.
I thank Charles H. Bennett for stimulating comments on the draft.
1. A. Wuensche, M. Lesser, "The Global Dynamics of Cellular Automata", Vol. I of the Santa Fe Institute Studies in the Sciences of Complexity, Addison-Wesley (1992) [images of cellular automata state transition graphs].
2. R. Landauer, "Irreversibility and heat generation in the computing process", IBM Journal of Research and Development, 5:3, 183 (1961) [logical irreversibility, Landauer's principle].
3. T. Toffoli, "Reversible computing", Tech. Memo MIT/LCS/TM-151, MIT Lab. for Computer Science (1980) [Toffoli gate, reversible automata].
4. E. Fredkin, T. Toffoli, "Conservative logic", International Journal of Theoretical Physics, 21:3-4, 219-253 (1982) [billiard-ball model].
5. R. P. Feynman, "Feynman Lectures on Computation (1984-1986)", Perseus Books (2000).
6. C. H. Bennett, "The thermodynamics of computation - a review", International Journal of Theoretical Physics, 21:12, 905-940 (1982) [Brownian computation model; logical irreversibility and Maxwell's demon].
7. C. H. Bennett, "Logical reversibility of computation", IBM Journal of Research and Development, 17:6, 525 (1973). [In this paper, related to the problem of the connection between computing and heat generation explored by Landauer, Bennett devised the "save result and reverse" method and proved that any irreversible computation may be simulated reversibly.]
8. Y. Lecerf, "Machines de Turing réversibles", English translation by M. Frank, "Reversible Turing Machines", Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 257:2597-2600 (1963). [In this mathematical paper, unrelated to issues of physical reversibility, Lecerf sought to design a reversible Turing machine. It is the first work proposing the method of saving the computation history and then decomputing it away, though it initially had little impact and was 'discovered' only well after Bennett's results, perhaps because it was not published in English and Lecerf himself did not emphasize it. It has a minor flaw, i.e. the inverse of a read-write-shift quintuple is a quintuple of a different sort, namely shift-read-write.]
9. L. Szilard, "Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen", Zeitschrift für Physik, 53, 840-856 (1929); English translation "On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings", Behavioral Science, 9:4, 301-310 (1964).
10. D. Deutsch, "Quantum Theory, the Church-Turing Principle, and the Universal Quantum Computer", Proc. Roy. Soc. Lond. A400, 97-117 (1985). [Foundation of the quantum model of computation, universal quantum Turing machine.]
11. A. R. Calderbank, P. W. Shor, "Good quantum error-correcting codes exist", Phys. Rev. A 54, 1098-1105 (1996).
Marangoni flow
xantox, 6 January 2008 in Gallery
Liquid surfaces are pulled by intermolecular forces, which are unbalanced at the boundary, producing surface tension. When liquid layers with different surface tensions come into contact, these forces cause a flow, known as the Marangoni effect [1], which is also the origin of the beautiful patterns found in the ancient Japanese art of Suminagashi ("floating ink"). In this image, a film of oleic acid surfactant (with surface tension 32.5 mN/m) spontaneously and quickly spreads about 2.5 mm over a layer of glycerol (with surface tension 63.4 mN/m). Both Marangoni and capillary stresses cause variations in the film thickness, leading to dendritic flow patterns. The contour lines are interference fringes.
Branching Dynamics in Surfactant Driven Flow
1. C. Marangoni, "Über die Ausbreitung der Tropfen einer Flüssigkeit auf der Oberfläche einer anderen", Ann. Phys. Leipzig, 143:337-354 (1871).
2. © B. J. Fischer, A. A. Darhuber, S. M. Troian, Department of Chemical Engineering, Princeton University
Water Clouds
xantox, 17 September 2007 in Gallery
© 2004 Sarah Robinson & Jean Hertzberg, University of Colorado
The convective clouds known as cumulus are produced by the vertical winds occurring in regions of warm moist air, lifted by buoyancy (Archimedes' principle). This rapid lifting results in adiabatic expansion and cooling, and the consequent accretion of water droplets. The irregular distribution of droplets scatters sunlight geometrically in all directions, producing a bright white appearance, as in snow, decaying into gray shades according to optical thickness. Each cloud is short-lived, lasting approximately 15 minutes on average.
Classical Molecules
xantox, 9 July 2007 in Gallery
Animation showing the interaction of four charges of equal mass [1], two positive and two negative, in the approximation of classical electromagnetism. The particles interact via the Coulomb force, mediated by the electric field represented in yellow. A repulsive "Pauli force" of quantum mechanical origin, which becomes very large at a critical distance of about the radius of the spheres shown in the animation, keeps the charges from collapsing onto the same point. Additionally, the motion of the particles is damped by a term proportional to their velocity, allowing them to "settle down" into stable (or meta-stable) states.
When the charges are allowed to evolve from the initial state, the first thing that happens (very quickly, since the Coulomb attraction between unbalanced charges is very large) is that they pair off into dipoles. Thereafter, there is still a (much weaker) interaction between neighboring dipoles (the van der Waals force). Although in principle it can be either repulsive or attractive, there is a torque that rotates the dipoles so that the interaction is attractive, eventually bringing the two dipoles together in a bound state. This mechanism binds the molecules of some substances into a solid.
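A rough NumPy sketch of this damped dynamics is given below; the parameters, the softened repulsive core, and the integrator are illustrative guesses, not the actual MIT TEAL implementation:

import numpy as np

q = np.array([1.0, 1.0, -1.0, -1.0])            # two positive, two negative
pos = np.random.default_rng(0).uniform(-1, 1, (4, 2))
vel = np.zeros((4, 2))
dt, damping, core = 1e-3, 0.5, 0.1              # core ~ sphere radius

def forces(pos):
    f = np.zeros_like(pos)
    for i in range(4):
        for j in range(4):
            if i != j:
                r = pos[i] - pos[j]
                d = np.linalg.norm(r)
                f[i] += q[i] * q[j] * r / d**3      # Coulomb force
                f[i] += (core / d)**12 * r / d**3   # steep "Pauli" core
    return f

for step in range(20000):                        # damped explicit integration
    vel += dt * (forces(pos) - damping * vel)
    pos += dt * vel

print(pos)   # with suitable parameters the charges settle into two dipoles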
1. © 2004 MIT TEAL/Studio Physics Project, John Belcher
d03ceb0b83cda55b |
I was wondering what is known about the solution of the Schrödinger equation $$i h \frac{\partial}{\partial t} \Psi(x, t) = -\frac{h^2}{2m}\Delta \Psi(x,t) + V(x)\Psi(x, t)$$ for $t \in \mathbb{R}$. What sort of conditions are put on the potential $V$ to guarantee a solution, and what space does a solution lie in? I could find information about the equation $$-i h \frac{\partial}{\partial t} \Psi(x, t) = \frac{h^2}{2m}\Delta \Psi(x,t)$$ for $x \in \mathbb{R}^n$ and $t > 0$, but most everything I see about the previous equation I find hard to understand. Is there some reference where such issues are dealt with in a clear manner?
Try to find the solution in the form $\Psi(x,t)=u(x)T(t)$. – vesszabo Oct 17 '12 at 18:59
You might perhaps get more answers to this on the Physics Stack Exchange. If you want, you can flag your question for ♦ moderator attention and ask for it to be migrated there. – Ilmari Karonen Oct 17 '12 at 19:01
@IlmariKaronen As I am a maths student and am completely unaware of Physics, I was hoping for more math solutions – Vivek Oct 17 '12 at 19:20
@vesszabo I was interested in the question of existence of solution, weak or otherwise, and I was hoping for some reference which puts the problem in a proper framework and suggests the most general conditions on $V$ which would ensure solution. This paper www-m3.ma.tum.de/foswiki/pub/M3/Allgemeines/CarolineLasser/… mentions in the first paragraph that the Schrödinger equation has a global solution etc, but does not mention any source. – Vivek Oct 17 '12 at 19:51
Thanks for the link. I will think about it and try to ask one of my colleague. (He is an expert :-) ) – vesszabo Oct 17 '12 at 20:06
1 Answer 1
Partial answer. Substituting $\Psi(x,t)=u(x)T(t)$ we obtain $$ ih\frac{T'(t)}{T(t)}=-\frac{h^2}{2m}\cdot\frac{\Delta u(x)}{u(x)}+V(x)=K, $$ where $K$ is a constant. From this $$ T(t)=c_1 \exp\left(-\frac{iKt}{h}\right), $$ where $c_1$ is an arbitrary constant. For $u(x)$ we get $$ \frac{h^2}{2m} \Delta u(x)-(V(x)-K)u(x)=0. $$ Without loss of generality we may assume that $\frac{h^2}{2m}=1$. This equation is time-independent and has an enormous literature.
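As a quick sanity check of the time factor, a short SymPy verification (using the question's convention $h$ for the constant and treating $K$ as a constant):

import sympy as sp

t, h, K, c1 = sp.symbols('t h K c_1', nonzero=True)
T = c1 * sp.exp(-sp.I * K * t / h)
# i*h*T'(t) - K*T(t) should vanish identically:
print(sp.simplify(sp.I * h * sp.diff(T, t) - K * T))   # 0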
63ba086e5f4ce9b7 |
I've just uploaded to the arXiv my paper "A global compact attractor for high-dimensional defocusing non-linear Schrödinger equations with potential", submitted to Dynamics of PDE. This paper continues some earlier work of mine in an attempt to understand the soliton resolution conjecture for various nonlinear dispersive equations, and in particular nonlinear Schrödinger equations (NLS). This conjecture (which I also discussed in my third Simons lecture) asserts, roughly speaking, that any reasonable (e.g. bounded energy) solution to such equations eventually resolves into a superposition of a radiation component (which behaves like a solution to the linear Schrödinger equation) plus a finite number of "nonlinear bound states" or "solitons". This conjecture is known in many perturbative cases (when the solution is close to a special solution, such as the vacuum state or a ground state), as well as in defocusing cases (in which no non-trivial bound states or solitons exist), but is still almost completely open in non-perturbative situations (in which the solution is large and not close to a special solution) which contain at least one bound state. In my earlier papers, I was able to show that for certain NLS models in sufficiently high dimension, one could at least say that such solutions resolved into a radiation term plus a finite number of "weakly bound" states whose evolution was essentially almost periodic (or almost periodic modulo translation symmetries). These bound states also enjoyed various additional decay and regularity properties. As a consequence of this, in five and higher dimensions (and for reasonable nonlinearities), and assuming spherical symmetry, I showed that there was a (local) compact attractor $K_E$ for the flow: any solution with energy bounded by some given level $E$ would eventually decouple into a radiation term, plus a state which converged to this compact attractor $K_E$. In that result, I did not rule out the possibility that this attractor depended on the energy $E$. Indeed, it is conceivable for many models that there exist nonlinear bound states of arbitrarily high energy, which would mean that $K_E$ must increase in size as $E$ increases to accommodate these states. (I discuss these results in a recent talk of mine.)
In my new paper, following a suggestion of Michael Weinstein, I consider the NLS equation
$$i u_t + \Delta u = |u|^{p-1} u + V u$$
where $u: \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}$ is the solution, and $V \in C^\infty_0(\mathbb{R}^d)$ is a smooth compactly supported real potential. We make the standard assumption $1 + \frac{4}{d} < p < 1 + \frac{4}{d-2}$ (which is asserting that the nonlinearity is mass-supercritical and energy-subcritical). In the absence of this potential (i.e. when $V=0$), this is the defocusing nonlinear Schrödinger equation, which is known to have no bound states, and in fact it is known in this case that all finite energy solutions eventually scatter into a radiation state (which asymptotically resembles a solution to the linear Schrödinger equation). However, once one adds a potential (particularly one which is large and negative), both linear bound states (solutions to the linear eigenstate equation $(-\Delta + V) Q = -E Q$) and nonlinear bound states (solutions to the nonlinear eigenstate equation $(-\Delta+V)Q = -EQ - |Q|^{p-1} Q$) can appear. Thus in this case the soliton resolution conjecture predicts that solutions should resolve into a scattering state (that behaves as if the potential was not present), plus a finite number of (nonlinear) bound states. There is a fair amount of work towards this conjecture for this model in perturbative cases (when the energy is small), but the case of large energy solutions is still open.
In my new paper, I consider the large energy case, assuming spherical symmetry. For technical reasons, I also need to assume very high dimension $d \geq 11$. The main result is the existence of a global compact attractor $K$: every finite energy solution, no matter how large, eventually resolves into a scattering state and a state which converges to $K$. In particular, since $K$ is bounded, all but a bounded amount of energy will be radiated off to infinity. Another corollary of this result is that the space of all nonlinear bound states for this model is compact. Intuitively, the point is that when the solution gets very large, the defocusing nonlinearity dominates any attractive aspects of the potential $V$, and so the solution will disperse in this case; thus one expects the only bound states to be bounded. The spherical symmetry assumption also restricts the bound states to lie near the origin, thus yielding the compactness. (It is also conceivable that the localised nature of $V$ also restricts bound states to lie near the origin, even without the help of spherical symmetry, but I was not able to establish this rigorously.)
This weekend I was (once again) in San Diego, this time for the Southern California Analysis and PDE (SCAPDE) meeting. I gave a talk on “The asymptotic behaviour of large data solutions to NLS”, which is based on two of my previous papers on what solutions to focusing nonlinear Schrödinger equations behave like as time goes to infinity. (Note that this is a specialist conference, and this talk will be a bit more technical than some of the general-audience talks that I have blogged about previously.)
22b40d35cc3ed86d |
I want to solve the time-dependent Schroedinger equation:
$$ i\partial_t \psi(t) = H(t)\psi(t) $$
for matrix, time-dependent $H(t)$ and vector $\psi$.
What is an efficient way of doing this so that it efficiently scales to high-dimensional spaces?
what are the values of b and omega for the second plot? I want to check my code to see if it is working or not! Thanks Jiyan. – user21640 Oct 23 '14 at 17:26
3 Answers 3
Time-dependent case
in the time-dependent case, $[H(t),H(t')]\neq0$ in general and we need to time-order, i.e., the operator taking a state from $t=0$ to $t=\tau$ is $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\, H(t))$ with $\mathcal{T}$ the time-ordering operator. In practice we just split the time interval into lots of small pieces (essentially using the Baker-Campbell-Hausdorff formula).
So, consider the time-dependent Hamiltonian for a two-level system:
$$ H = \left( \begin{array}{cc} \epsilon_1 & b \cos(\omega t) \\ b\cos(\omega t) & \epsilon_2 \end{array} \right) $$
i.e. two levels coupled by a time-periodic driving (see here). Even this simplest possible periodically-driven system can't be solved analytically in general.
Anyway, here's a function to construct the hamiltonian:
ham[e1_, e2_, b_, omega_,
t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}}
and here's one to construct the propagator from some initial time to some final time, given a function to construct the Hamiltonian matrix at each point in time (and splitting the interval into $n$ slices--you should try with increasing $n$ until your results stop changing):
constructU::usage = "constructU[h,tinit,tfinal,n]";
constructU[h_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n],
   curVal = IdentityMatrix[Length@h[0]]},
  Do[curVal = MatrixExp[-I*h[t]*dt].curVal, {t, tinit, tfinal - dt, dt}];
  curVal]
This constructs the operator $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\,H(t))$ as $$ U(0,\tau)\approx\prod_{n=0}^{N}\exp\left( -iH(ndt)dt \right) $$ with $N=\tau/dt-1$ (or its ceiling anyway). This is an approximation to the correct $U$.
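For readers working outside Mathematica, the same product formula can be sketched with NumPy/SciPy; this is a minimal, unoptimized translation of constructU, not the answer's own code:

import numpy as np
from scipy.linalg import expm

def ham(e1, e2, b, omega, t):
    c = b * np.cos(omega * t)
    return np.array([[e1, c], [c, e2]])

def construct_u(h, t_init, t_final, n):
    # U(t_init, t_final) ~ ordered product of exp(-i H(t) dt) over n slices
    dt = (t_final - t_init) / n
    u = np.eye(len(h(0.0)), dtype=complex)
    for j in range(n):
        u = expm(-1j * h(t_init + j * dt) * dt) @ u
    return u

u = construct_u(lambda t: ham(-1.0, 1.0, 1.0, 1.0, t), 0.0, 10.0, 200)
psi = u @ np.array([1.0, 0.0])
print(np.vdot(psi, np.diag([1.0, -1.0]) @ psi).real)   # <sigma_z> at t = 10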
And now here is how to look at the time-dependent expectation of $\sigma_z$ for different coupling strengths $b$:
ClearAll[cU, psi0];
psi0 = {1., 0};
Manipulate[
 ListPlot[
  Table[
   Chop[#\[Conjugate].PauliMatrix[3].#] &@(constructU[
       ham[-1., 1., b, 1., #] &, 0, upt, 100].psi0),
   {upt, .01, 20, .1}],
  Joined -> True,
  PlotRange -> {-1, 1}],
 {b, 0, 2}]
Alternatively, you could calculate the wavefunction at some time tfinal given the wavefunction at time tinit with this:
propPsi[h_, psi0_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n], psi = psi0},
  Do[psi = MatrixExp[-I*h[t]*dt, psi], {t, tinit, tfinal - dt, dt}];
  psi]
which uses the form MatrixExp[-I*h*t, v]. For large sparse matrices (e.g., for h a many-body Hamiltonian), this can be much faster, at the cost of losing access to $U$.
Thanks a lot for all this. However, as you mentioned in your previous comment, my problem is a time-dependent Schrödinger equation. In this case, the Hamiltonian doesn't commute with itself at different times, and the solution can't just be a simple exponential; it has to be a path-ordered exponential. This is the reason that I can't do this. Such problems don't have an analytical solution; they have to be solved numerically!! – ZKT Mar 21 '13 at 13:49
@Zahra For a time-dep H, you can simply construct the propagator from $t=0$ to some time $t=\tau$, say. I can explain how if you want. But let me know if you actually want it, so I don't waste my time if you insist on doing it with NDSolve--but ask yourself which is the most practical way if you have a Hilbert space of dimension 20000, for instance (so you'd need to solve 20000 coupled ODEs with your approach) – acl Mar 21 '13 at 15:35
Thanks a lot for the time you are giving to my question. I really appreciate it. I'm not insisting on solving my problem with NDSolve. It would be great if I could solve it the way you are explaining. I just thought it was not possible to solve it non-numerically. Can you please tell me how to do that? I appreciate it. – ZKT Mar 21 '13 at 15:55
@Zahra oh I see, no, what I suggest is fully numerical. You're absolutely right that such problems cannot be solved analytically in general. OK let me write it up quickly and you can see if it's useful (I routinely use this on systems with much bigger Hilbert spaces than yours, up to 20-30000) – acl Mar 21 '13 at 15:58
here you go (I had actually done this the first time, just posted only the time-independent limit because I did not realize you had a time-dependent hamiltonian). Note that the way I construct the Manipulate is not efficient because I recalculate $U$ from scratch all the time, but it's fast enough... – acl Mar 21 '13 at 16:04
Since there hasn't been any discussion of NDSolve yet, let me point out that for a finite-dimensional Hilbert space, where the Schrödinger equation is merely a first-order equation in time, it's easiest to just do this (using the two-dimensional Hamiltonian ham from acl's answer):
ham[e1_, e2_, b_, omega_,
  t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}};

Manipulate[Module[{ψ, sol, tMax = 20},
  sol = First@NDSolve[{I D[ψ[t], t] ==
      ham[-1, 1, b, 1, t].ψ[t], ψ[0] == {1, 0}}, ψ, {t, 0, tMax}];
  Plot[Chop[#\[Conjugate].PauliMatrix[3].#] &@(ψ /. sol)[t],
   {t, 0, tMax}, PlotRange -> {-1, 1}]],
 {{b, 1}, 0, 2}]
I copied the parameters from acl's answer too, to show the direct comparison in the Manipulate. Here the vector $\psi$ is recognized by NDSolve as two-dimensional, so the formulation of the problem is quite concise, and we can leave the time step choice up to Mathematica instead of choosing a discretization ourselves.
In fact the original question explicitly mentioned NDSolve (I edited it to make it less localized). There's nothing wrong with NDSolve for up to a few thousand states, but the approach I gave scales much better (I use it for systems with dimensions in the tens of thousands; NDSolve seems to tank much earlier); of course the way I wrote the code it's inefficient. – acl Mar 22 '13 at 9:33
as an additional comment, this approach (using NDSolve directly) works also for cases where the "Hamiltonian" depends on the wavefunction, so that we have a set of nonlinear coupled ODEs. This kind of problem appears in various mean-field approaches to many-body systems (e.g., the Gutzwiller-ansatz approach to many-body dynamics of bosons, see e.g. eq 3 here ). I've used NDSolve for precisely this problem with up to a couple of thousand coupled ODEs; it's really not practical at those sizes, but there's no alternative (in mma) for nonlinear ODEs. – acl Mar 22 '13 at 14:10
@acl Thanks for pointing that out (I already upvoted your answer). I think it would have been better to edit the question in such a way as to retain some information on what the OP originally tried already. – Jens Mar 22 '13 at 14:19
feel free to change it, I would not object (and I imagine neither would Zahra). I thought the question as it was was way too localized (eg, it was asked about a specific Hamiltonian, defined only in hard-to read code--take a look at the original form if you haven't) and wanted to make it as general as possible so it's useful. I think that phrasing it the current way it admits as many approaches as possible. – acl Mar 22 '13 at 14:47
@acl I see your point. No worries, it's fine the way it is. – Jens Mar 22 '13 at 15:16
Frame it as a set of linear ODEs and solve it somehow. I usually use implicit Runge-Kutta in the interaction picture.
solver[H_, a_] :=
soln = Module[{d, init, eq, vars, solargs, t, t0, tf},
d = Dimensions[H][[1]];
t0 = a[[2]];
tf = a[[3]];
t = a[[1]];
u[t_] := Table[Subscript[u, i, j][t], {i, 1, d}, {j, 1, d}];
init = (u[t0] == IdentityMatrix[d]);
eq = (I u'[t] == H.u[t]);
vars = Flatten[Table[Subscript[u, i, j], {i, 1, d}, {j, 1, d}]];
solargs = LogicalExpand[eq && init];
NDSolve[solargs, vars, a,
Method -> {"FixedStep",
Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10}},
StartingStepSize -> tf/100, MaxSteps -> Infinity]]];
U[t_] := u[t] /. soln[[1]]
Alternatively, you could solve for the $\psi(t)$ and obtain U as $|\psi(t)\rangle\langle \psi(0)| $ and use an appropriate normalisation to preserve probability.
4d22571d24476438 |
The Schrödinger equation in its variants for many particle systems gives the full time evolution of the system. Likewise, the Boltzmann equation is often the starting point in classical gas dynamics.
What is the relationship, i.e. the classical limit, which connects these two first-order-in-time equations of motion?
How does one approach this, or is there another way in which one sees the classical time evolution?
Where are these considerations relevant?
I have seen something very similar with mean-field equations in lecture notes by Francois Golse. He shows the connection between the Vlasov equation and the Hartree-Fock equation through just quantization. But the Boltzmann equation is not a mean-field equation... – r.g. Feb 13 '12 at 16:51
3 Answers 3
There are two different levels at which to see this connection. Formally, you can derive a Fokker-Planck equation from the Boltzmann equation and do a Wick rotation on the time variable. At present this can be seen as a mathematical curiosity.
But there is a more relevant way to recover this, given by a formulation of the quantum Boltzmann equation. There is a beautiful Physics Reports article by Bassano Vacchini and Klaus Hornberger that can be downloaded here. This equation is relevant for understanding the behavior of matter waves in interference experiments involving large molecules with their decoherence effects, as realized by Anton Zeilinger and Markus Arndt.
When the formal limit $\hbar\rightarrow 0$ is taken, the quantum Boltzmann equation reduces to its classical counterpart.
Thank you for the article. What do you mean by "presently"? Btw. I'm actually a student at the Boltzmann institute in Vienna. Fun fact about Markus Arndt: It's said that he never sleeps. As a matter of fact, it's been examined that he sends emails to his different PhD students, sometimes in 3 hour intervals, at times like 2am, 5am... Last time I saw him in front of the coffee machine, asking "Do you know where my friend Sarah (PhD Student) is?" His answer: "Working I hope!". But don't get me wrong, he is a very nice guy. :) – NikolajK Feb 14 '12 at 9:05
@NickKidman: You are welcome. By "presently" I mean that there is no consistent formulation of a Boltzmann equation in a context of stochastic processes relating it to the Schroedinger equation and so this is just a formal coincidence. Nice to know about Markus: The work he is performing is really striking. – Jon Feb 14 '12 at 9:14
Actually, your comment about the Fokker-Planck equation got me a little confused about the hierarchy of these equations. Could you add a comment on declining order, if there is one? I'd see the abstract Schrödinger equation on the top of the chain, but a relation (not limit) with that classical equation feels strange. There are also things like the Master equation, which sometimes seems to come before the Boltzmann equation even. – NikolajK Feb 14 '12 at 9:20
The idea is this (e.g. K. Huang, Statistical Mechanics): From the Boltzmann equation you can derive the diffusion equation, which has the form $$\partial_t\Theta=D\partial_{xx}\Theta.$$ Now, take $D$, the diffusion coefficient, arbitrary and rotate time as $t\rightarrow -it$. This gives you, in a strictly formal sense, the Schroedinger equation. – Jon Feb 14 '12 at 9:33
@Jon: The relation between Boltzmann equation and diffusion equation you mention is extremely formal, and very incidental. The interpretation is that the number of particles is diffusing, the BE is not an equation describing a single particle. So the rotation to the Schrodinger equation is certainly devoid of even the usual little bit of physical content of Wick rotation. if you use a 12 dimensional BE describing pairwise correlations, it would have no relation to SE, so this is really just a low-dimensional coincidence that both describe diffusion. – Ron Maimon Feb 14 '12 at 10:48
You are probably asking if there is a limit where the Schrodinger equation for many particles interacting with a potential reproduces the Boltzmann equation for many classical particles colliding in a potential. The answer is no, because the Boltzmann equation is irreversible in time, while the Schrodinger equation is reversible. The first-order BE does not have a symmetry between forward and backward time evolution; it's not an equation for complex amplitudes. It has an entropy which constantly increases. The Schrodinger equation is completely reversible.
It should be added that this is also true of classical particle dynamics--- as Loschmidt noted, it is impossible for the reversible classical particle dynamics to ever produce exact irreversible Boltzmann evolution. But Boltzmann understood that the equation was only approximate, valid only when multiparticle correlations could be ignored. But Boltzmann also understood from physical intuition that this was the case most of the time in real gases. So there is a sense in which it is possible to arrive at the Boltzmann equation from a statistical description of a classical gas. But it requires a truncation of the statistical description to only the function f(x,p) which describes the expected number of particles at position x and momentum p. This truncation is lossy, and it is the reason for the emergence of irreversibility.
So the more nuanced answer is that you can find a Boltzmann equation when you can truncate the statistical description into a low dimensional projection, and get the best approximate statistical evolution in this truncation.
Your intuition was probably that the SE should reduce to a BE because both describe the behavior of a bunch of particles in a statistical way. This is incorrect, because the SE is not a statistical description. Absent a measurement, which is not described by SE anyway, the SE gives you the time evolution of the complete state. There is nothing statistical going on without measurement.
The SE is also describing waves in a humongous 3N dimensional space, so that it describes all the entanglements between all the particles. To get to the BE, you need to truncate the space to just the expected number of particles at position x with momentum p. This truncation doesn't work with probability amplitudes--- only probabilities can be truncated like this. The reason is that if you have a truncated state X which internally can be one of two possible micro-states A or B, then with probabilities you can just say "the probability of X is the sum of the probability of A and the probability of B", but you can't say "the amplitude of X is the sum of the amplitude of A and the amplitude of B", because that's just wrong for quantum evolution.
So truncated partial descriptions are only for classical probabilities. So to reproduce classical Boltzmann dynamics you need to pass to a statistical description, which means density matrices, then take the classical limit of SE for density matrices, then project this probabilistic description from the 6N dimensional phase space to the 6 dimensional Boltzmann function.
The first step is the classical limit of QM, the second step is the original derivation of the Boltzmann equation from the full description of the stochastic gas in 6N dimensional phase space. You can't relate the equations in any other meaningful way.
I was just wondering, since both are tools with which one could describe several gas systems microscopically, there must be a connection at least between all the expectation values. Also, I'm a bit confused about the whole irreversibility thing: in view of the second law of thermodynamics, isn't irreversibility a good thing? Don't I want the (more) fundamental description to have this kind of property? – NikolajK Feb 15 '12 at 9:36
@Nick: yes--- in principle, if you have a gas of many atoms, and you evolve their wavefunction by SE, and you look at only the density of atoms with position x and momentum p, you should be able to reproduce a Boltzmann equation in some limit. But the limit is going to be complicated, because it is essentially two separate limits--- the classical limit (where the separation is large compared to the wavelength) plus the Boltzmann limit (where the multiparticle correlations are ignored). The second limit leads to the irreversibility, it corresponds to throwing away certain complex information. – Ron Maimon Feb 15 '12 at 14:16
@RonMaimon: One last thing: The Boltzmann equation is used in plasma physics. But there is some longer-range interaction going on, so how does that fit together with the statement? – NikolajK Feb 15 '12 at 15:12
@Nick Kidman: The essential approximation for the Boltzmann equation is that you can describe the statistics of the particles using the density at position x and momentum p, and that this density obeys a closed equation from 2-particle scattering. You can add a term which describes the interaction of each particle with an overall electrostatic field due to all the other particles, because you can extract the charge density from the position density distribution of the charged particles in the plasma, so it is also possible to write a closed equation. But this is classical dynamics always. – Ron Maimon Feb 15 '12 at 15:21
Truncated partial descriptions are definitely not only for classical systems. It has been known for a long time that the Boltzmann equation can be obtained without going through a classical intermediate stage, and variants with more realistic quantum collision operators must be derived in that way to give results consistent with experiment. See my own answer. – Arnold Neumaier Mar 18 '12 at 14:08
Irreversible equations such as the Boltzmann equation can be obtained rigorously as scaling limits of reversible microscopic equations such as a multiparticle Schroedinger equation.
A good entry point for studying your question is the survey paper by
• H. Spohn, Kinetic equations from Hamiltonian dynamics: Markovian limits, Rev. Mod. Phys. 52 (1980) 569.
You can follow up by reading one of the many papers citing it, found with author:spohn kinetic in Google Scholar.
Less rigorous versions of the same technique are ubiquitous in nonequilibrium statistical mechanics. I recommend two nice books: the book by
• Grabert, Projection operator techniques (very thorough theoretically), and the book by
• Oettinger, Beyond equilibrium thermodynamics (much more applied).
5039ac60d0057c18 |
Approx. Solution To Quantum Harmonic Oscillator for |x| large enough
1. Apr 4, 2014 #1
Hi folks!
I want to show that [tex]\Psi(x) = Ax^n e^{-m \omega x^2 / 2 \hbar}[/tex]
is an approximate solution to the harmonic oscillator in one dimension
[tex]-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} + \frac{1}{2}m \omega^2 x^2 \psi = E \psi[/tex]
for sufficiently large values of |x|. I thought this would be a simple matter of just plugging the approximate solution into the harmonic oscillator equation and erasing terms that large values of |x| reduce to 1 or 0.
However, this turned out to be harder than expected. The first thing I am wondering is whether my approach is correct.
Any help would be appreciated!
3. Apr 4, 2014 #2
Why not just use an exact eigenfunction of the 1D oscillator and take the limit of large ##|x|##? An exact eigenfunction contains a Hermite polynomial ##H_n(x)## and for large ##|x|## we have ##H_n(x) \sim x^n##.
4. Apr 5, 2014 #3
My background in mathematics is not very broad, and I have never worked with Hermite polynomials. Would you care to show how that limit develops? My calculus experience is quite limited, unfortunately.
I found this on wikipedia, but it seems there are other definitions of the polynomial as well.
5. Apr 5, 2014 #4
Anders, I imagine six people will follow up with clever methods for solving the harmonic oscillator exactly, but I wanted to answer your question directly, namely how do you derive an approximate solution for large x. Approximation methods are important! And they often do not get the treatment they deserve.
First, for convenience, write the equation in dimensionless form,
-ψ'' + x²ψ = Eψ
For large x, we ask which of the three terms we can say will dominate the others. Clearly x²ψ >> Eψ, so we can discard Eψ. We are left with two terms, and that's the least number of terms you can retain and still have a nontrivial equation! So the first approximate equation is
-ψ'' + x²ψ = 0
Now orders of magnitude come into play. The familiar ones are powers xⁿ, but exceeding all of these are exponentials exp(ax), and even larger than that is exp(ax²). So the first approximate solutions are ψ = C exp(±x²/2), and square integrability selects the decaying one, ψ = C exp(-x²/2). Or C x exp(-x²/2), or C xⁿ exp(-x²/2), these all work equally well. Or in fact C f(x) exp(-x²/2), where f(x) is any function that's polynomially bounded (no larger than xⁿ as |x| → ∞, for some n).
What we do is factor out the exponential behavior by substituting ψ(x) = f(x) exp(-x²/2), and put this back in the original exact equation. This gives us an exact equation for f:
- f'' + 2xf' + f = E f
and we must now start over, trying to find an approximation to this equation. This time a power of x is good enough, say f = xⁿ. For any power of x, the first term is negligible compared to the other three, and with f = xⁿ the (second) approximate equation is
2xf' + f = E f
giving the condition E = 2n + 1, which is the sequence of energy levels for the quantum oscillator.
(Someone will say, "Aha, but how do you know that n must be an integer?" That's because the same solution must be valid for both x > 0 and x < 0, and on general principles a bound state wavefunction in one dimension must be real.)
Last edited: Apr 5, 2014
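In the dimensionless form above, the conclusion can be checked with SymPy: for ψ = xⁿ exp(-x²/2) and E = 2n+1, the residual of the oscillator equation is smaller than ψ by a factor of order 1/x², hence negligible at large |x| (a quick illustrative check):

import sympy as sp

x = sp.symbols('x', positive=True)
n = sp.symbols('n', positive=True)
psi = x**n * sp.exp(-x**2 / 2)                  # trial asymptotic form
res = -sp.diff(psi, x, 2) + x**2 * psi - (2*n + 1) * psi
print(sp.simplify(res / psi))                   # proportional to 1/x**2, vanishing as x -> oo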
6. Apr 6, 2014 #5
An approach to figuring out the asymptotic behavior of the wave function is to write it in terms of a new function [itex]W(x)[/itex] as follows:
[itex]\Psi(x) = e^{i W(x)/\hbar}[/itex]
If we let [itex]p(x) = \dfrac{d W}{dx}[/itex], then Schrödinger's equation
[itex]-\hbar^2/(2 m) \dfrac{d^2 \Psi}{dx^2} + V(x) \Psi = E \Psi[/itex]
becomes
[itex]p^2/(2m) - i \hbar/(2m) \dfrac{dp}{dx} + V(x) = E[/itex]
At this point, you assume that [itex]p[/itex] can be written as a power series in [itex](1/x)[/itex]:
[itex]p = \sum_j p_j x^{\alpha - j}[/itex]
Then you can solve for the coefficients and the leading power [itex]x^\alpha[/itex].
For the harmonic oscillator, with [itex]V(x) = 1/2 k x^2[/itex], this leads to the result:
[itex]p = i \sqrt{m k} x - i \hbar n/x + ...[/itex]
where [itex]...[/itex] denotes terms of order [itex]1/x^2[/itex] and [itex]n[/itex] is related to the energy [itex]E[/itex] through
[itex]E = (n+1/2) \hbar \sqrt{\dfrac{k}{m}}[/itex]
Going back to [itex]\Psi[/itex], this gives the asymptotic form of
[itex]\Psi = C x^n e^{- \sqrt{m k}\, x^2/(2\hbar) }[/itex]
(if I haven't made a mistake).
7. Apr 8, 2014 #6
Thanks, I appreciate your help. Approximation is new to me, but I followed your reasoning most of the way. I have a few questions, if you do not mind:
[tex]-\psi'' + x^2 \psi = 0[/tex]
I understand how you came to this equation, but I did not follow your reasoning after that. We have to consider orders of magnitude? This can be solved exactly, yes? Where do your orders of magnitude come in? In the approximate solutions we are finding?
I followed the separation of variables, and I was able to get the same expression as you, but again I am confused about the talk of orders of magnitude.
8. Apr 8, 2014 #7
It's immediately clear that the approximate equation has (to the same order of accuracy) the approximate solution
[tex]\psi(x)=A \exp(-x^2/2)+B \exp(+x^2/2),[/tex]
since [tex]\psi''(x)=x^2[1+\mathcal{O}(1/x^2)] \psi(x).[/tex]
The next step is to write, using the fact that [itex]\psi[/itex] must be square integrable,
[tex]\psi(x)=\tilde{\psi}(x) \exp(-x^2/2)[/tex]
for the exact eigenvalue problem, and then make a power-series ansatz for [itex]\tilde{\psi}[/itex], which leads to the conclusion that the power series must in fact be a polynomial in order to give a square-integrable function. This leads to a condition for the eigenvalues of the energy and also gives the solutions of the time-dependent Schrödinger equation. The polynomials are the Hermite polynomials (up to normalization and a phase factor).
8caf5b9e87afc7c7 | PSICAN - Paranormal Studies and Inquiry Canada
Written by Massimo Teodorani PhD
Non Local SETI
Discussing this issue about SETI with some colleagues five years ago, I answered at length as follows.
<< Concerning quantum entanglement, it is an established reality when two particles have first interacted together and are then separated, even at the greatest distances: in addition to Dr. Aspect's experiment in 1982 and the earlier EPR (Einstein, Podolsky, Rosen) "gedanken" experiment of 1935, what fully proves the reality of entanglement is the experiments on quantum teleportation of simple particles such as photons or electrons. Up to here everyone agrees.
But several new hypotheses postulate that this mechanism might be active not only in the micro world but also in the mesoscopic (intermediate scale) and even macroscopic domains.
Not only this. Some (including Prof. John Wheeler, with his hypothesis of so-called retrocausation) are convinced that, considering just particles, some hidden link may exist between all particles in the Universe. Why? Because at time zero (the start of the Big Bang) particles were all connected and strongly interacting. It is not yet known which particle parameters are affected here, in addition to spin and polarization (maybe quark color too?), but if this hypothesis is true then, at a certain level, everything should be non-locally linked inside this universe, and possibly also between multiverses and possible other dimensions, in the sense that a sort of "fossil link" might still be present now. The framework for this idea rests on the so-called "implicate order" elaborated many years ago by quantum physicist David Bohm, who, better than others (including Nobel laureate Wolfgang Pauli), developed a mathematical apparatus describing what happens in the entanglement process, by extending the Schrödinger equation (the most important equation of quantum mechanics) with an additional term called the "quantum potential", whose character is non-local (namely instantaneous). According to Bohm's physics, reality is constituted of two interconnected domains: a local and a non-local one. The first obeys Newton/Einstein physics (finite light speed, etc., on which no one argues); the second obeys another law, of which quantum theory is only the tip of the iceberg. Some (mostly philosophers) think that the second realm is just "consciousness" while the first is matter/energy.
In reality, if particles all over the universe maintain some hidden link together, this means that even the cells of our body are affected. In particular: neurons. And now I come to Dr. Thaheld's hypothesis of "non-local astrobiology". The hypothesis is that neurons (being themselves constituted of particles) are able to receive non-locally some kind of sentient information, which is then expressed in brainwaves (alpha, delta, theta). From this point on, all the investigation becomes absolutely conventional, because whatever the method for sending information, that information is deposited inside neurons whose electrical activity produces brainwaves. So you just have to look into them, first using an EEG apparatus (of a very high-resolution kind, in this specific case) and then using a specific algorithm (Fourier, Karhunen-Loève, multi-scale computational procedures, or even a simple time-series analysis) which is able to extract a signal from the background noise inside brainwaves (in fact, what is of interest here is not how the brainwave behaves but whether something is deposited inside it). There might be a very structured message that can be decoded by such an algorithm, so that the analysis becomes exactly identical to the one used in standard SETI. The only difference between NLSETI and SETI is that in the first case information is assumed to be received instantaneously through the quantum entanglement mechanism, and in the second case through radio (or optical) photons whose intensity decreases with the inverse square of the distance.
Of course many detractors of this hypothesis will say that the entanglement mechanism is not able to transfer information, because we acknowledge a quantum entanglement state only when we observe one of the two particles, and at that moment we make the wave function linking them collapse; which is true per se, of course. But they fail to understand that once a non-local link like quantum entanglement is established, it can be used to transfer information from a quantum to a classical state (such as the neurons in the brain), in the form of information in the brainwave, which we can indeed measure.
It is obvious that standard SETI is totally limited by the distance factor: the probability of finding an intelligent signal increases with the source's distance, but at the same time the signal amplitude diminishes as the inverse square of that distance; here is the trap. We have tried to increase radiotelescope aperture or acquisition mode (such as the recent Square Kilometre Array technique, for instance), receiver sensitivity, amplifier power, the number of channels that can be detected simultaneously (up to one billion, nowadays) through a multi-channel spectrum analyzer, the power of the algorithm of analysis, etc. After 50 years, according to the SETI Institute's protocol (see the note * at the end about the SETI PROTOCOL), the result is simply discouraging.
Therefore trying the NLSETI way is not that bad, and not even so expensive. Of course it has nothing to do with telepathy, because the quantitative analysis is intended to be done directly by measuring neurons through brainwaves. If something real were found (after checking all possible sources of systematic error or interference) we would have two results: a) the entanglement mechanism is extended everywhere in the universe; b) an informative sentient message could be quantitatively decoded. And, apart from the hypothesis per se, I am interested only in the quantitative/mathematical aspect of the test.
Where does the "message" come from? We can only hypothesize. Assuming that all the sources of background noise can be eliminated, we have two possibilities: a) someone particularly intelligent has sent the message through non-local means, using perhaps "quantum repeaters" placed somewhere in the universe (in order to avoid decoherence); b) the test subject himself has been able to connect non-locally to a sort of "server" that is placed not in cyberspace but rather in the quantum void, where a sort of "big informative library" has been deposited over eons. Maybe everyone everywhere uploads this kind of information there spontaneously, all the time, without even knowing it. If completely new information is found, this means that the test subject has downloaded something from there and then transferred it to the neural electrical activity, which then manifests the information so that we can reconstruct it technically.
This is, grossly speaking, the assumption of NLSETI. As you see it is of double importance: for fundamental physics and for SETI. An attempt does not hurt.
Some other considerations:
1. It is potentially possible to send an answer in quasi-real time by irradiating neural cells with a nanopulsed (and modulation-structured) laser and/or a magnetic field. Some experiments have already been done in medical labs concerning entanglement between two test tubes containing neural cells that had previously been linked together through a chemical substance such as an anesthetic.
2. Independently of all this, a quantum theory of the brain already exists, due to mathematical physicist Roger Penrose and neurophysiologist Stuart Hameroff. In brief, this theory says that microtubules inside each neuron work in a so-called "orchestrated entanglement". The entire ensemble of them is described together by a wave function (the typical equation of quantum mechanics). Normally this wave function collapses when a quantum system is observed, before which all possibilities coexist, overlapped all together. Differently from normal quantum systems, in the brain the wave function collapses spontaneously more or less every 1/40 s: this collapse is a physical (geometrodynamic, in terms of spacetime) collapse at the Planck-scale level (quantum void: 10^-33 cm), where both relativity and quantum theory (due to the micro-scale involved) are required. What, then, in simple terms does the wave function collapse consist of? It is a "consciousness moment". We normally experience a million of them or so every day. Therefore so-called "consciousness", in order to manifest itself, needs a neural correlate: the brain. Otherwise the wave function remains suspended and doesn't collapse. This means, in few words, that consciousness and physical matter cannot exist the one without the other (thus contradicting almost everything of religions of any kind). Prof. Hameroff found that microtubules are just the ideal physical vectors to permit entanglement inside the brain, because they are well insulated from any kind of interaction that might destroy quantum information. The quantity of consciousness depends on how much energy is inside the brain, namely how much mass of active elements (microtubules) is present and able to trigger full consciousness, whose "power" is inversely proportional to the velocity of the process, in terms of the time taken for the wave function (uniting all microtubules in the brain in a quantum coherent domain) to collapse. So: according to this hypothesis, if it is demonstrated to be true, the brain is a purely quantum system. From this (even if Penrose & Hameroff are perhaps unaware of Dr. Thaheld's hypothesis on NLSETI) it is not difficult to deduce that: a) if all brains are quantum systems based on the entanglement mechanism within their components, they are ideal communication centers; b) whatever comes from outside affects consciousness as well. But we can measure only neurons, and not what a person "feels". This doesn't exclude that at the level of consciousness (and not neurons) a person can potentially acquire ideas suddenly: namely, connect to a universal "server" or receive "non-local emails" directly from someone (by the way: what exactly is a genius? And how exactly does one become a genius?). It is exactly the same mechanism as the Internet: the only difference is that the mechanism here is non-local. Therefore NLSETI, being experimental and not speculative, can permit us to quantitatively prove or disprove the hypothesis of a connection between intelligent beings in the universe through quantum entanglement.
Concerning the "quantum mind" theory by Penrose & Hameroff, in spite of Max Tegmark's rebuttal (and others'), it is based on the fact that microtubules (inside neurons) are highly isolated by a specific gel, so there is sufficient time to transfer information before the overall wave function collapses due to thermal effects.
I am sure that whoever is far more advanced than us did two things: a) used and manipulated the quantum vacuum (playing with virtual particles, using them as the elements of a quantum computer) in the same way we do with a silicon chip, in order to organize a library of universal information of every kind; b) deliberately sent information everywhere, hoping that someone would catch it. Of course some persons catch the message unconsciously but not scientifically: "they" know it, and so they decided to leave a trace in the brainwave too, in order to allow us to demonstrate the mechanism scientifically. Our duty is to verify this scientifically and, if present, to decode the information, doing many trials and using many test subjects.
Fortunately this is not science fiction. I simply think it is time to turn the page, if we really want to attempt real communication with alien intelligence. My impression is that if we really want to know more about true alien intelligence, we first have to understand better what "reality" actually is. But I will never say this at my public conferences (which I have no more time for at present), because the idiot of the moment would immediately say: "Oh yeessss. We live inside a 'Matrix'." New Agers are truly a big problem here, as if they had been created on purpose by someone in order to block research at its deepest level (more or less like CSICOP on the opposite side, which I recently quit, much to my pleasure).
Scientists are alone when it’s time to lift the black curtain. But they are never alone when true science is replaced by accountancy.
* SETI PROTOCOL - They say a SETI signal counts as such only if it is persistent in time, i.e., it always comes from the same RA-Dec coordinates (which, of course, must be confirmed by many observers around the world). Correct, of course, but at the same time highly limiting. This truly bureaucratic protocol excludes everything else: not only internal/external noise or interference (as often happens), but also possible high-proper-motion sources, i.e., sources transiting inside the solar system (in substance, they throw out the baby with the bathwater). It would not be so difficult to scan the antenna within a progressively larger error circle until the same signal is found again at slightly different coordinates, so that the orbit can be reconstructed and tracked like a comet (much to Dr. Freeman Dyson's delight). But this is not done. Why? Because the SETV branch of SETI is not politically correct. That is not a scientific attitude; it is religion, or even politics. Of course I still support standard SETI (though I no longer practice it): sooner or later we'll find something. But that something will be the result of a pure selection effect: like finding only the kind of aliens visible through dark glasses or a smoked filter. There is more out there, methinks.
30cdbfceae7010ad | Category Archives: Science & Technology
TLE Syndrome: The God Module, OOBEs, NDEs and Spiritual Experiences
I was wondering if the Out of Body Experiences, Near Death Experiences and Spiritual Experiences associated with the God Module might also happen in our sleep: particularly while dreaming. I found a great article called, “Fear & Loathing in the Temporal Lobes“, by Iona Miller. The following excerpt was especially interesting. Please check out the original article and read the whole thing.
Without further ado, here’s the fascinating excerpt . . .
Rapture of the Neurological Deep
How do we get from existential anxieties about death to intensely personal spiritual experience? Many of our spiritual notions come from the reports of the dying, or those with near-death experiences (NDEs). When the brain begins to shut down certain typical experiences appear as each of the major areas of the brain crash and billions of functional neurons heave their last gasp (McKinney).
Deeply embedded neurons in the brainstem are among the last to go. Unless the brain is physically destroyed, dying is a process. It doesn’t instantly collapse, but degrades in a somewhat predictable manner with associated characteristic phenomena.
Meanwhile, there is a regression toward the oceanic feelings of life in the womb as the process of birth gets played in reverse and we return to eternity. We journey back through earlier forms of consciousness, in a dreamy haze once the frontal lobes cease their rationalizing and abstractions.
As in dreams there are irregular bursts of neural static and discharge (Hobson) that affect the visual, affective, motor, orientation, time, and memory areas. There is no more chronological sequencing of events. Our experience of dying is synthesized holistically from the confabulation of all these elements. We may be unconscious and yet still somewhat aware with scintillating electrical surges creating their last faltering messages as they fail.
We dissociate from the body. As in deep meditation, attention is withdrawn from the extremities and external senses. We return to a simpler mode of being, the undifferentiated mind, where time seems endless, if it exists at all. As oxygen levels drop, and opiate-like endorphins are dumped into the system, the sense of peace and contentment may rise along with our spirits. Phantasmagorical images flood our awareness.
Between the dissociation from the body and the last glimpse of light, we may experience a culturally conditioned transcendence. Some might say the soul leaves the body as it journeys into the Light. Bright white light may be the melding of all colors of the visual spectrum once the visual cortex is disinhibited.
Perhaps as many as 1/3 of those coming close to death report a characteristic group of experiences. Bruce Greyson, in a paper in Varieties of Anomalous Experience (Cardena et al), lists the common elements of adult near-death experiences and aftereffects:
• Ineffability
• Hearing oneself pronounced dead
• Feelings of peace and quiet
• Hearing unusual noises
• Seeing a dark tunnel
• Being “out of the body”
• Meeting “spiritual beings”
• Experiencing a bright light as a “being of light”
• Panoramic life review
• Experiencing a realm in which all knowledge exists
• Experiencing cities of light
• Experiencing a realm of bewildered spirits
• Experiencing a “supernatural rescue”
• Sensing a border or limit
• Coming back “into the body”
• Frustration relating experiences to others
• Subtle “broadening and deepening” of life
• Elimination of fear of death
• Corroboration of events witnessed while “out of the body”
The reports of those with near-death experiences moving through a tunnel toward the light, accompanied by ancestors, deceased friends and their cultural divinities are now well known (Ring; Moody; Sabom). A minority experience emotional problems requiring psychosocial rehabilitation following NDEs, including anger and depression at having been “returned” perhaps unwillingly, broken relationships, disrupted career, alienation, post-traumatic stress disorder, “social death” (Greyson).
Gradual death is often gentle, creating its own palliative. Heavens and hells are fully immersive virtual reality constructions of our dying neural networks. But when the brain comes close to an irreversible coma on the journey towards death, the great endarkening comes before any great enlightenment. Hence many with NDEs do not report seeing the Light and may even focus on their experiences as being intensely negative in content and tone.
Unable to calm their disoriented mind, their dismal experience is largely one of panic, pain, and terror. This may be the result of toxins in the blood including carbon dioxide buildup. If we die a sudden violent death, we may miss heaven, but mercifully we will never know that.
The whole process may be greatly compounded by the release of powerful endogenous hallucinogenic DMT from the pineal gland (Strassman). In highly stressful situations, such as birth, sexual ecstasy, extreme physical distress, childbirth, near-death and death, the normal inhibitions against the production and circulation of this potent mind-bending “spirit molecule” are over-ridden. Massive DMT dumps may also create intense visions of blinding white light, ecstatic emotions, timelessness, and powerful presence.
A neurobiological model proposed by Saavedra-Aguilar and Gomez-Jeria suggests temporal-lobe dysfunction, hypoxia, psychophysical stress, and neurotransmitter changes combine to induce epileptiform discharges in the hippocampus and amygdala. These contribute to life review and complex visual hallucinations.
When the visual cortex begins to crash (Blackmore), there is a cascade of distorted imagery, then a shift down the color spectrum toward primeval redness and impenetrable black. Maybe there is still a dull glow or scintillating pinpoints of light, like stars in some inner universe.
As the reticular activating system dies there may be a final burst of distant light, somehow familiar from the very dawn of our existence. As our last cells die, the mind is finally unwound. We have closed the circle of life and entered the Great Beyond.
Carl Sagan in Marihuana Reconsidered
All images in this post are from Wikipedia and created by Yves Tanguy.
Dr. X
By Carl Sagan
In a blog entry titled, ‘Inner Space and Outer Space: Carl Sagan’s Letters to Timothy Leary (1974)‘, authored by ‘lisa‘, at the Timothy Leary Archives website, the author explains why Carl Sagan used a pseudonym for the article reproduced, below.
Disguised as "Doctor X," to protect his reputation, he wrote this for his friend Lester Grinspoon's book, Marihuana Reconsidered, in 1971:
At the time of his visit, Sagan was surely aware that Leary had been originally sent to prison for possession of less than a joint of cannabis.
Like Leary, Sagan also exemplified the connection between mind-expanding drugs, which increased intelligence, and scientific breakthroughs. In "The Amniotic Universe," an article drawn from Sagan's book Broca's Brain, and published in the Atlantic Monthly in 1979, Sagan shows a deep and perceptive familiarity with the effects of LSD, MDA, DMT and Ketamine in his review of Stanislav Grof's extensive and revolutionary LSD research. He writes about the effects of LSD in particular, speculating that "the Hindu mystical experience" of union with the universe "is pre-wired into us, requiring only 200 micrograms of LSD to be made manifest." Eminent psychedelic historian Peter Stafford, author of Psychedelics Encyclopedia, placed Sagan in a list of famous people who have taken LSD. Sagan was also number 1 on io9's recently published list of "10 Scientific and Technological Visionaries Who Experimented With Drugs."
Without further ado, here’s the article . . .
Promontory Palace, by Yves Tanguy
Indefinite Divisibility, by Yves Tanguy
Mama, Papa is Wounded!, by Yves Tanguy
Reply to Red, by Yves Tanguy
The Death of Christian Apologetics
Google and YouTube: Too Big for Their Britches
First Google filtered out a lot of anti-Islamic content from its search engine. Then they enforced a “no pseudonym” policy on Google Plus — making anonymity difficult for whistle-blowers, corporate and government protest organizers and those who would risk their safety to criticize Islam. Now they’re removing videos critical of Islam from YouTube. Google/YouTube are near-monopolies who have no problem, whatsoever, throwing their weight around. These social networking giants need a lesson in social responsibility.
I don’t know about you, but censorship of freethought is unthinkable to me. How dare they silence our freedom of expression! The world NEEDS our perspective, our concerns, our voice. Google has repeatedly demonstrated a wrong-headed, accommodationist, cowardly, propensity to forfeit democratic ideals to placate vocal majorities. As the great American, Adlai Stevenson, once pointed out: “My definition of a free society is a society where it is safe to be unpopular.” Being a minority should not mean being expendable.
If you value freedom of expression and hate censorship, please let Google and YouTube know that you will not tolerate having our inalienable rights taken away so cavalierly. They're big now but they're establishing a history of cowardice that will come back to bite them on the ass. Their continued success is not guaranteed, and it will be the users they alienated who boost the fortunes of their competitors.
Something else you can do is to download all the freethought videos you can from YouTube, before they disappear, and repost them over and over. Or you can make your own freethought videos and post them over and over. Actually, I’m not sure that’s workable but you get my gist . . . punish YouTube. We could learn a lesson from the “squeaky wheel” religious zealots and make a big noise of our own.
The Unreasonable Effectiveness of Mathematics
“The most incomprehensible thing about the universe is that it can be comprehended” ~Albert Einstein
I stumbled across this essay at the Dartmouth website. It’s copied below for your convenience. It’s written by Eugene Wigner, the 1963 Nobel Prize winner for physics. Written over 50 years ago, it is still very much germane to modern physics.
This subject is another one of those scientific curiosities that give reason to pause and to ponder ultimate sources (as I recently did in the post, “A New Argument for God?“)
Something else I really like about this article is that Wigner referred to "the laws of inanimate nature", "knowledge of the inanimate world" (twice), "properties of the inanimate world" and "theories of the inanimate world". With these 5 references to the inanimate, Wigner clearly takes it for granted that the laws of physics are not intended to deal with the animate parts of the universe (i.e. life and living things). This fact is central to the many posts I've written about self-determinism here on this blog, and not something that materialists like to admit.
Anyway, without further ado, here’s the essay . . .
The Unreasonable Effectiveness of Mathematics in the Natural Sciences
by Eugene Wigner
BERTRAND RUSSELL, Study of Mathematics
Is the Universe Made of Math?
The following is excerpted from a Discover Magazine interview, by Adam Frank, with cosmologist Max Tegmark. The full article can be found here.
Is the Universe Actually Made of Math?
Unconventional cosmologist Max Tegmark says mathematical formulas create reality.
Let’s talk about your effort to understand the measurement problem by positing parallel universes—or, as you call them in aggregate, the multiverse. Can you explain parallel universes?
There are four different levels of multiverse. Three of them have been proposed by other people, and I’ve added a fourth—the mathematical universe.
What is the multiverse’s first level?
The level I multiverse is simply an infinite space. The space is infinite, but it is not infinitely old—it’s only 14 billion years old, dating to our Big Bang. That’s why we can’t see all of space but only part of it—the part from which light has had time to get here so far. Light hasn’t had time to get here from everywhere. But if space goes on forever, then there must be other regions like ours—in fact, an infinite number of them. No matter how unlikely it is to have another planet just like Earth, we know that in an infinite universe it is bound to happen again.
You’re saying that we must all have doppelgängers somewhere out there due to the mathematics of infinity.
That’s pretty crazy, right? But I’m not even asking you to believe in anything weird yet. I’m not even asking you to believe in any kind of crazy new physics. All you need for a level I multiverse is an infinite universe—go far enough out and you will find another Earth with another version of yourself.
So we are just at level I. What’s the next level of the multiverse?
Level II emerges if the fundamental equations of physics, the ones that govern the behavior of the universe after the Big Bang, have more than one solution. It's like water, which can be a solid, a liquid, or a gas. In string theory, there may be 10^500 kinds or even infinitely many kinds of universes possible. Of course string theory might be wrong, but it's perfectly plausible that whatever you replace it with will also have many solutions.
Go far enough out and you will find another Earth with another version of yourself.
Why should there be more than one kind of universe coming out of the Big Bang?
Inflationary cosmology, which is our best theory for what happened right after the Big Bang, says that a tiny chunk of space underwent a period of rapid expansion to become our universe. That became our level I multiverse. But other chunks could have inflated too, from other Big Bangs. These would be parallel universes with different kinds of physical laws, different solutions to those equations. This kind of parallel universe is very different from what happens in level I.
Well, in level I, students in different parallel universes might learn a different history from our own, but their physics would still be the same. Students in level II parallel universes learn different history and different physics. They might learn that there are 67 stable elements in the periodic table, not the 80 we have. Or they might learn there are four kinds of quarks rather than the six kinds we have in our world.
Do these level II universes inhabit different dimensions?
No, they share the same space, but we could never communicate with them because we are all being swept away from each other as space expands faster than light can travel.
OK, on to level III.
Level III comes from a radical solution to the measurement problem proposed by a physicist named Hugh Everett back in the 1950s. [Everett left physics after completing his Ph.D. at Princeton because of a lackluster response to his theories.] Everett said that every time a measurement is made, the universe splits off into parallel versions of itself. In one universe you see result A on the measuring device, but in another universe, a parallel version of you reads off result B. After the measurement, there are going to be two of you.
So there are parallel me’s in level III as well.
Sure. You are made up of quantum particles, so if they can be in two places at once, so can you. It’s a controversial idea, of course, and people love to argue about it, but this “many worlds” interpretation, as it is called, keeps the integrity of the mathematics. In Everett’s view, the wave function doesn’t collapse, and the Schrödinger equation always holds.
The level I and level II multiverses all exist in the same spatial dimensions as our own. Is this true of level III?
No. The parallel universes of level III exist in an abstract mathematical structure called Hilbert space, which can have infinite spatial dimensions. Each universe is real, but each one exists in different dimensions of this Hilbert space. The parallel universes are like different pages in a book, existing independently, simultaneously, and right next to each other. In a way all these infinite level III universes exist right here, right now.
That brings us to the last level: the level IV multiverse intimately tied up with your mathematical universe, the “crackpot idea” you were once warned against. Perhaps we should start there.
I begin with something more basic. You can call it the external reality hypothesis, which is the assumption that there is a reality out there that is independent of us. I think most physicists would agree with this idea.
The question then becomes, what is the nature of this external reality?
If a reality exists independently of us, it must be free from the language that we use to describe it. There should be no human baggage.
I see where you’re heading. Without these descriptors, we’re left with only math.
The physicist Eugene Wigner wrote a famous essay in the 1960s called “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” In that essay he asked why nature is so accurately described by mathematics. The question did not start with him. As far back as Pythagoras in the ancient Greek era, there was the idea that the universe was built on mathematics. In the 17th century Galileo eloquently wrote that nature is a “grand book” that is “written in the language of mathematics.” Then, of course, there was the great Greek philosopher Plato, who said the objects of mathematics really exist.
How does your mathematical universe hypothesis fit in?
Well, Galileo and Wigner and lots of other scientists would argue that abstract mathematics “describes” reality. Plato would say that mathematics exists somewhere out there as an ideal reality. I am working in between. I have this sort of crazy-sounding idea that the reason why mathematics is so effective at describing reality is that it is reality. That is the mathematical universe hypothesis: Mathematical things actually exist, and they are actually physical reality.
OK, but what do you mean when you say the universe is mathematics? I don’t feel like a bunch of equations. My breakfast seemed pretty solid. Most people will have a hard time accepting that their fundamental existence turns out to be the subject they hated in high school.
For most people, mathematics seems either like a sadistic form of punishment or a bag of tricks for manipulating numbers. But like physics, mathematics has evolved to ask broad questions. These days mathematicians think of their field as the study of "mathematical structures," sets of abstract entities and the relations between them. What has happened in physics is that over the years more complicated and sophisticated mathematical structures have proved to be invaluable.
Can you give a simple example of a mathematical structure?
The integers 1, 2, 3 are a mathematical structure if you include operations like addition, subtraction, and the like. Of course, the integers are pretty simple. The mathematical structure that must be our universe would be complex enough for creatures like us to exist. Some people think string theory is the ultimate theory of the universe, the so-called theory of everything. If that turns out to be true, then string theory will be a mathematical structure complex enough so that self-awareness can exist within it.
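Purely as an illustration of that definition (not part of the interview), here is a minimal Python sketch of "a set of abstract entities plus operations on them," using the integers mod 5 rather than all the integers so that the structure's laws can be checked exhaustively:

```python
from itertools import product

# A toy "mathematical structure": a carrier set plus an operation on it.
carrier = range(5)

def add(a, b):
    return (a + b) % 5

# The structure's facts are timeless: associativity holds for every
# triple of elements, independent of any observer or moment.
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a, b, c in product(carrier, repeat=3))
print("Z/5 under addition is an associative structure")
```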
But self-awareness includes the feeling of being alive. That seems pretty hard to capture in mathematics.
To understand the concept, you have to distinguish two ways of viewing reality. The first is from the outside, like the overview of a physicist studying its mathematical structure. The second way is the inside view of an observer living in the structure. You can think of a frog living in the landscape as the inside view and a high-flying bird surveying the landscape as the outside view. These two perspectives are connected to each other through time.
In what way does time provide a bridge between the two perspectives?
Well, all mathematical structures are abstract, immutable entities. The integers and their relations to each other, all these things exist outside of time.
Do you mean that there is no such thing as time for these structures?
Yes, from the outside. But you can have time inside some of them. The integers are not a mathematical structure that includes time, but Einstein’s beautiful theory of relativity certainly does have parts that correspond to time. Einstein’s theory has a four-dimensional mathematical structure called space-time, in which there are three dimensions of space and one dimension of time.
So the mathematical structure that is the theory of relativity has a piece that explicitly describes time or, better yet, is time. But the integers don’t have anything similar.
Yes, and the important thing to remember is that Einstein’s theory taken as a whole represents the bird’s perspective. In relativity all of time already exists. All events, including your entire life, already exist as the mathematical structure called space-time. In space-time, nothing happens or changes because it contains all time at once. From the frog’s perspective it appears that time is flowing, but that is just an illusion. The frog looks out and sees the moon in space, orbiting around Earth. But from the bird’s perspective, the moon’s orbit is a static spiral in space-time.
The frog feels time pass, but from the bird’s perspective it’s all just one eternal, unalterable mathematical structure.
That is it. If the history of our universe were a movie, the mathematical structure would correspond not to a single frame but to the entire DVD. That explains how change can be an illusion.
Of course, quantum mechanics with its notorious uncertainty principle and its Schrödinger equation will have to be part of the theory of everything.
Right. Things are more complicated than just relativity. If Einstein’s theory described all of physics, then all events would be predetermined. But thanks to quantum mechanics, it’s more interesting.
But why do some equations describe our universe so perfectly and others not so much?
Stephen Hawking once asked it this way: “What is it that breathes fire into the equations and makes a universe for them to describe?” If I am right and the cosmos is just mathematics, then no fire-breathing is required. A mathematical structure doesn’t describe a universe, it is a universe. The existence of the level IV multiverse also answers another question that has bothered people for a long time. John Wheeler put it this way: Even if we found equations that describe our universe perfectly, then why these particular equations and not others? The answer is that the other equations govern other, parallel universes, and that our universe has these particular equations because they are just statistically likely, given the distribution of mathematical structures that can support observers like us.
These are pretty broad and sweeping ideas. Are they just philosophical musings, or is there something that can actually be tested?
Well, the hypothesis predicts a lot more to reality than we thought, since every mathematical structure is another universe. Just as our sun is not the center of the galaxy but just another star, so too our universe is just another mathematical structure in a cosmos full of mathematical structures. From that we can make all kinds of predictions.
So instead of exploring just our universe, you look to all possible mathematical structures in this much bigger cosmos.
If the mathematical universe hypothesis is true, then we aren’t asking which particular mathematical equations describe all of reality anymore. Instead we have to figure out how to separate the frog’s view of the universe—our observations—from the bird’s view. Once we distinguish them we can determine whether we have uncovered the true structure of our universe and figure out which corner of the mathematical cosmos is our home.
Google Plus Requirement for Real Names
Google Plus (Google+) stirred up a controversy by deleting, wholesale, accounts created under pseudonyms instead of under real names. Google+ acknowledged their mistakes and is now formulating an official policy for naming conventions on their new social network.
There are legitimate reasons that users might need to use pseudonyms. Perhaps you don’t want your parents or ex-spouse to contact or follow you in any way. The most obvious and crucial one is anonymity for political dissidents and social activists. Without that anonymity, activism can be too dangerous to pursue. Social networks like Facebook and Twitter have changed the face of activism by facilitating historic movements for democratic reforms and human rights. The whole world needs social networks to provide this service to help keep governments honest and accountable to their citizens. If Google+ wants to be a leader in social networking, they have no business abdicating such a crucial role by requiring the display of real names for their accounts, thus making activism too dangerous. Google+ can require our real names for their internal records but they do not need to display our real names against our will.
Google+ can suspend, delete or ban accounts that violate their terms of service (TOS) whether or not those accounts use real names. The purpose of requiring real names is to provide a deterrent against violating their TOS in the first place and to have the real names of culprits to provide to authorities should their violations rise to the level of criminal activity (fraud, cyber bullying, hacking, etc.).
But the deterrent is not about the display of real names . . . it’s about the possession of real names. The deterrent is just as effective whether or not violators display their real names — as long as they know that Google+ has their real names on record.
And how will Google+ know if the name of an account is the real name unless they require proof of identity from everybody? Unless they do, many people will simply supply legitimate-looking false names. The requirement for real names is virtually unenforceable to begin with.
So the whole controversy over the requirement of real names is unnecessary as long as Google+ allows its users to hide their real names and substitute pseudonyms if they want to. Google+ only needs to possess our real names: they don’t need to display them. In theory, not only would they have the deterrent they want but they would also have the real names authorities will need to pursue criminal activity perpetrated on the Google+ network. But most importantly, Google+ will be able to follow the example set by Facebook and Twitter and provide a desperately needed service to dissidents and activists around the world. If Google+ is going to require our real names, then we should require them to shoulder their responsibility, as a social networking leader, to facilitate activism.
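A minimal sketch of the account model being argued for here (hypothetical, not Google's actual schema): the service possesses a real name for accountability, but displays only the pseudonym.

```python
from dataclasses import dataclass

@dataclass
class Account:
    real_name: str        # kept internal; disclosed only to authorities
    display_name: str     # pseudonym shown publicly

    def public_profile(self) -> dict:
        # The real name is never exposed in anything user-facing.
        return {"name": self.display_name}

acct = Account(real_name="Jane Doe", display_name="DissidentVoice")
print(acct.public_profile())   # {'name': 'DissidentVoice'}
```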
Let Google+ know that requiring our real names is okay as long as they don’t FORCE us to display them!
641c26c8d0367a1f | PDQP/qpoly = ALL
I’ve put up a new paper. Unusually for me these days, it’s a very short and simple one (8 pages)—I should do more like this! Here’s the abstract:
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. This complements a related result due to Raz. The proof uses locally decodable codes.
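As background on that last ingredient, here is a minimal sketch of the classic 2-query Hadamard locally decodable code. It is offered only as an illustration of what an LDC is, not necessarily the construction the paper uses:

```python
import random

def hadamard_encode(m, k):
    # Codeword position x holds the parity <m, x> mod 2, for every
    # x in {0,1}^k.  The length blows up to 2^k -- the price paid for
    # extreme local decodability.
    return [bin(m & x).count("1") % 2 for x in range(2 ** k)]

def decode_bit(codeword, i, k):
    # Two queries recover message bit i, since
    # <m, x> XOR <m, x XOR e_i> = <m, e_i> = bit i of m.
    # With a delta fraction of corrupted positions, the answer is
    # correct with probability at least 1 - 2*delta.
    x = random.randrange(2 ** k)
    return codeword[x] ^ codeword[x ^ (1 << i)]

m, k = 0b1011, 4
cw = hadamard_encode(m, k)
print(decode_bit(cw, 0, k), decode_bit(cw, 2, k))  # -> 1 0
```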
I welcome discussion in the comments. The real purpose of this post is simply to fulfill a request by James Gallagher, in the comments of my Robin Hanson post:
The probably last chance for humanity involves science progressing, can you apply your efforts to quantum computers, which is your expertise, and stop wasting many hours of you [sic] time with this [expletive deleted]
Indeed, I just returned to Tel Aviv, for the very tail end of my sabbatical, from a weeklong visit to Google’s quantum computing group in LA. While we mourned tragedies—multiple members of the quantum computing community lost loved ones in recent weeks—it was great to be among so many friends, and great to talk and think for once about actual progress that’s happening in the world, as opposed to people saying mean things on Twitter. Skipping over its plans to build a 49-qubit chip, Google is now going straight for 72 qubits. And we now have some viable things that one can do, or try to do, with such a chip, beyond simply proving quantum supremacy—I’ll say more about that in subsequent posts.
Anyway, besides discussing this progress, the other highlight of my trip was going from LA to Santa Barbara on the back of Google physicist Sergio Boixo’s motorcycle—weaving in and out of rush-hour traffic, the tightness of my grip the only thing preventing me from flying out onto the freeway. I’m glad to have tried it once, and probably won’t be repeating it.
Update: I posted a new version of the PDQP/qpoly=ALL paper, which includes an observation about communication complexity, and which—inspired by the comments section—clarifies that when I say “all languages,” I really do mean “all languages” (even the halting problem).
88 Responses to “PDQP/qpoly = ALL”
1. Shozab Says:
Hi Scott, quick question – does there exist a class called PDQP/poly? i.e. one where the advice is classical but we still have non-collapsing measurements? If so, is there any known relationship between that class and PDQP/qpoly? I’m guessing the former would be a subset of the latter?
2. Scott Says:
Shozab #1: Yes, of course you can define PDQP/poly if you want, and of course it’s a subset of PDQP/qpoly (indeed, by my result, everything is 🙂 ). Unlike the latter, PDQP/poly does not equal ALL (a counting argument suffices to show that, indeed for any uniform complexity class with poly-size classical advice). I don’t know of anything especially interesting to say about PDQP/poly right now, though maybe someone will have something in the future.
3. Sanketh Says:
Interesting result! Why did you revert back to PDQP from naCQP?
A burning question that this paper makes even more interesting is the power of QSZK/qpoly. My conjecture is that it is not ALL (I thought about it for a week and I couldn’t prove it so…). Also, QSZK seems strictly less powerful than PDQP (Grover’s search is optimal for QSZK while you can search in cuberoot(n) time in PDQP.) (The other containment—QSZK in PDQP—also seems unlikely.)
4. Scott Says:
Sanketh #3: Remind me what naCQP is? I’m getting old and senile (for godsakes, it’s my 37th birthday in a couple days…)
I didn’t know that anyone had looked at the QSZK/qpoly question before I did two weeks ago! May I ask what your motivation was? I also suspect it’s not ALL, but couldn’t show it, even for (say) classical non-interactive perfect zero-knowledge protocols with /rpoly advice. I did manage to show that certain special kinds of protocols won’t give you ALL—-e.g. because they would give you 2-query LDCs of subexponential size, which Kerenidis and de Wolf ruled out. At the end of the day, all we’re asking for here is a computation of a fat-shattering dimension (namely, of the concept class of problems solved by QSZK/qpoly algorithms), but it’s a nontrivial computation.
I also suspect that QSZK and PDQP might be incomparable. It would be nice to prove that relative to suitable oracles.
5. Joshua Zelinsky Says:
Given the earlier results that QIP/qpoly and PP/rpoly are also ALL, it seems like this result is part of a general pattern that it takes surprisingly little advice to get to ALL, which makes me wonder if we are in general underestimating the strength of advice, and maybe our general attitude that classical advice shouldn't be able to do much is wrong.
Incidental related question: Does your result allow for a quicker proof that QIP/qpoly=ALL?
6. Sanketh Says:
naCQP is what you call PDQP in the ITCS version of *your* paper!
My motivation was as follows: Raz showed that QIP(2)/rpoly = ALL, and you and Andy Drucker showed that BQP/qpoly ⊆ QMA/poly ≠ ALL. And we know (due to Watrous) that QSZK ⊆ QIP(2). Moreover, QSZK is very nice to deal with because of its complete problem, quantum state distinguishability (due to Watrous). It is easy to see that there is an analogous complete problem for QSZK/qpoly in which the polynomial-size quantum circuits have advice.
I also thought about simplifying my question to NISZK/rpoly but it didn’t seem any easier. Keep in mind that this was around the same time that I was studying the paper of Adam Bouland, Lijie Chen, Dhiraj Holden, Justin Thaler, and Prashant Nalini Vasudevan, so I was sufficiently convinced. I knew of Kerenidis-de Wolf but I didn’t go that far.
I mentioned this problem in John Watrous’ group meeting last summer and also mentioned it to Andy Drucker but that was the end of it.
Yes, abstractly, the question boils down to what is the fat-shattering dimension of the concept class of problems solved by QSZK/qpoly algorithms? (But that formulation doesn’t seem to give me any new information about the problem.)
My approach was to show the existence of a QIP(2)/qpoly problem (notice the “q”) that cannot be reduced to advice quantum state distinguishability by some kind of Nayak-type argument. (Maybe one can use Kerenidis-de Wolf here?)
7. Sniffnoy Says:
8. Aula Says:
Are you claiming that PDQP/qpoly is powerful enough to solve the Halting Problem? If not, maybe you should say that it’s equal to the class of all decidable languages instead of all languages.
9. Scott Says:
Aula #8: Yes, I’m absolutely claiming that PDQP/qpoly is powerful enough to solve the halting problem. And the halting problem with a HALT oracle, and … you get the idea.
Adding advice always gives you the ability to solve some undecidable problems (since now you can decide an uncountable infinity of languages, rather than only a countable infinity). It’s just that normally, you’re highly constrained in which undecidable problems you can solve (e.g., you could solve the halting problem, but only if the input machine were encoded in unary notation). The surprise about /rpoly and /qpoly is that sometimes—it’s hard to predict where—they boost a complexity class up to solving undecidable problems with no constraints.
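To make that parenthetical example concrete, here is a minimal sketch; the advice table is hypothetical (by Turing's theorem no program could compute it), which is exactly the point of the advice model:

```python
# A trivial machine plus one advice bit per input length decides the
# unary-encoded halting problem.  HYPOTHETICAL advice: advice[n] = 1
# iff the Turing machine with index n halts on a blank tape.  The
# advice sequence is uncomputable, but the advice model doesn't care:
# it is simply handed to the machine.

def decide(x, advice):
    # x is a unary string "11...1"; look up the one advice bit for |x|.
    return advice[len(x)]

advice = {0: 0, 1: 1, 2: 0, 3: 1}  # made-up values for illustration
print(decide("111", advice))        # -> 1
```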
10. Scott Says:
Joshua #5: If by classical advice you mean /poly, then it acts in basically the same way on every complexity class C. I.e., instead of all languages L for which there exists a C-machine that accepts all x∈L and rejects all x∉L, now you get all L for which there exists a C-machine M, together with an advice sequence a_1, a_2, …, such that M accepts all (x∈L, a_n) and rejects all (x∉L, a_n), where |x| = n. So, the advice can always be thought of as just an auxiliary "helper" input string—one that boosts us from a countable to an uncountable set of languages (because of the enormous freedom in choosing the a_n's), but that otherwise behaves reasonably predictably. For example, if C⊆D, then we always have C/poly⊆D/poly (exercise for the reader).
But /rpoly and /qpoly? Those are like highly reactive and dangerous chemicals. For many complexity classes, adding them does nothing more than adding normal /poly advice (or, in the case of /qpoly, adding /poly advice plus a bit more computational power). But their interaction with certain classes produces an explosion that results in ALL. Furthermore, the set of classes for which this happens doesn’t obey the usual rules of complexity theory. For example, even if C⊆D, the ALL-explosion could easily happen for C and not happen for D.
It would be nice to have better principles for predicting when this ALL-explosion will happen—ones that would let us, for example, guess the answer to my and Sanketh’s question about SZK/rpoly.
11. Sniffnoy Says:
Ha! I just noticed you put the PDPP idea I'd mentioned into your paper, and also proved PDPP/rpoly = ALL. Of course I was going to ask about that if you didn't. 🙂
(Unrelatedly, um, not sure who’s running the complexity zoo these days, but its certificate seems to have expired…)
12. Sanketh Says:
Scott #4: Another approach I fiddled with for a while was to show that QSZK/qpoly ⊆ PSPACE/poly using an argument similar to the one in your paper on de-Merlinizing quantum protocols. (I am mentioning it here because someone smarter than me might be able to make it work.)
13. Joshua Zelinsky Says:
Is there some intuition for why we should expect qpoly and rpoly to be so much more “reactive” than poly?
14. Sanketh Says:
Scott #10: To make the notion of ALL-explosion slightly more concrete consider this:
Aaronson showed that PP/rpoly = ALL, Raz showed that IP/rpoly = ALL, and now Aaronson showed that PDPP/rpoly = ALL. But notice that PSPACE/rpoly = PSPACE/poly ≠ ALL.
My guess is that the explosion only happens when one is allowed non-determinism post-advice. And the Complexity Zoo rightly says
So does ALL have no respect for complexity class inclusions at ALL? (Sorry.)
It is not as contradictory as it first seems. The deterministic base class in all of these examples is modified by computational non-determinism after it is modified by advice. For example, MAEXP/rpoly means M(AEXP/rpoly), while (MAEXP)/rpoly equals MAEXP/poly by a standard argument. In other words, it’s only the verifier, not the prover or post-selector, who receives the randomized or quantum advice. The prover knows a description of the advice state, but not its measured values. Modification by /rpoly does preserve class inclusions when it is applied after other changes.
(Also, from the history of the page, I am pretty sure that Scott wrote this pre-2012.)
15. Sanketh Says:
Sniffnoy #11: I have yet to see a result that only works with quantum advice (and doesn't work with randomized advice). I am still astounded by the fact that no one in classical complexity theory thought of randomized advice before quantum computing came along. Since I am already making daft conjectures: I conjecture that every ALL-explosion with quantum advice has a randomized-advice analog. (Actually, this may be provable.)
16. Scott Says:
Sanketh #15: I formulated exactly that conjecture in my 2006 QMA/qpoly paper, where I called it the “Quantum Advice Hypothesis.” However, I’m not sure whether I stand behind the hypothesis: I say only that, if it’s false, then I really want to see a counterexample.
17. Scott Says:
Joshua #13: Yes, the intuition is simply that a probability distribution or quantum state on poly(n) qubits (i.e., /rpoly or /qpoly advice) takes exponentially many bits to specify even approximately. So in some sense, the advice state can always encode the entire truth table of whatever language you’re trying to decide, on all inputs of size n. The question is just whether, given an input x, your complexity class is able to read out the xth entry of the truth table by measuring the advice.
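A minimal numerical sketch of that intuition (illustrative only, not from the paper): n qubits carry 2^n amplitudes, enough room to store an entire truth table as a sign pattern, though a single standard measurement cannot read out a chosen entry.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
truth_table = rng.integers(0, 2, size=2 ** n)   # f: {0,1}^n -> {0,1}

# Encode all 2^n values of f into the signs of one n-qubit state.
psi = ((-1.0) ** truth_table) / np.sqrt(2 ** n)

print(psi.shape)   # (8,) -- exponentially many amplitudes in n
# A standard-basis measurement of psi returns a uniformly random x,
# revealing nothing about f(x) for the x you actually care about;
# the question is whether a class's extra power lets it do better.
```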
18. Sniffnoy Says:
Scott #16: Interesting! That’s actually kind of surprising…
19. Sanketh Says:
Scott #16: Sorry… I remember now, I recall reading that part of the paper and then immediately asking John Watrous if he had any intuition for QRG(1)/qpoly. (If there is anyone on the planet who has an intuition for QRG(1), it is him.)
(Professor Watrous likes to call QMA with two competing provers QRG(1); I occasionally call it QS_2^p, although that is a horrible practice, since S_2 is an operator on complexity classes…)
If I may ask, why do you no longer stand behind it? Do you smell a counterexample??
20. Scott Says:
Sanketh #19: No, I don’t have a proposed counterexample in mind—it’s just that the space of complexity classes is vast, and within that space, probably most things happen somewhere unless there’s a very good reason why they can’t.
21. Scott Says:
Sniffnoy #11: I contacted Vincent Russo about the Complexity Zoo’s expired SSL certificate, and he’s on the case (though it might take the Waterloo people a few days because of a long weekend).
22. Job Says:
Non-collapsing measurements by themselves sound really powerful.
I’m surprised by your N^1/4 lower bound for Grover search under PDQP.
I have to give that some thought.
23. Greg Kuperberg Says:
I am relieved (but knowing Scott, not surprised) to see the key acknowledgment in the paper that you already get PDPP/rpoly = ALL. The paper goes on to say:
Indeed, the only reason to state these results in terms of quantum advice in the first place, is that quantum advice has been a subject of independent interest whereas randomized advice has not.
Fair enough, but this is closely related to a good reason to give the initial statement in terms of classical randomized advice instead. Namely, it is a huge cliché, and often an outright misconception, that non-determinism “in physics” is the same thing as quantum voodoo. In reality, non-determinism is already there in the Bayesian interpretation of classical probability, as well as in thermodynamics and statistical mechanics. (In fact, the physical concepts of “temperature” and “entropy” require non-determinism whether or not anything is quantum; or at least they are vastly easier to understand that way.) In my view, classical Bayesianism is a great analogy for learning and demystifying quantum probability. Results such as PDPP/rpoly = ALL = PostBPP/rpoly are also helpful for the basic point.
So I’m glad that this was mentioned, but I would be even happier if it were mentioned in page 2, and maybe in the abstract as well, rather than on page 6. Of course it’s up to Scott, though; it’s not my paper.
24. Scott Says:
Greg #23: I actually spent some time looking for a place to discuss PDPP/rpoly earlier in the paper, but couldn’t figure out how to do it without interrupting the narrative flow.
I should mention an additional issue in this case, though: PDQP has been a subject of (some) independent interest, whereas PDPP has not.
Or to put it another way: treating quantum states realistically might be a mistake, but it’s a mistake that’s tempted many, many people, which makes it of greater interest to study the mathematical consequences of that mistake, than of a mistake that nobody ever made. 🙂
25. New top story on Hacker News: PDQP/qpoly = ALL – Technical Nair Says:
[…] PDQP/qpoly = ALL 3 by weinzierl | 0 comments on Hacker News. […]
26. dlr Says:
Hi Scott, a question from a complete layperson who couldn't understand maybe 1/3 of QC Since Democritus. Now that Alibaba has announced it figured out how to simulate Bristlecone's 72 qubits using classical computing, does this have important implications for (does it change) the conventional wisdom on the limits or rate of general advancement in classical computing, or is it more of a narrow, idiosyncratic trick? Thanks!
27. Scott Says:
dlr #26: Needless to say, the interesting recent Alibaba paper was a subject of conversation at the Google workshop. But in the end, I don’t think it changes the situation that much. The methods it uses—e.g., tensor network contraction, my and Lijie Chen’s recursive method for saving memory—are the same basic ones that were known before; the chief technical innovation here is to massively parallelize the computation of a single amplitude, so that you get a speedup from parallelization even if you don’t want many amplitudes. Apparently, previous simulations didn’t do this largely just because they wanted thousands of amplitudes rather than just one. The other thing to bear in mind is that, while the classical simulation side gets better, the quantum side can get better too! Apparently, there were special features of the gate set and the circuit layout that the Alibaba paper took clever advantage of, and by modifying the quantum supremacy hardware as well as the circuits run on that hardware, one could square (or more) the running time of the best known classical simulation, holding the circuit depth constant.
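For context, here is what classical simulation looks like at its most naive: a brute-force statevector sketch whose memory grows as 2^n. This is the baseline that tensor-network contraction and the memory-saving methods mentioned above improve on (the Alibaba paper's techniques are far more sophisticated):

```python
import numpy as np

def apply_gate(state, gate, target, n):
    # Reshape the 2^n amplitudes into n binary axes, contract the 2x2
    # gate against the target qubit's axis, then restore axis order.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 3
state = np.zeros(2 ** n); state[0] = 1.0          # |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_gate(state, H, q, n)
print(state)   # uniform superposition: all amplitudes 1/sqrt(8)
```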
28. dlr Says:
Thank you!
29. Sanketh Says:
Scott #20: If we restrict ourselves to the case where we have access to some sort of non-determinism post-advice, then we can always replace the quantum advice by randomized advice. My almost-always-wrong intuition is that this is the only type of ALL-explosion one can have. (ALL-explosions are all about being able to extract a lot of information from a tiny amount of advice, I don’t see a way of doing this without any kind of non-determinism. Do you have any ideas?)
30. Aula Says:
Scott #9: OK, let’s say I pick a Turing machine and try to use your PDQP/qpoly algorithm to decide whether it halts or not. After encoding my chosen TM as a binary string x, it looks like I need a Boolean function f such that f(x)=1 if the TM halts and 0 if it doesn’t. How exactly am I supposed to find that f? I mean, evidently I need to already have the ability to solve the Halting Problem before I can even get started with your algorithm. Now it may well be that you understand how the quantum advice can enhance my ability to find Boolean functions for any possible decision problem, but in that case you seem to have done a very good job of excluding that understanding from the paper.
31. Trishank Says:
Scott, the reason why we say that the Halting problem is undecidable is because it leads to a contradiction. Is it reasonable then to say that PDQP/qpoly is impossible, because it leads to a contradiction? Unless, of course, one allows for the existence of contradictions (which is logically possible, I just don’t think we have seen one).
32. James Gallagher Says:
Thanks for the name check – and highlighting of ungrammatical and expletive language!
I do wonder how I ever ended up posting on your blog, I started on Lubos’ about 10 years ago, and he was great fun, and I still love the guy even though he banned me after an, admittedly, not great posting session where I dissed Bohr and Von Neumann for not really making such great contributions to Science as everyone thinks (and especially annoying for me is the attribution of the invention of the digital computer architecture to Von Neumann)
But anyway, I myself have nothing to contribute at the level you do even with this paper, and my only hope is that humanity progresses scientifically. If I stir the pot a little to help that now and then with my silly comments I would hope you understand.
33. James Gallagher Says:
“you time” is actually a thing
34. Sniffnoy Says:
That is a well-specified function right there. You just stated its definition. It’s not a computable function, but it’s a function. Which is all that’s required for advice — advice doesn’t have to be computable!
(Unrelatedly, Scott, new website bug: The blog no longer seems to be storing my session or whatever when I comment? So my name/email aren’t saved, and I can’t see my moderated comments?)
35. OOOO Says:
Does ALL include undecidable languages?
36. Scott Says:
OOOO #35: Yes. See above.
37. Scott Says:
Aula #30: The entire concept of “advice” is: something that’s fixed for each input length n, and that’s independent of the particular input x∈{0,1}^n, but such that otherwise you don’t need to worry about how it was found. It’s just given to you. The crucial limitation, the thing that makes this model nevertheless interesting, is how many bits (or qubits) long the advice is allowed to be.
While obviously not “practical,” advice has been a central concept in CS theory since Karp and Lipton introduced it in 1982; see Wikipedia for more.
As one example of its applications, if you could show that breaking your cryptosystem was not in P/poly, then you’d get its security no matter how much time the NSA might have spent precomputing data (provided it could only store a reasonable-sized summary of the precomputation). This is why nonuniform adversaries are typically assumed in crypto.
In the paper, I did review and define (quantum) advice, but it’s true that I assumed enough familiarity with the decades-old concept that it wasn’t going to blow the reader’s mind. Probably I should’ve expanded on it in the blog post.
38. Scott Says:
Trishank #31: Absolutely not. There’s nothing the slightest bit contradictory about the class PDQP/qpoly—it just equals ALL, is all. 🙂
The error in your comment is the following: you don’t get a contradiction from anything whatsoever solving the halting problem; you only get a contradiction from a single Turing machine solving it.
But a PDQP/qpoly algorithm is not a single Turing machine, because it includes the advice states |ψ_n⟩. If you liked, you could think of it as an infinite collection of polynomial-time quantum Turing machines (enhanced by non-collapsing measurements), one for each input length n. And there’s absolutely nothing to say that such a collection couldn’t solve the halting problem. (But despite the lack of any diagonalization obstruction, it was still surprising to me that it could be done in polynomial time—read the paper if you want to see the trick.)
39. Trishank Says:
Scott #38: Thanks for your reply!
Okay, but I feel you’re handwaving this. The point is, whether a single classical TM, or an infinite number of quantum TMs, solves the HP, you arrive at a contradiction, because solving the problem means you can realize a machine that halts iff it doesn’t halt. So you have to take that implication seriously.
Also, do you mean an uncountably infinite number of these P-time quantum TMs?
40. Scott Says:
Trishank #39: No, you don’t arrive at a contradiction. You’re not listening to what I say!
In Turing’s proof, you get a contradiction because you assumed a Turing machine could solve the halting problem—and then you manipulate things to feed that TM its own code as input, in such a way that it halts it runs forever when given its own code, and runs forever if it halts when given its own code. Right?
If, by contrast, you assumed some magical oracle to solve the halting problem for Turing machines, then you do not get a contradiction, because a magical oracle is not something whose code you could feed to a Turing machine (and that the TM could then run) in Turing’s diagonalization proof. And indeed, you better not get a contradiction—because a “magical oracle to solve the halting problem” is a perfectly mathematically consistent object, is regularly considered in computability theory, and could probably even be physically realized if it weren’t for the existence of the Planck scale, which Turing’s proof knows nothing about.
If you like, any language in PDQP/qpoly is another potential example of such an oracle. The surprise, in my paper, is simply that being in PDQP/qpoly places no restriction whatsoever on which oracle it could be.
Incidentally, no, we’re not talking about an uncountable infinity of polynomial-time quantum TMs; that doesn’t even make sense, since TMs can be enumerated. We’re talking about a countable infinity, one for each input length n. But, obviously, there are uncountably many countable lists of TMs.
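For readers who want the diagonal trick spelled out, here is the standard argument as Python pseudocode; halts() is the hypothetical decider, and the whole point is that it cannot actually be coded:

```python
def halts(program, argument):
    # HYPOTHETICAL: returns True iff program(argument) halts.
    # Turing's theorem says no such function can actually be written.
    raise NotImplementedError

def paradox(program):
    if halts(program, program):
        while True:        # told "it halts"?  Then loop forever.
            pass
    return                 # told "it loops"?  Then halt at once.

# paradox(paradox) halts iff halts() says it doesn't: contradiction.
# A magical oracle escapes this, because its "code" can't be fed back
# into a Turing machine and run, which is Scott's point above.
```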
41. Michael Musson Says:
Scott, your comment #10 says in essence that one interesting feature of this kind of advice is that it boosts some complexity classes to ALL and not others, and in particular that it sometimes boosts seemingly less powerful classes and not seemingly more powerful ones.
I’m having trouble understanding the latter case. Can you explain an example where the advice surprisingly doesn’t boost to ALL?
42. John Sidles Says:
Plenty of people — definitely including me! — are looking forward to these promised Shtetl Optimized posts.
On-line quantum appetizers in recent weeks have notably included John Martinis’ overview lecture “Quantum Computing and Quantum Supremacy” (HPC User Forum, April 16-18, 2018, video here) — Martinis’ phrase “qubit speckle” in particular is an elegant contribution to quantum language and culture.
At this same HPC User Forum, the quantum computing groups at Microsoft, Intel, D-Wave, Rigetti, and NASA generously shared their various quantum roadmaps (PDFs here).
Recent arxiv preprints that are directly relevant to the above plans for quantum supremacy demonstrations notably include the Alibaba Group’s “Classical Simulation of Intermediate-Size Quantum Circuits” (arXiv:1805.01450), Dalzell / Harrow / Koh / Placa “How many qubits are needed for quantum computational supremacy?” (arXiv:1805.05224), and Willsch / Nocon / Jin / De Raedt / Michielsen “Testing quantum fault tolerance on small systems” (arXiv:1805.05227).
Earlier this month, too, I personally enjoyed a wonderfully interesting visit to LIGO Hanford Observatory, where folks are super-busy preparing for the first quantum-squeezed runs of Advanced LIGO … I’d be pleased to post a Shtetl Optimized comment comparing and contrasting the various “quantum yanni(s)” that real-world gravitational-wave observations encounter with Scott’s account of the various “quantum laurel(s)” that planned computational supremacy demonstrations are encountering.
There’s so much top-quality / enduring-value quantum research going on in 2018, that it seems a pity to expend overmuch Shtetl Optimized bandwidth on culture-war topics.
43. aa Says:
Should a new grad student focus on quantum computing in search of tomorrow’s jobs? https://www.barrons.com/articles/intel-puts-its-own-spin-on-quantum-computing-1526922383
44. aa Says:
Is ALL = PP/poly a possibility?
45. aa Says:
Should we be worried about quantum? And more importantly, does the claim that ‘It has been known since the 1980s that quantum computers would be great at factoring large numbers, which is the foundation of public key cryptography’ imply that someone beat Shor?
46. Scott Says:
aa #43-45:
I can’t put myself into the mindset of doing anything whatsoever “in search of tomorrow’s jobs.” Like, we all need to support ourselves, and most of us like to do things considered exciting and relevant. But there are many paths in life besides academia, and for the academic track to be worth it at all, I’d hope you’d feel free enough to choose a subject that intrinsically excites you.
No, PP/poly is not ALL, by a counting argument (spelled out below): there are only exp(n^O(1)) possible advice strings at each n, which is far fewer than the exp(2^n) different Boolean functions.
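The counting argument, spelled out (standard, with C any uniform base class):

```latex
% At each input length n there are at most 2^{n^{O(1)}} advice strings,
% and only countably many machines, so C/poly decides at most that many
% distinct behaviors at length n, versus 2^{2^n} Boolean functions:
\[
  2^{\,n^{O(1)}} \;\ll\; 2^{\,2^{n}} \qquad \text{for large } n,
\]
% hence C/poly misses almost every function, and in particular
% C/poly is not ALL, whatever the base class C.
```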
“Quantum factoring was known since the 80s” is such a risible error that I’m not inclined to click the link to see what else is wrong there.
47. asdf Says:
Is this thing about quantum speed limits significant?
Do the limits in the quantum realm follow from the Schroedinger equation?
48. Thomas Says:
Hi Scott, I was attending Eurocrypt in Tel Aviv and I saw you walking around, unfortunately I did not get the chance to come to you asking for an autograph 🙂 any talks you attended and found interesting?
49. Scott Says:
asdf #47: Speed limits and time/energy tradeoffs in general are an extremely interesting story. I can’t comment on this new work, claiming that QM isn’t needed for the tradeoffs, because I’m not familiar with it. The thing to look for is whether QM is getting snuck in through the back door—certainly I’ve had many disagreements with Margolus in the past where he claimed that QM wasn’t needed for something, whereas I thought it clearly, obviously WAS needed, and we never even got past the semantics (i.e., what it means to “need QM” for something, which supposedly “classical” facts are really just QM in disguise) to isolate what the core disagreement was.
Yes, the Schrödinger equation does imply a tradeoff between time and energy. If, on the other hand, you want absolute limits on processing speed or information storage density, without needing to say anything about energy, then it becomes a statement of quantum gravity rather than pure QM.
50. Scott Says:
Thomas #48: I didn’t attend nearly as many talks as I should have, but I really liked Matt Green’s keynote, with its concrete examples of terrible random number generation practices, and also the talk about lower bounds for discrete log with preprocessing in the generic group model.
51. Aaron Smith-Teller Says:
Google is now going straight for 72 qubits.
This is not a coincidence because nothing is ever a coincidence.
52. Sid Says:
Hi Scott. I apologize for going on a complete tangent here, but I was curious if you had come across and read this new book The Great Formal Machinery Works: Theories of Deduction and Computation at the Origins of the Digital Age by Jan von Plato. It feels a bit like Quantum Computing Since Democritus, but more technical, and more historical, perhaps closer in spirit to Penrose’s Road to Reality.
Just wanted to put that out there.
53. RandomOracle Says:
Scott, speaking of lower bounds in the generic group model, are there any known (quantum) lower bounds for LWE (or some other problems that are considered good candidates for post-quantum crypto) in some reasonable oracle model?
54. asdf Says:
Scott, thanks. Another thing I wonder: where does the Schrödinger equation “come from”? I.e. is there a way to derive it from the Hilbert space formulation by adding some presumptions, and what are the presumptions? I googled around about this a little and didn’t find anything clear.
55. Joshua Zelinsky and Eve Zelinsky Says:
One more question then: PDQP is only slightly bigger than BQP but unlike BQP is unlikely to be physically realizable in any meaningful fashion. You mention in the paper that we know that BQP/qpoly is contained in QMA/poly intersect coQMA/poly which gives us that BQP/qpoly is not all. Given your comment 17 above, I’m now wondering if I should be surprised that BQP/qpoly isn’t ALL.
Is there any reasonably natural class between PDQP and BQP where it is open whether giving it qpoly advice gives ALL? What happens if we look at variants of PDQP where we restrict how many times we may take a measurement without collapse, possibly as a slow growing function of the input?
56. Scott Says:
Sid #52: No, I hadn’t come across it.
57. Scott Says:
RandomOracle #53: Well, of course there’s the BBBV lower bound and the collision lower bound, which apply whenever you assume there’s so little usable structure in your cryptosystem that a quantum algorithm might as well either brute-force search or search for collisions, respectively. If you assume more structure than that, then I’m not aware of quantum black-box lower bounds with relevance to cryptographic primitives like LWE—but maybe someone else is (or maybe I’m forgetting something obvious)? There was the seminal work of Oded Regev around 2005, which related the post-quantum security of LWE and lattice-based cryptography to the Hidden Subgroup Problem over the dihedral group and the average-case subset sum problem, but that work wasn’t in the black-box setting. (Indeed, in the black-box setting, the Hidden Subgroup Problem is quantumly easy, by the 1997 result of Ettinger, Hoyer, and Knill.)
58. Scott Says:
Joshua and Eve #55: Those are great questions, and you can tell I think so because they’re both explicitly discussed in the paper! 😀
A good class intermediate between BQP and PDQP is QCSZK (i.e., Statistical Zero Knowledge with quantum verifier and classical interaction with the prover). I left as my main open problem whether QCSZK/qpoly = ALL.
I also discussed what happens when only a small number of non-collapsing measurements are allowed. Briefly: if there exist good enough 3-query Locally Decodable Codes (LDCs), then it’s possible to do ALL using only 3 measurements (let’s say, 2 non-collapsing and then a final one that’s collapsing without loss of generality). Even using the 3-query LDCs that are already known, it should be possible to do this using a subexponential-size advice state and/or subexponential time, but I didn’t work out all the details (just gave the relevant references in the paper).
I don’t know whether it should be possible to do ALL with only two measurements, and subexponential time and a subexponential-size state. A central obstacle here is the 2003 no-go theorem of Kerenidis and de Wolf, which says that there are no 2-query LDCs of subexponential size. So, that rules out the approach taken in my paper, but it leaves open the possibility of some other approach. This is probably related to the earlier question of whether QCSZK/qpoly = ALL.
59. aa Says:
P=BPP implies rpoly=poly and so PP/rpoly=PP/poly?
60. Scott Says:
aa #59: No. P=BPP doesn’t mean that randomness can be eliminated in every situation, and indeed, PP/poly vs. PP/rpoly is one situation where we know for certain that it can’t be.
61. aa Says:
What exactly is rpoly?
poly is poly size advice string.
rpoly is poly size advice string (with random pepper and salt).
Both look edible to a hungry person.
62. Scott Says:
aa #61: By academic standards, I spend a ridiculous amount of time answering questions on my blog. But it’s all worth it whenever I see my replies met by that level of comprehension and engagement.
63. Joshua Zelinsky Says:
And this shows I should finish reading papers before asking questions about them. (Also, that comment labeled Joshua and Eve was from me; Google autocomplete does do weird things sometimes.)
64. Petter Says:
I am certainly not an expert on these things, but I like reading your blog. But I have a question about this paper.
Theorem 2 gives a method to compute any f(x). But it seems to require an advice state consisting of g(x) for all x. Evaluating g(x) seems as hard as evaluating f(x), so by the time the advice state is created, all work seems to be done?
65. Scott Says:
Petter #64: As discussed in earlier comments, that’s precisely the idea with advice—that you don’t care how long it takes to prepare it, it’s just a resource that’s given to you. The restrictions, with advice, are
(1) you get the same advice state for every input of length n, and
(2) the advice is limited in size (typically, to poly(n) bits or qubits).
With those constraints, you’re often severely limited in what you can compute, even supposing you had all the time in the world to prepare the advice state, because of a “communication bottleneck” issue. That’s what happens, for example, with BQP/qpoly and QMA/qpoly. The point of the new paper was to show that, for slightly non-obvious reasons, it’s not what happens with PDQP/qpoly.
66. RandomOracle Says:
Scott #57: Thanks!
Another, unrelated, question: given 2 efficiently computable functions f,g : {0,1}^n -> {-1, +1}, is computing their forrelator (sum_{x,y} f(x) g(y) (-1)^{x.y}) GapP-complete?
67. Scott Says:
RandomOracle #66: Yes. Indeed, it’s GapP-complete even in the special case that f is the constant all-1 function, in which case the sum reduces to 2^n sum_y g(y).
68. RandomOracle Says:
Scott #67: I’m a bit confused. When f is the constant all-1 function, the sum reduces to sum_{x,y} (-1)^{x.y} g(y). Why would this be 2^n sum_y g(y)? (it seems to me like that would be the case if (-1)^{x.y} weren’t there)
If we take x and y to be one bit, and g(0) = -1, g(1) = 1, then sum_y g(y) = 0. However, sum_{x,y} (-1)^{x.y} g(y) = -2.
Am I missing something?
69. Scott Says:
RandomOracle #68: If f is the all-1 function, then for every nonzero value of x, you get complete cancellation, with (-1)^{x.y} positive and negative equally often as you range over the y’s. So then all that’s left is x=0, where you get sum_y g(y). Sorry, I was mistaken about the 2^n factor.
70. RandomOracle Says:
Scott #69: But shouldn’t you sum over x to get the cancellation? It’s true that the x = 0 case is sum_y g(y), but for some non-zero x it’s not necessarily the case that sum_y g(y) (-1)^{x.y} is 0.
So shouldn’t it be:
sum_{x,y} g(y) (-1)^{x.y} = sum_y g(y) (sum_x (-1)^{x.y})
For all non-zero y’s, we have that sum_x (-1)^{x.y} = 0.
So then the sum becomes g(0) * 2^n. This is also what I’ve been getting with the examples I’ve tried.
71. Scott Says:
RandomOracle #70: Err, yes, you’re right. I apologize for denseness.
Just for you, here’s a (hopefully) corrected GapP-hardness proof. Set f(0^n) = -1, and f(x) = 1 for all nonzero x. Then we can calculate:
sum_{x,y} f(x) (-1)^{x.y} g(y) = 2^n g(0^n) - 2 sum_y g(y).
So the value of the forrelation encodes the sum of exponentially many g-values, as desired.
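[As a quick sanity check, this identity can be verified by brute force for small n — a sketch, with variable names that are ours:

```python
import numpy as np
from itertools import product

n = 3
xs = list(product([0, 1], repeat=n))
rng = np.random.default_rng(1)
g = {y: int(rng.choice([-1, 1])) for y in xs}      # arbitrary +/-1 function
f = {x: (-1 if x == (0,) * n else 1) for x in xs}  # f(0^n) = -1, else +1

lhs = sum(f[x] * (-1) ** sum(a * b for a, b in zip(x, y)) * g[y]
          for x in xs for y in xs)
rhs = 2 ** n * g[(0,) * n] - 2 * sum(g.values())
assert lhs == rhs  # matches 2^n g(0^n) - 2 sum_y g(y)
```]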
72. RandomOracle Says:
Scott #71: Ah, right… so essentially in sum_x f(x) (-1)^{x.y}, for non-zero y, you flip one of the (-1)’s, but the rest stay the same and so the sum is -2. Cool! Thanks 😀
73. aa Says:
OK, I don’t understand why randomized advice could be different from deterministic advice, when both are essentially precomputed.
74. Scott Says:
aa #73: It’s because an n-bit string takes n bits to specify, whereas a probability distribution over n-bit strings takes exp(n) bits to specify (even approximately). The difficulty is reading out a desired bit from among those exp(n)—but some complexity classes (like PostBPP, PP, IP(2), and PDQP) can do it. Perhaps the easiest to understand is PP: with that class, you only need to guess the right answer with some probability greater than 1/2, so you can just take a sample from the advice distribution and hope that it happens to tell you the answer for your particular input x.
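[To put numbers on that — a standard count, phrased by us: an advice string is n bits, while a distribution over n-bit strings is a point in a (2^n − 1)-dimensional simplex:

```latex
\[
x \in \{0,1\}^n \;:\; n \text{ bits}
\qquad\text{vs.}\qquad
(p_x)_{x \in \{0,1\}^n},\;\; p_x \ge 0,\;\; \sum_x p_x = 1 \;:\; 2^n - 1 \text{ free parameters.}
\]
```]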
75. Job Says:
If non-collapsing measurements boost Grover’s lower bound to N^1/4, then I wonder if you could pair non-collapsing measurements with quantum-simulation optimizations to actually improve performance for some problems.
That seems like a long shot since the quantum simulator’s performance depends on the total number of bits (including ancilla and output) rather than just the input bits…
It’s like the classical simulator has the advantage of a shorter circuit depth (N^1/4 vs N^1/2) at the cost of a larger circuit width (plus all the amplitude extravaganza).
76. Scott Says:
Job #75: A detail, but we only know how to do Grover search in ~N^{1/3} time in the PDQP model.
~N^{1/4} is our best current lower bound, but my guess is that ~N^{1/3} is actually the tight answer.
I seriously doubt that this observation about PDQP could lead to better classical algorithms to simulate quantum circuits. Now that you mention it, though, what the observation might be useful for is the converse: putting limits on how much advantage classical simulation algorithms could get over brute force for various tasks, if they’re not going to be solving CircuitSAT or whatever in sub-2^n time.
77. Nick Says:
Scott #40:
is regularly considered in computability theory,
Uhh, what?
78. Scott Says:
Nick #77: Just build a “Zeno computer,” which simulates the first step of a Turing machine in 1 second, the second step in 1/2 second, the third in 1/4 second, and so on infinitely, raising a flag if the machine ever halts within the 2-second time limit. Sure, this would require an unbounded amount of energy. But in a hypothetical world with no Planck scale, no quantum gravity, and no risk of anything collapsing to a black hole because you tried to stuff too much or do too much in too small a region of spacetime, what would there be in fundamental physics to rule this out? (“Mere engineering difficulties” are trivial to list and don’t count. 🙂 )
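[In symbols, the schedule fits inside the stated limit because the step durations form a convergent geometric series — just restating the comment above:

```latex
\[
T_{\text{total}} \;=\; \sum_{k=0}^{\infty} 2^{-k}\,\mathrm{s}
\;=\; 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots
\;=\; 2\,\mathrm{s},
\]
```

so infinitely many Turing-machine steps complete within 2 seconds.]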
79. Job Says:
I guess a typical simulator would also be able to support post-selection?
An ideal simulator would exactly match the power of a QC and nothing more.
It’s not efficient to simulate a QC using a PDQP or PostBQP machine.
That’s like using NP to simulate BPP.
80. Sniffnoy Says:
BTW, I guess I don’t get any credit for my repeatedly bugging you about PDPP? 😛
81. Scott Says:
Sniffnoy #80: I apologize for having totally forgotten about your comments—though of course, I can’t prove that there was no subconscious influence on me.
But the other issue is: should I actually acknowledge you in my paper as “Sniffnoy”? 😀 I’ll do it if I can use your real name.
82. marc Says:
9 pages including references, not 8. 🙂
83. Scott Says:
marc #82: Well, it was 8 when I put the post up. As mentioned in the update, I since added some material.
84. Mateus Araújo Says:
I wonder if you get the same complexity class as PDQP if instead of non-collapsing measurements you do regular measurements with the modified Born rule proposed by Galley and Masanes in arXiv:1610.04859, where \(p(a|\psi) = \mathrm{tr}\big(E_a\,(|\psi\rangle\langle\psi|)^{\otimes N}\big)\) for some effects \(E_a\) that make the probabilities normalised and non-negative.
It seems to work for your example of finding collisions.
85. jonas Says:
Job #79: no, definitely not post-selection. Post-selection is too strong, even without quantum madness, because it lets you go from random algorithms to nondeterministic algorithms.
86. Sniffnoy Says:
Scott: Yes of course you can use my real name. It’s not like I really make much of an effort to hide. 🙂 But yes definitely glad to see PDPP appear “for real” (it’s not like I was going to write a paper on or even mentioning it anytime soon!), so thanks for that! 🙂
87. Job Says:
Jonas #85, Scott says:
Postselection is the power of discarding all runs of a computation in which a given event does not occur.
PostBQP in particular is “post-selecting on a measurement yielding a specific outcome”.
If a quantum simulator has access to all amplitudes, at measurement time, why can’t it evaluate to one of the post-selected outcomes? Isn’t that the same as post-selection?
88. Sniffnoy Says:
Also thank you Scott for the acknowledgement. 🙂
ad92f26f60e69733 | Can quantum computing be simulated by an optical network? | PhysicsOverflow
Can quantum computing be simulated by an optical network?
On page 43 of http://www.mat.univie.ac.at/~neum/ms/optslides.pdf, we read
8. Simulating quantum mechanics
The simulation of quantum computing by classical fields is essentially achieved by using an optical network in which each quantum level is modelled by a corresponding mode of the electromagnetic field.
The linearity of the Maxwell equations then directly translates into the superposition principle for pure quantum states.
Thus it is possible to simulate arbitrary quantum systems which have a finite number of levels by the Maxwell equations, and hence by a classical model.
Therefore we shall look a little more closely into the reasons for this ability to simulate quantum systems.
...some explanations about second order coherence theory of the Maxwell equations...
As a consequence, it is possible (at least in principle) to simulate with classical electromagnetic waves and suitable classical linear optical networks any quantum system that can be embedded into the single photon quantum system.
Since all Hilbert spaces arising in applications of quantum physics are separable, they have a countable basis, and can be embedded into the single photon quantum system, at least in principle.
Thus it appears that all quantum systems can be simulated by classical electromagnetic waves!
Of course, a practical realization may be difficult.
How is this optical network supposed to work? Does it really work? I don't see how the fact that all separable Hilbert spaces are isomorphic to each other would allow me to conclude that the time evolution of a single photon by the Schrödinger equation will be able to simulate the time evolution of an arbitrary quantum system by the Schrödinger equation. But "optical network" sounds very concrete to me, so maybe some simple examples of how to simulate the time evolution of a two-photon quantum system by them could help me?
asked Dec 16, 2014 in Theoretical Physics by Thomas Klimpel (70 points)
I suppose @ArnoldNeumaier (who wrote the lecture notes) could answer this question?
In http://www.mat.univie.ac.at/~neum/ms/hidden.pdf @ArnoldNeumaier writes: "With more beam splitters, through which several narrowly spaced beams are passed, one can produce a cascade of more complex tensor product states. Indeed, Reck et al. [23] showed that (i) any quantum system with only finitely many degrees of freedom can be simulated by a collection of spatially entangled beams; (ii) in the simulated system, there is for any Hermitian operator H an experiment measuring H; (iii) for every unitary operator S, there is an optical arrangement in the simulated system realizing this transformation, assuming lossless beam splitters."
Neither that article nor its reference [23] can be found in the reference section of the presentation quoted in the question. So the answer to the question seems to be "yes!", but one would have to read (and understand) "some" references proving this to be sure.
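[To make claim (iii) of Reck et al. concrete, here is a minimal numerical sketch, assuming only that a lossless beam splitter (with phase shifters) acts as an arbitrary 2×2 unitary on a pair of modes. It triangularizes a random unitary with two-mode Givens rotations and reconstructs it, which is the linear-algebra core of the Reck decomposition; the actual triangular mesh layout of the optical circuit is not modelled here:

```python
import numpy as np

def zeroing_rotation(a, b):
    """Return a 2x2 unitary G with G @ [a, b] = [r, 0], r = sqrt(|a|^2+|b|^2)."""
    r = np.hypot(abs(a), abs(b))
    if r < 1e-12:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)],
                     [-b, a]], dtype=complex) / r

def decompose(U):
    """Zero out the sub-diagonal of a unitary with two-mode rotations.

    Since U is unitary, the triangularized result is in fact diagonal,
    so U factors into 2x2 'beam splitter' blocks plus output phases.
    """
    U = U.astype(complex).copy()
    n = U.shape[0]
    ops = []
    for col in range(n - 1):
        for row in range(n - 1, col, -1):
            G = zeroing_rotation(U[row - 1, col], U[row, col])
            U[[row - 1, row], :] = G @ U[[row - 1, row], :]
            ops.append(((row - 1, row), G))
    return ops, np.diag(U)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)          # a random 4-mode unitary
ops, phases = decompose(Q)

V = np.diag(phases)             # rebuild Q from the two-mode blocks
for (i, j), G in reversed(ops):
    E = np.eye(4, dtype=complex)
    E[np.ix_([i, j], [i, j])] = G.conj().T
    V = E @ V
assert np.allclose(V, Q)
```

Each 2×2 block plays the role of one beam splitter plus phase shifters, and the residual diagonal corresponds to output phase shifters; counting the operations gives the N(N−1)/2 elements of the Reck mesh.]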
69473e27c11d560f | Mathematical Equations That Remarkably Impacted The World
Calculations, equations, and math are continuously revolutionizing our world. From the time mankind wanted to calculate the field area for growing crops, there has been a thirst to know and understand the secrets of the world. Why do apples always fall down rather than fly? Is there a pattern to the movement of the stars? What can assist in navigation? Why do birds fly while we cannot? These questions of curious minds led to a thirst for knowledge, and the answers provided the means to modernize the world one invention at a time!
1- Calculus
Due to its broad applicability, calculus is used not only in mathematics but in engineering, biology, physics, chemistry, and many more branches of science. Calculus can help you determine weather patterns, the movement of sound and light, and the motion of astronomical objects.
Two closely related landmarks deserve mention here as well. Euler's polyhedra formula, V − E + F = 2, relates the vertices, edges, and faces of any convex polyhedron and laid the groundwork for topology. The Fourier transform decomposes a signal into its constituent frequencies and underpins modern signal processing, from audio compression to medical imaging.
2- Law of Gravity
Gravity is an undeniable force responsible for the existence of our planet. The law of gravity helps in the evaluation of weight and speed, leading to significant modernizations including race cars and airplanes.
3- Logarithms
There are many examples of the use of logarithms in the real world, from interest rates to Google page rankings. Logarithms turn multiplicative changes into additive ones, which makes large-scale growth easy to measure and compare.
4- Maxwell’s Equations
It is the set of 4 differential equations that describe the relation between electricity and magnetism. These equations are the basis for understanding the behavior of electromagnetism. From MRI scanners in hospitals to computers, the credit goes to the basic understanding of Maxwell's equations.
5- Navier-Stokes Equations
These differential equations helped us understand the behavior of flowing fluids, such as smoke rising from a cigarette, water moving through pipes, and airflow over plane wings. The Navier-Stokes equations are also used to model the weather and observe ocean currents.
6- Normal Distribution
Normal probability, also known as the normal distribution, forms a bell curve and is significant in statistics. It is used in the social sciences, physics, and biology to describe the behavior of large groups of independent processes. Measurement errors, height, IQ scores, and blood pressure all approximately follow a normal distribution.
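A quick illustration, using only Python's standard library: the fraction of a normally distributed population within k standard deviations of the mean follows directly from the error function.

```python
import math

# P(|X - mu| < k*sigma) for a normal distribution equals erf(k / sqrt(2))
for k in (1, 2, 3):
    print(f"within {k} sigma: {math.erf(k / math.sqrt(2)):.4f}")
# within 1 sigma: 0.6827   (the familiar 68-95-99.7 rule)
# within 2 sigma: 0.9545
# within 3 sigma: 0.9973
```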
7- Quadratic Equation
Various phenomena are modeled by quadratic equations, including the flight of a cannonball, the arc of a golf ball, and the path of a dive. You can also calculate expected profit using a quadratic equation: it can prevent unwanted surprises and tell you what to expect in the future. Even in a business as simple as selling bottled water, it can help you estimate how many bottles you have to sell to generate the profit you want, as the sketch below shows.
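A minimal sketch of the break-even idea; the profit model and its coefficients below are made up purely for illustration:

```python
import math

# Hypothetical model: profit(x) = a*x^2 + b*x + c for x bottles sold
a, b, c = -0.01, 5.0, -400.0

disc = b * b - 4 * a * c                  # discriminant of the quadratic
if disc >= 0:
    roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
    print(roots)  # [100.0, 400.0]: profit is positive between these sales counts
```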
8- Relativity
Relativity opened the door to understanding, be it of outer space or of the speed of light. It provided us with the idea that the speed of light is universal while the passage of time differs for people or objects depending on their speed. Relativity helped us understand the fate, structure, and origin of the universe.
9- Schrodinger’s Equation
The behavior of atomic and subatomic particles is described by Schrödinger's equation. It advanced the understanding of quantum physics and hence played a huge role in the development of computing devices. Computational chemistry is a direct application of the Schrödinger equation, and it is currently being used in medication design and engineered food.
10- Second Law of Thermodynamics
According to the second law of thermodynamics, heat flows from a hot environment to a cold one due to the difference in temperature. This is the concept behind the internal combustion engines used in airplanes, ships, cars, and motorcycles. The law applies to all engine cycles and led to the progress of modern vehicles.
11- The Pythagorean Theorem
Whenever you need to find out whether a triangle is acute, right-angled, or obtuse, you can use the Pythagorean theorem. It made the life of mathematicians easier, as it helps them find the missing length of a side of a right triangle.
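A minimal sketch of that classification test, comparing a² + b² with c² for the longest side c:

```python
def classify_triangle(a, b, c):
    """Classify a triangle with side lengths a, b, c."""
    a, b, c = sorted((a, b, c))           # make c the longest side
    if a + b <= c:
        raise ValueError("not a triangle")
    lhs, rhs = a * a + b * b, c * c
    if lhs == rhs:
        return "right-angled"
    return "acute" if lhs > rhs else "obtuse"

print(classify_triangle(3, 4, 5))  # right-angled
print(classify_triangle(2, 3, 4))  # obtuse
```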
12- The square root of -1
Defining i as the square root of -1 gave rise to the supremely elegant complex numbers. When an equation has complex solutions, they are expressed in terms of i. With the help of this definition, mathematicians were able to find symmetries and properties of numbers that are now applied in signal processing and electronics.
13- Wave Equation
The wave equation, as the name indicates, describes the behavior of waves, such as water ripples, vibrating guitar strings, and the light of an incandescent bulb. It was one of the first differential equations to be understood, and it helped us understand other differential equations as well.
The world of mathematics is abundant with equations that helped us revolutionize the world as we know it today. We were not only able to understand the concepts behind natural phenomena but also to manipulate them for modern advancement. These were just a few examples; stay tuned to learn more!
8364ff914e9e5ec3 |
Numerical Study of a Lyapunov functional for the Complex Ginzburg-Landau Equation
R. Montagne¹, E. Hernández-García, and M. San Miguel
Departament de Física, Universitat de les Illes Balears,
and Institut Mediterrani d’Estudis Avançats, IMEDEA (CSIC-UIB)
E-07071 Palma de Mallorca (Spain)
¹ On leave from Universidad de la República (Uruguay).
July 2, 2020
We numerically study, in the one-dimensional case, the validity of the functional calculated by Graham and coworkers (R. Graham and T. Tel, Phys. Rev. A 42, 4661 (1990); O. Descalzi and R. Graham, Z. Phys. B 93, 509 (1994)) as a Lyapunov potential for the Complex Ginzburg-Landau equation. In non-chaotic regions of parameter space the functional decreases monotonically in time towards the plane wave attractors, as expected for a Lyapunov functional, provided that no phase singularities are encountered. In the phase turbulence region the potential relaxes towards a value characteristic of the phase turbulent attractor, and the dynamics there keeps it approximately constant. There are, however, very small but systematic deviations from the theoretical predictions that increase as one goes deeper into the phase turbulence region. In more disordered chaotic regimes, characterized by the presence of phase singularities, the functional is ill-defined and therefore not a correct Lyapunov potential.
Keywords: Complex Ginzburg-Landau Equation, Nonequilibrium Potential, Lyapunov Potential, Spatio-Temporal Chaos
PACS: 05.45.+b,05.70.Ln
I Introduction
The Complex Ginzburg-Landau Equation (CGLE) is the amplitude equation describing universal features of the dynamics of extended systems near a Hopf bifurcation [1, 2].
Examples of this situation include binary fluid convection [3], transversally extended lasers [4], and chemical turbulence [5]. We will consider here only the one-dimensional case,
∂t A = μA + (1 + i c1) ∂xx A − (1 − i c3) |A|² A,
with A(x, t) a complex field. Suitable scaling of the complex amplitude A, space, and time shows that for fixed sign of μ there are only three independent parameters in (1) (with μ = 1 assumed henceforth). They can be chosen to be c1, c3, and the system size.
The CGLE for μ > 0 displays a rich variety of complex spatio-temporal dynamical regimes that have been recently classified in a phase diagram in the parameter space [6, 7, 8]. It is commonly stated that such nontrivial dynamical behavior, occurring also in other nonequilibrium systems, originates from the non-potential or non-variational character of the dynamics [9]. This general statement needs to be qualified because it involves some confusion in the terminology. For example, the term “non-variational” is often used to mean that there is no Lyapunov functional for the dynamics. But Graham and co-workers, in a series of papers [10, 11, 12, 13, 14], have shown that a Lyapunov functional does exist for the CGLE, and they have constructed it approximately in a small-gradient approximation. The correct statement for the CGLE is that it is not a gradient flow. This means that there is no real functional of A from which the right-hand side of (1) could be obtained by functional derivation.
Part of the confusion associated with the qualification of “nonvariational” dynamics comes from the idea that the dynamics of systems having non-trivial attractors, such as limit cycles or strange chaotic attractors, cannot be deduced from the minimization of a potential which plays the role of the free energy of equilibrium systems. However, such an idea does not preclude the existence of a Lyapunov functional for the dynamics. The Lyapunov functional can have local minima which identify the attractors. Once the system has reached an attractor which is not a fixed point, dynamics can proceed on the attractor due to “nonvariational” contributions to the dynamical flow which do not change the value of the Lyapunov functional. This just means that the dynamical flow is not entirely determined once the Lyapunov functional is known. This situation is very common and well known in the study of dynamical properties within the framework of conventional statistical mechanics: the equilibrium free energy of the system is a Lyapunov functional for the dynamics, but equilibrium critical dynamics [15] usually involves contributions, such as mode-mode coupling terms, which are not determined just by the free energy. The fact that the dynamical evolution is not simply given by the minimization of the free energy is also true when studying the nonequilibrium dynamics of a phase transition in which the system evolves between an initial and a final equilibrium state after, for example, a jump in temperature across the critical point [16].
A Lyapunov functional plays the role of a potential which is useful in characterizing global properties of the dynamics, such as attractors, relative or nonlinear stability of these attractors, etc. In fact, finding such potentials is one of the long-sought goals of nonequilibrium physics [17, 18], the hope being that they should be instrumental in the characterization of nonequilibrium phenomena through phase-transition analogies. The use of powerful and very general methods based on these analogies has been advocated by a number of authors [19, 20, 6, 7, 8]. In this context, it is a little surprising that the finding of a Lyapunov functional for the CGLE [12, 13, 14] has not received much attention in the literature. A possible reason for this is that the construction of nonequilibrium potentials has been historically associated with the study of stochastic processes, in particular in the search for stationary probability distributions for systems driven by random noise [17, 18, 21]. We want to make clear that the finding of the Lyapunov functional for the CGLE [12, 13, 14], as well as the whole approach and discussion of the present paper, is completely within a purely deterministic framework and does not rely on any noise considerations. A second possible reason for the relatively little attention paid to the Lyapunov functional for the CGLE is the lack of any numerical check of the uncontrolled approximations made in its derivation. The main purpose of this paper is precisely to report such a numerical check of the results of Graham and collaborators, thus delimiting the range of validity of the approximations involved. We also provide a characterization of the time evolution of the Lyapunov functional in different regions of the phase diagram of the CGLE [6, 7, 8], which illustrates the use of such a potential.
Our main findings are that the expressions of Graham and coworkers behave to a good approximation as a proper Lyapunov potential when phase singularities (vanishing of the modulus of A) are not present. This includes non-chaotic regimes as well as states of phase turbulence. In this last case some small but systematic discrepancies with the predictions are found. In the presence of phase singularities the potential is ill-defined and therefore not a correct Lyapunov functional.
The paper is organized as follows. For pedagogical purposes, we first discuss in Sect. II a classification of dynamical flows in which notions like relaxational or potential flows are considered. The idea of a potential for the CGLE is clearer in this context. In Sect. III we review basic phenomenology of the CGLE and the main analytical results for the Lyapunov functional of the CGLE. Sections IV and V contain our numerical analyses. Section IV is devoted to the Benjamin-Feir stable regime of the CGLE and Sect. V to the phase turbulent regime. Our main conclusions are summarized in Sect. VI.
II A classification of dynamical flows
In the following we review a classification of dynamical systems that, although rather well established in other contexts [17, 18], is often overlooked in general discussions of deterministic spatio-temporal dynamics. Non-potential dynamical systems are often defined as those for which there is no Lyapunov potential. Unfortunately, this definition is also applied to cases in which there is no known Lyapunov potential. To be more precise, let us consider dynamical systems of the general form
where ψ represents a set of, generally complex, dynamical variables which are spatially dependent fields; the right-hand side is a functional of them. The notation ψ* represents the complex conjugate of ψ, and for simplicity we will keep the index implicit. Let us now split the right-hand side into two contributions:
where the relaxational part will have the form
with V a real and scalar functional of ψ, and Γ an arbitrary Hermitian and positive-definite operator (possibly depending on ψ). In the particular case of real variables there is no need to take the complex conjugate, and Hermitian operators reduce to symmetric ones. The remaining part of the dynamics enters through the second term in (3). The important point is that, if the splitting (3) can be done in such a way that the following orthogonality condition is satisfied (c.c. denotes the complex conjugate expression):
then the non-relaxational terms neither increase nor decrease the value of V, which, due to the relaxational terms, becomes a decreasing function of time:
If V is bounded from below then it is a Lyapunov potential for the dynamics (2). Equation (7), that is,
can be interpreted as an equation for the Lyapunov potential associated with a given dynamical system (2). It has a Hamilton-Jacobi structure. When dealing with systems perturbed by random noise, Γ is fixed by statistical requirements, but in deterministic contexts such as the present paper it can be arbitrarily chosen in order to simplify (7).
Solving (7) is in general a difficult task, but a number of non-trivial examples of the splitting (3)-(6) exist in the literature. Some of these examples correspond to solutions of (7) found in the search for potentials for dynamical systems [12, 10, 11]. Other examples just correspond to a natural splitting of dissipative and non-dissipative contributions in the dynamics of systems with well-established equilibrium thermodynamics, as for example models of critical dynamics [15] or the equations of nematodynamics in liquid crystals [22].
Once the notation above has been set up, we can call relaxational systems those for which there is a solution of (7) such that the non-relaxational part vanishes, that is, all the terms contribute to decrease V. Potential systems can be defined as those for which there is a nontrivial (i.e. non-constant) solution to (7). In relaxational systems there is no long-time dynamics, since there is no time evolution once a minimum of V is reached. On the contrary, for potential systems with a non-vanishing non-relaxational part, the minima of V define the attractors of the dynamical flow, but once one of these attractors is reached, nontrivial sustained dynamics might exist on the attractor. Such dynamics is determined by the non-relaxational part of the flow and maintains a constant value for the functional V.
A possible more detailed classification of the dynamical flows is the following:
• Relaxational gradient flows: those dynamical systems for which the non-relaxational part vanishes and Γ is proportional to the identity operator. In this case the time evolution of the system follows the lines of steepest descent of V. A well-known example is the so-called Fisher-Kolmogorov equation, also known as model A of critical dynamics [15], or (real) Ginzburg-Landau equation for a real field:
where the coefficients are real. This equation is of the form of Eqs. (2)-(4) with V given by the Ginzburg-Landau free energy:
• Relaxational non-gradient flows: the non-relaxational part still vanishes, but Γ is not proportional to the identity, so that the relaxation to the minimum of V does not follow the lines of steepest descent of V. The matrix operator Γ might depend on the fields or involve spatial derivatives. A well-known example of this type is the Cahn-Hilliard equation of spinodal decomposition, or model B of critical dynamics for a real variable [15]:
The symmetric and positive-definite operator has its origin in a conservation law for the dynamics.
• Non-relaxational potential flows: the non-relaxational part does not vanish, but the potential V, solution of (7), exists and is non-trivial. Most models used in equilibrium critical dynamics [15] include non-relaxational contributions, and therefore belong to this category. A particularly simple example is
where now ψ is a complex field. Notice that we cannot interpret this equation as being of type 1, because the operator multiplying the functional derivative is not Hermitian, but V is still a Lyapunov functional for the dynamics. Equation (11) is a special case of the Complex Ginzburg-Landau Equation (CGLE), in which the right-hand side is the sum of a relaxational gradient flow and a nonlinear-Schrödinger-type term.
The general CGLE [2] is of the form (8) but with arbitrary complex coefficients. For a special relation between the parameters (discussed in Sect. III), as for example in (11), the Lyapunov functional for the CGLE is known exactly [23]. Such a choice of parameters has important dynamical consequences [24]. Beyond such special cases, the calculations by Graham and coworkers indicate [13, 14] that the CGLE, a paradigm of complex spatio-temporal dynamics, might be classified within this class of non-relaxational potential flows, because a solution of (7) is found. The difficulty is that the explicit form of the potential is, so far, only known as an uncontrolled small-gradient expansion.
• Non-potential flows: those for which the only solutions of (7) are the trivial ones (that is, constant V). Hamiltonian systems, as for example the nonlinear Schrödinger equation, are of this type.
III A Lyapunov Functional for the CGLE
It is well known that for μ < 0 the one-dimensional CGLE (1) has A = 0 as a stable solution, whereas for μ > 0 there are Travelling Wave (TW) solutions of the form
A_TW(x, t) = a e^{i(qx − ωt + φ0)},
with a² = 1 − q², |q| ≤ 1, and a frequency ω fixed by q, c1, and c3. Here
φ0 is an arbitrary constant phase.
The linear stability of the homogeneous solution ((12) with q = 0) with respect to long-wavelength fluctuations divides the parameter space in two regions: the Benjamin-Feir (BF) stable and the BF unstable zone. The dividing line is given by [25, 26]
c1 c3 = 1.
In the BF unstable region (c1 c3 > 1) there are no stable TW solutions, while in the BF stable region (c1 c3 < 1) TW's with wavenumber |q| < q_E are linearly stable. For |q| > q_E, TW's become unstable through the long-wavelength instability known as the Eckhaus instability [27, 28]. The Eckhaus wavenumber is given by
q_E² = (1 − c1 c3) / (3 + 2 c3² − c1 c3).
Recent numerical work for μ = 1 and large system size [6, 7, 8, 29] has identified regions of the parameter space displaying different kinds of regular and spatio-temporal chaotic behavior (obtained at long times from random initial conditions and periodic boundary conditions), leading to a “phase diagram” for the CGLE. The five different regions, each leading to a different asymptotic phase, are shown in Fig. 1 as a function of the parameters c1 and c3. Two of these regions are in the BF stable zone and the other three in the BF unstable one. One of the main distinctions between the different asymptotic phases is in the behavior of the modulus of A at long times. In some regions it never vanishes, whereas in others it vanishes from time to time at different points. A more detailed description of the asymptotic behavior in the different regions is as follows:
1. Non-Chaotic region. The evolution here ends in one of the Eckhaus-stable TW solutions for almost all the initial conditions.
2. Spatio-Temporal Intermittency region. Despite the fact that stable TWs exist, the evolution from random initial conditions is not attracted by them but by a chaotic attractor in which typical configurations of the field consist of patches of TWs interrupted by turbulent bursts. The modulus of A in such bursts typically touches zero quite often.
3. Defect Turbulence. This is a strongly disordered phase in which the modulus of A has a finite density of space-time zeros. In addition, the space and time correlation functions have a quasi-exponential decay [6, 7].
4. Phase Turbulence. This is a weakly disordered phase in which |A| remains away from zero. The temporal correlations decay more slowly than exponentially [6, 7].
5. Bi-Chaos region. Depending on the particular initial condition, the system ends on attractors similar to the ones in regions 3 or 4, or in a new attractor in which the configurations of A consist of patches of phase and defect turbulence.
An approximate Lyapunov functional for the CGLE was calculated by Graham and collaborators [13, 14, 30]. Earlier attempts to find a Lyapunov functional were based on polynomial expansions [23, 31, 32, 33], while more recent and successful approaches focused on solving the Hamilton-Jacobi equation (7) in different ways. This was done first by a minimization procedure involving an action integral [10, 11, 12], and more recently by a more direct expansion method [13, 14, 30]. This last method also provides expressions in higher dimensions, but we will restrict ourselves here to the one-dimensional case. In any case, the solution involves an uncontrolled gradient expansion around space-independent solutions of the CGLE. Such an expansion obviously limits the validity of the result to regions of the phase diagram in which there are no strong gradients. Since the expansion was actually performed in polar coordinates, this excludes the regions in which zeros of the modulus of A are typical, since the phase of A becomes singular there. In particular, the Spatio-Temporal Intermittency, Bi-Chaos, and Defect Turbulence regimes are out of the range of validity of Graham's expansion. The meaningfulness of the potential in the other regions of parameter space still remains an open question because of the uncontrolled small-gradient approximations used to calculate it, and calls for a numerical check.
In their solution of the Hamilton-Jacobi equation, Graham and collaborators find different branches of the Lyapunov functional with expressions valid for different values of the parameters. In particular they identify the BF line (14) as separating two branches of the solution to (7).
The explicit expressions are given in polar coordinates.
In terms of the amplitude, the phase, and their spatial derivatives, the Lyapunov functional per unit length was found [13, 14] to be, for μ < 0:
We note that even in this relatively simple case the result is only approximate, and its structure reveals a highly non-trivial dynamics.
For μ > 0, in the BF stable region (c1 c3 < 1), the expression becomes:
Clearly, it is ill-defined when the modulus of A vanishes.
By writing out the Euler-Lagrange equations associated with the minimization of the potential, the TW solutions (12) are identified as local extrema. Since they occur in families parametrized by the arbitrary phase φ0, the minima associated with the TW of a given wavenumber q are not isolated points but lie on a one-dimensional closed manifold. The non-variational part of the dynamics (the second term in (3)) can be written down explicitly by subtracting the relaxational part from the right-hand side of (1). When evaluated on the manifold of minima with a given q, it is seen to produce constant motion along it. This produces the periodic time dependence in (12) and identifies the TW attractors as limit cycles.
The value of q at which the corresponding extrema change character from local minima to saddle points is precisely the Eckhaus wavenumber q_E. It is remarkable that, although expression (18) was obtained in a gradient expansion around the homogeneous TW, its minima identify exactly all the TW's of equation (1), and their frequencies and points of instability are also exactly reproduced. This gives confidence in the validity of Graham's approximations. It should be stressed, however, that they are not exact and can lead to unphysical consequences. For instance, consider the value of the potential evaluated on a TW of wavenumber q [12]:
For a range of parameter values this expression gives mathematical sense to the intuitive fact that the closer q is to zero, the more stable the associated TW (because its potential is lower). But for some parameter values the minimal potential corresponds to large wavenumbers close to the limit of existence of the TW's. This is counterintuitive and calls for a numerical test. The test will be described below, and it will be shown that wavenumbers close to that limit are out of the range of validity of the small-gradient approximations leading to (18).
We already mentioned in the previous section that the Lyapunov functional for the CGLE is exactly known for special values of the parameters [12, 13, 24]. The corresponding line of parameter space lies in the BF-stable region, as indicated in Fig. 1. In this case it is clear that (1) can be written as
where the functional is the one associated with (9), extended to a complex field. It is readily shown that the non-gradient term is orthogonal to the gradient part, so that this functional is an exact solution of (7) for these values of the parameters, and (21) is a relaxational non-gradient flow (see the classification in Section II).
It is seen that the approximate expressions (17) and (18) greatly simplify on this line, both leading to the same expression:
When expressed in terms of the amplitude and phase, it reproduces the potential in (21). Thus the gradient expansion turns out to be exact on this line.
In the Benjamin-Feir unstable region (c1 c3 > 1) the gradient expansion for the potential becomes [14, 30]:
where, in addition to the previous definitions,
It was noted before that this expression can be adequate, at most, for the Phase Turbulent regime, since in the other BF unstable regimes the modulus of A vanishes at some points and instants, so that (23) is ill-defined.
The long-time dynamics occurs on the attractor defined by the minima of the potential. The Euler-Lagrange equations associated with the minimization of (23) lead to a relationship between the amplitude and phase of A which implies the well-known adiabatic following of the amplitude to the phase dynamics, commonly used to describe the phase turbulence regime by a nonlinear phase equation. The explicit form of this relationship is
It defines the attractor characterizing the phase turbulent regime. Dynamics on this attractor follows from the nonrelaxational part in (3). When (25) is imposed on such nonrelaxational part of the dynamics, the generalized Kuramoto-Sivashinsky equation containing terms up to fourth order in the gradients [34] is obtained [14, 30].
We finally note that in the phase turbulent regime the Lyapunov functional gives the same value [14, 30] when evaluated on any configuration satisfying (25), at least within the small-gradient approximation. This corresponds to evolution on a chaotic attractor (associated with the Kuramoto-Sivashinsky dynamics coming from the nonrelaxational terms) which is itself embedded in a region of constant potential (the potential plateau [18]). This plateau consists of the configurations satisfying (25). All the (unstable) TW are also contained in the same plateau, since they satisfy (25).
IV Numerical Studies of the Lyapunov Functional in the Benjamin-Feir Stable Regime
We numerically investigate the validity of the expressions (17), (18), and (23) as an approximate Lyapunov functional for the CGLE. When evaluated on solutions of (1), the functional should behave as a monotonically decreasing function of time until the system reaches the asymptotic attractor. After that, it should maintain in time a constant value characteristic of the particular attractor.
All the results reported here were obtained using a pseudo-spectral code with periodic boundary conditions and second-order accuracy in time. Spatial resolution was typically 512 modes, with runs of up to 4096 modes to confirm the results. The time step was kept small, except where stated differently in the figure captions. Since very small effects have been explored, care has been taken to confirm the invariance of the results with decreasing time step and increasing number of modes. System size, μ = 1, and the remaining parameters were fixed as stated for each run. When a random noise of a given amplitude is said to be used as, or added to, an initial condition, it means that a set of uncorrelated Gaussian numbers of zero mean and that variance was generated, one for each collocation point in the numerical lattice.
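For orientation, a minimal pseudo-spectral scheme of this type can be sketched as follows. We assume the common normalization ∂t A = A + (1+i c1) ∂xx A − (1−i c3)|A|² A with periodic boundary conditions (the paper's exact conventions and parameter values may differ); the linear substep is exact in Fourier space, the pointwise nonlinear ODE also has a closed-form solution, and the two are combined in second-order Strang splitting:

```python
import numpy as np

def cgle_step(A, dt, c1, c3, k2):
    """One Strang-split step of A_t = A + (1+i c1) A_xx - (1-i c3)|A|^2 A."""
    def nonlinear(A, h):
        rho = np.abs(A) ** 2
        # exact solution of the pointwise ODE A_t = -(1-i c3)|A|^2 A
        return A * (1.0 + 2.0 * rho * h) ** (-0.5 + 0.5j * c3)
    A = nonlinear(A, dt / 2)
    # linear part A_t = A + (1+i c1) A_xx, exact in Fourier space
    A = np.fft.ifft(np.fft.fft(A) * np.exp((1.0 - (1.0 + 1j * c1) * k2) * dt))
    return nonlinear(A, dt / 2)

N, L = 512, 512.0                      # collocation points, system size (ours)
k2 = (2 * np.pi * np.fft.fftfreq(N, d=L / N)) ** 2
rng = np.random.default_rng(0)
A = 1e-2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
for _ in range(10000):                 # evolve from small random noise
    A = cgle_step(A, 0.01, c1=2.0, c3=0.9, k2=k2)
```

The illustrative values of N, L, the time step, c1, and c3 here are ours, chosen only to show the structure of the scheme, not to reproduce any particular run of the paper.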
IV.1 Negative μ
The uniform state A = 0 is stable for μ < 0. We start our numerical simulation with a plane wave of arbitrary wavenumber and arbitrary amplitude (note that the TW's (12) do not exist for μ < 0), and calculate the potential for the evolving configurations. In order to have relevant nonlinear effects during the relaxation towards A = 0 we have chosen a small absolute value for the coefficient of the linear term. Despite the presence of non-relaxational terms in (1), the potential decreases monotonically (see Fig. 2) to the final value, confirming its adequacy as a Lyapunov potential.
IV.2 Positive μ: Benjamin-Feir stable regime
In this section we always take μ = 1. Non-chaotic (TW) states and Spatio-Temporal Intermittency are the two phases found below the BF line in Fig. 1. We first perform several numerical experiments in the non-chaotic region.
A first important case is the one on the exact-potential line, for which (22) is an exact Lyapunov functional. We take parameter values on this line and compute the evolution of the potential along a solution of (1), taking as initial condition for A a Gaussian noise of small amplitude. Despite the strong phase gradients present especially in the initial stages of the evolution, and the presence of non-relaxational terms, the potential decays monotonically in time (Fig. 3). The system evolved towards a TW attractor whose potential value follows from Eq. (20). It is important to notice that our numerical solution, with numerical evaluation of the derivatives appearing in the potential, reproduces this value within a small relative error at the last time shown in Fig. 3, and continues to approach the theoretical value for the asymptotic attractor at longer times. (If a smaller time step is used, greater accuracy is obtained, but at the price of quite a long computing time.)
We continue testing the Lyapunov functional away from the exact line. This is still in the non-chaotic region but, since the exactness condition no longer holds, the potential is not expected to be exact, but only a small-gradient approximation. We now check the relaxation back to a stable state after a small perturbation. As initial condition we slightly perturb a TW of Eckhaus-stable wavenumber by adding random noise of small amplitude. The potential decays monotonically (Fig. 4) from its perturbed value to the value of the unperturbed TW as the perturbation is washed out, as expected for a good Lyapunov functional.
A more demanding situation was investigated at other parameter values, again in the non-chaotic region. Two TW of different, Eckhaus-stable wavenumbers were joined and the resulting state (see inset in Fig. 5) was used as initial condition. The TW of smaller wavenumber advances into the other, in agreement with the idea that it is nonlinearly more stable since it gives a smaller value to the potential. As the difference between the two frequencies is large, the speed at which one wave advances onto the other is quite large. The interface between the two TW's contains initially a discontinuity in the gradient of the phase which is washed out in a few integration steps. An important observation is that during the whole process the modulus of A never vanishes, and thus the winding number, defined as
ν = (1/2π) ∫ ∂x φ dx
over the system, remains constant (with periodic boundary conditions ν is constant except at the instants at which the phase becomes singular, that is, when |A| = 0).
remains constant () (with periodic boundary conditions is constant except at the instants in which the phase becomes singular, that is when ). After the TW with the smallest wavenumber completely replaced the other, still a phase diffusion process in which the wave adjusts its local wavenumber to the global winding number occurs. The state (limit cycle) finally reached is a TW of . Despite of the complicated and non-relaxational processes occurring behaves as a good Lyapunov functional monotonously decreasing from the value corresponding to the two-wave configuration to the value of the final attractor (Fig. reffig5). It would be interesting, as happens in some relaxational models [35], finding some relationship between the speed of propagation of the more stable wave onto the less stable one and the difference in between the two states.
The good behavior of the potential will obviously be lost if the field vanishes somewhere during the evolution. As the next numerical experiment, at parameter values in the non-chaotic region away from the exact line, we used as initial condition a small random Gaussian noise. The system was left to evolve towards its asymptotic state (a TW). Fig. 6 shows that after a transient the potential decreases monotonically. During the initial transient it fluctuates widely, increasing and decreasing, and thus loses its validity as a Lyapunov functional. This incorrect behavior occurs because during the initial stages |A| is small and often vanishes, changing the winding number ν. When |A| vanishes, the phase and (18) are ill-defined and out of the range of validity of a small-gradient approximation. Note the contrast with the exact case, in which the potential is well behaved even when A is strongly changing. The particular values of the maxima and minima during the transient in which ν is changing depend on the spatial and temporal discretization, since it is clear from (18) that the potential is ill-defined or divergent when |A| vanishes. Note that this incorrect behavior for vanishing |A| is not a problem for the existence of a Lyapunov functional; it comes rather from the limited validity of the hypotheses used for its approximate construction. Nevertheless, as soon as the strong gradients disappear, the potential relaxes monotonically to the value corresponding to the final state, a TW.
As another test in the non-chaotic region, we use as initial condition an Eckhaus-unstable TW slightly perturbed by noise. The system evolves to an Eckhaus-stable TW by decreasing its winding number. Fig. 7 shows the evolution of the potential from its initial value to the final one. Although there is a monotonically decreasing baseline, sharp peaks are observed corresponding to the vanishing of |A| associated with the changes in ν. When ν finally stops changing, so that the configuration is close enough to the final TW, the potential relaxes monotonically as in Fig. 4.
It was explained in Sect. III that there are parameter ranges in which the potential is smaller near the boundaries of existence of TW, that is, near the maximal wavenumber, than for the homogeneous TW. The corresponding function is shown in Fig. 8. If this prediction were true, and if the expression were a correct Lyapunov functional, evolution starting from one of these extreme, Eckhaus-unstable TW would not lead to any final TW, since this would increase the value of the Lyapunov functional. This would imply the existence, for these values of the parameters, of an attractor different from the TW's, perhaps related to the Spatio-Temporal Intermittency phenomenon. We use as initial condition at the parameter values of Fig. 8 an unstable TW of large wavenumber, slightly perturbed by noise. According to Fig. 8, the system should then evolve to a state with a value of the potential even lower than the initial one. What really happens can be seen in Fig. 9. The system changes its winding number away from the initial value, a process during which the potential fluctuates widely and is not a correct Lyapunov functional, and ends up in a state with a value of the potential larger than the initial one. After this the system relaxes to the associated stable TW. As clearly stated by Graham and coworkers, the expressions for the potential are only valid for small gradients. Since the wavenumber is a phase gradient, results such as Fig. 8 can only be trusted for small enough q.
Finally, we show the behavior of the potential in the Spatio-Temporal Intermittency regime. Since ν is constantly changing in this regime, it is clear that (18) will not be a good Lyapunov functional, and this simulation is included only for completeness. We choose as initial condition a TW of large wavenumber, with a small amount of noise added. The TW decreases its winding number and the system soon reaches the disordered regime called Spatio-Temporal Intermittency. Fig. 10 shows that the time evolution of the potential is plagued with divergences, reflecting the fact that ν is constantly changing (see inset). It is interesting to observe, however, that during the initial escape from the unstable TW the potential shows a decreasing tendency, and that its average value in the chaotic regime, excluding the divergences, seems smaller than the initial one.
V Numerical Studies of the Lyapunov Functional in the Phase Turbulence Regime
The Phase Turbulence regime is characterized by the absence of phase singularities (thus ν is constant). This distinguishes it as the only chaotic regime for which the potential is well-defined. Graham and co-workers [14, 30] derived especially for this region the expression (23), proposed as a Lyapunov functional in the small-gradient approximation.
We recall that the calculations in [14, 30] predict that the phase turbulent attractor lies on a potential plateau, consisting of all the complex functions satisfying (25), in which all the unstable TW cycles are also embedded. The value of the potential on such a plateau can be easily calculated by substituting an arbitrary TW in (23).
We note that this value does not depend on the parameter controlling the vertical position in the diagram of Fig. 1, so it is uniform across the phase turbulence region.
In this section we also take μ = 1. We perform different simulations at fixed parameter values inside the phase turbulence region. In the first one, we start the evolution with the homogeneous oscillation solution (the TW of zero wavenumber). This solution is linearly unstable, but since no perturbation is added, the system does not escape from it. The potential value predicted by (27) is reproduced by the numerical simulation up to the sixth significant figure for all times (Fig. 11, solid line). This agreement, and the fact that the unstable TW is maintained, gives confidence in our numerical procedure.
In a second simulation, a smooth perturbation (of the form with and ) is added to the unstable TW and the result is used as initial condition. This choice of perturbation was made to remain as far as possible within the range of validity of the small-gradient hypothesis. After a transient the perturbation grows and the TW is replaced by the phase turbulence state (the winding number remains fixed at ). The corresponding evolution of is shown in Fig. 11 (long-dashed line). The value of the potential increases from to a higher value, and then oscillates irregularly around it. Both the departure and the fluctuations are very small, of the order of times the value of . Simulations with higher precision confirm that these small discrepancies from the theoretical predictions are not an artifact of our numerics, but should be attributed to the higher-gradient terms not included in (23). In conclusion, the prediction that the phase turbulence dynamics, driven by non-relaxational terms, keeps constant at a value equal to that of the TW is confirmed with great accuracy.
It is interesting, however, to study how systematic the small deviations from the theory are. To this end we repeat the launching of the TW with a small perturbation for several values of , at the same value of as before. The prediction is that should be independent of . The inset in Fig. 11 shows that the theoretical value is attained near the BF line, and that as is increased away from the BF line there are very small but systematic discrepancies. The values shown for the potential are time averages of its instantaneous values, and the error bars denote the standard deviation of the fluctuations around the average.
Again for , , we perform another simulation (Fig. 11, short-dashed line) consisting in starting the system in a random Gaussian noise configuration, of amplitude , and letting it evolve towards the phase turbulence attractor. As in other cases, there is a transient in which is ill-defined since the winding number is constantly changing. After this, decreases. This decrease is not monotonic but presents small fluctuations around a decreasing trend. The decrease finally stops, and remains oscillating around approximately the same value as obtained from the perturbed TW initial condition. The final state has , so that the attractor reached is in fact different from the one in the previous runs (); but the difference is the smallest possible, and the difference in the values of the associated potentials cannot be distinguished within the fluctuations of Fig. 11. These observations confirm the idea of a potential which decreases as the system advances towards an attractor and remains constant there, but at variance with the cases in the non-chaotic region, here the decrease is not perfectly monotonic, and the final value is only approximately constant.
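As an aside, the winding number monitored in these runs can be extracted directly from the discretized field; a minimal sketch (assuming the field A is sampled on a periodic grid, as in the integrator sketched above) is:

import numpy as np

def winding_number(A):
    # Sum the phase increments around the periodic ring; each increment
    # is taken in (-pi, pi], so the total is 2*pi times an integer.
    dphi = np.angle(np.roll(A, -1) / A)
    return int(round(dphi.sum() / (2.0 * np.pi)))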
Since the small discrepancies with the theory increase far from the BF line, and since it is known that condition (25) can be obtained from an adiabatic following of the modulus to the phase that loses accuracy far from the BF line, one is led to consider the role of adiabatic following in the validity of as a potential. To this end we evaluated along trajectories constructed with the phase obtained from solutions of (1), but with the modulus replaced by (25), thus enforcing the adiabatic following of the modulus to the phase. No significant improvement was obtained with respect to the cases in which the adiabatic following was not enforced, since adiabatic following was in fact already quite well satisfied by the solutions of (1). Thus it is not the failure of the solutions of (1) to fulfill (25) exactly, but the absence of higher-gradient terms in both (25) and (23), that is responsible for the small failures in the behavior of .
Finally, it is interesting to show that the Lyapunov potential can be used as a diagnostic tool for detecting changes in behavior that would be difficult to monitor by observing the complete state of the system. For example, the time at which the phase turbulence attractor is reached can be readily identified from the time behavior of in Fig. 11. More interestingly, it can be used to detect the escape from metastable states. For example, Fig. 12 shows the evolution of from a Gaussian noise initial condition (), for and (, ). The system first reaches a long-lived state with not too different from the usual phase turbulent state of . After a long time, however, the system leaves this metastable state and approaches a more ordered state that can be described [36] as phase turbulent fluctuations around quasiperiodic configurations related to those of [28]. More details about this state will be given elsewhere [36]. What is of interest here is that from Fig. 12 one can easily identify the changes between the different dynamical regimes. In particular, the decrease in the fluctuations of near identifies the jump from the first to the second turbulent regime.
VI Conclusions and Outlook
The validity of the expressions for the Lyapunov functional of the CGLE found by Graham and coworkers has been numerically tested. The most important limitation is that they were explicitly constructed in an approximation limited to small gradients of modulus and phase. This precludes their use for evolution on attractors on which zeros of the field, and thus phase singularities, appear (defect turbulence, bi-chaos, spatio-temporal intermittency). The same problem applies to transient states of evolution towards more regular attractors, if phase singularities appear during this transient (for instance the decay of an Eckhaus-unstable TW, evolution from random states close to , etc.). A major step forward would be the calculation of the Lyapunov potential for small gradients of the real and imaginary components of , which would be a well-behaved expansion despite the presence of phase singularities.
Apart from this, if changes in winding number are avoided, expressions (17), (18), and (23) display the correct properties of a Lyapunov functional: minima on stable attractors, where the non-relaxational dynamics maintains it at a constant value, and a decreasing value during the approach to the attractor. These properties are completely satisfied in the non-chaotic region of parameter space, even in complex situations such as TW competition, as long as large gradients do not appear. It is remarkable that, although the potential is constructed through an expansion around the TW, its minima identify exactly the remaining TW and their stability, and the non-relaxational terms calculated by subtracting the potential terms from (1) give exactly their frequencies. In the phase turbulence regime, however, there are small discrepancies with respect to the theoretical predictions: lack of monotonicity in the approach to the attractor, small fluctuations around the asymptotic value, and a small discrepancy between the values of the potential for TW's and for turbulent configurations, which were predicted to be equal. All these deviations are very small but systematic, and grow as we go deeper into the phase turbulence regime. They could in principle be fixed by calculating more terms in the gradient expansion.
In addition, in order to clarify the conceptual status of non-relaxational and non-potential dynamical systems, one can ask about the utility of having approximate expressions for the Lyapunov functional of the CGLE. Several applications have already been developed for the case in which (1) is perturbed with random noise. In particular, the stationary probability distribution is directly related to , and barriers and escape times from metastable TW have been calculated [12, 30]. In the absence of random noise, should still be useful for establishing the nonlinear stability of the different attractors. In practice, however, there will be limitations on the validity of the predictions, since has been constructed in an expansion which is safe only near one particular attractor (the homogeneous TW).
Once is known, powerful statistical-mechanics techniques (mean field, renormalization group, etc.) can in principle be applied to it to obtain information on the static properties of the CGLE (the dynamical properties, such as time-correlation functions, would also depend on the non-relaxational terms, as in critical dynamics [15]). Zero-temperature Monte Carlo methods can also be applied to sample the phase turbulent attractors, as an alternative to following the dynamical evolution on them. All these promising developments will first have to contend with the complexity of Eqs. (17), (18), and (23). Another use of Lyapunov potentials (the one most exploited in equilibrium thermodynamics) is the identification of attractors by minimization instead of by solving the dynamical equations. In the case of the TW attractors, solving the Euler-Lagrange equations for the minimization of is in fact more complex than solving the CGLE directly with a TW ansatz. But the limit-cycle character of the attractors, and their specific form, is then derived, not guessed as when substituting the TW ansatz. For the case of chaotic attractors (as in the phase turbulence regime), minimization of potentials can provide a step towards the construction of inertial manifolds. In this respect it would be useful to consider the relationships between the Lyapunov potential of Graham and coworkers and other objects based on functional norms that are also used to characterize chaotic attractors [37, 38].
VII Acknowledgments
We acknowledge very helpful discussions on the subject of this paper with R. Graham. We also acknowledge helpful inputs of E. Tirapegui and R. Toral on the general ideas of nonequilibrium potentials. RM and EHG acknowledge financial support from DGYCIT (Spain) Project PB92-0046. R.M. also acknowledges partial support from the Programa de Desarrollo de las Ciencias Básicas (PEDECIBA, Uruguay), the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICYT, Uruguay) and the Programa de Cooperación con Iberoamérica (ICI, Spain).
• [1] M.C. Cross and P.C. Hohenberg, Rev. Mod. Phys. 65, 851 (1993), and references therein.
• [2] W. van Saarloos and P. Hohenberg, Physica D 56, 303 (1992).
• [3] P. Kolodner, Phys. Rev. E 50, 2731 (1994).
• [4] P. Coullet, L. Gil and F. Roca, Opt. Comm. 73, 403 (1989).
• [5] Y. Kuramoto and S. Koga, Progr. Theor. Phys. Suppl. 66, 1081 (1981).
• [6] B.I. Shraiman, A. Pumir, W. van Saarloos, P.C. Hohenberg, H. Chaté and M. Holen, Physica D 57, 241 (1992).
• [7] H. Chaté, Nonlinearity 7, 185 (1994).
• [8] H. Chaté, in Spatiotemporal Patterns in Nonequilibrium Complex Systems (Addison-Wesley, New York, 1994), "Santa Fe Institute Studies in the Sciences of Complexity".
• [9] in New Trends in Nonlinear Dynamics: Nonvariational Aspects (North-Holland, Estella, Spain, 1991), appeared in Physica D 61.
• [10] R. Graham and T. Tél, Europhys. Lett. 13, 1715 (1990).
• [11] R. Graham and T. Tél, Phys. Rev. A 42, 4661 (1990).
• [12] R. Graham and T. Tél, in Instabilities and Nonequilibrium Structures III, edited by E. Tirapegui and W. Zeller (Reidel, Dordrecht, 1991), p. 125.
• [13] O. Descalzi and R. Graham, Phys. Lett. A 170, 84 (1992).
• [14] O. Descalzi and R. Graham, Z. Phys. B 93, 509 (1994).
• [15] P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. 49, 435 (1977).
• [16] J. D. Gunton, M. San Miguel and P. Sahni, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz (Academic, London, 1983), Vol. 8.
• [17] R. Graham, in Theory of Continuous Fokker-Planck Systems, Vol. 1 of Noise in Nonlinear Dynamical Systems, edited by F. Moss and P. V. E. McClintock (Cambridge University, Cambridge, 1989), p. 225.
• [18] R. Graham, in XXV Years of Nonequilibrium Statistical Mechanics, Vol. 446 of Lecture Notes in Mathematics, edited by L. Brey, J. Marro, M. Rubí and M. San Miguel (Springer, Berlin, 1995).
• [19] M. Caponeri and S. Ciliberto, Phys. Rev. Lett. 64, 2775 (1990).
• [20] M. Caponeri and S. Ciliberto, Physica D 58, 365 (1992).
• [21] H. Calisto, E. Cerdá and E. Tirapegui, J. of Stat. Phys. 69, 1115 (1992).
• [22] M. San Miguel and F. Sagués, Phys. Rev. A 36, 1883 (1987).
• [23] P. Szépfalusy and T. Tél, Phys. Rev. A 112, 146 (1982).
• [24] S. Rica and E. Tirapegui, Phys. Rev. Lett. 64, 878 (1990).
• [25] T. B. Benjamin and J. E. Feir, J. Fluid Mech. 27, 417 (1967).
• [26] A. Newell, Appl. Math. 15, 157 (1974).
• [27] W. Eckhaus, Studies in nonlinear stability theory (Springer, Berlin, 1965).
• [28] B. Janiaud, A. Pumir, D. Bensimon, V. Croquette, H. Richter and L. Kramer, Physica D 55, 269 (1992).
• [29] D. Egolf and H. Greenside, Phys. Rev. Lett. 74, 1751 (1995).
• [30] O. Descalzi, Ph.D. thesis, Univ. Essen, 1993.
• [31] R. Graham, in Fluctuations, Instabilities and Phase Transitions, edited by T. Riste (Plenum, New York, 1975), p. 270.
• [32] D. Walgraef, G. Dewel and P. Borckmans, Adv. Chem. Phys. 49, 311 (1982).
• [33] D. Walgraef, G. Dewel and P. Borckmans, J. Chem. Phys. 78, 3043 (1983).
• [34] H. Sakaguchi, Prog. Theor. Phys. 84, 792 (1990).
• [35] S. Chan, J. Chem. Phys. 67, 5755 (1977).
• [36] R. Montagne, E. Hernández-García and M. San Miguel, to be published.
• [37] C. R. Doering, J. D. Gibbon, D. D. Holm and B. Nicolaenko, Nonlinearity 1, 279 (1988).
• [38] M. Bartuccelli, P. Constantin, C. R. Doering, J. D. Gibbon and M. Gisselfält, Physica D 44, 421 (1990).
Figure 1: Regions of the parameter space () for the CGLE displaying different kinds of regular and chaotic behavior. Two analytically obtained lines, the Benjamin-Feir line (B-F line) and the line, are also shown.
Figure 2: Relaxation to the simple attractor for . The parameter values are . The initial condition is a TW of arbitrary wavenumber and arbitrary amplitude .
Figure 3: Time evolution of on the line. The parameter values are . The initial condition is a Gaussian noise of amplitude . The system evolved towards a TW attractor of wavenumber .
Figure 4: Time evolution of in the non-chaotic region for . The initial condition is an Eckhaus stable TW of wavenumber perturbed by random noise of small amplitude .
Figure 5: Same as Fig. 4 but for . The initial condition for consists of two Eckhaus stable TW of different wavenumbers () joined together. The inset shows the real part of this initial configuration.
Figure 6: Same as Fig. 4 but for . The initial condition is a random noise of amplitude .
Figure 7: Same as Fig. 4 but for . The initial condition is an Eckhaus-unstable TW () slightly perturbed by noise.
Figure 8: The function as a function of . The parameter values are . The values of are indicated by dashed lines. The diamond indicates the point taken as initial condition for the simulation in Fig. 9.
Figure 9: Time evolution of for . The initial condition is an Eckhaus-unstable TW () slightly perturbed by noise.
Figure 10: Time evolution of in the STI region (). The initial condition is an Eckhaus-unstable TW () slightly perturbed by noise. The winding number evolution is plotted in the inset.
Figure 11: Time evolution of in the Phase Turbulence region (). Solid line: evolution of an unperturbed unstable traveling wave. Dotted line: evolution from noise. Dashed line: evolution from a slightly perturbed traveling wave. The inset shows the final average values of as a function of the parameter (). The error bars indicate the standard deviation of the fluctuations around the average value.
Figure 12: Same as Fig. 11 but for . The initial condition was random noise with an amplitude ; time step 0.005. In this case 2048 Fourier modes were taken into account. Note the transition, occurring around , to a less fluctuating state.
416d4fde13cb497b | 6. Randell L. Mills, Exact Classical Quantum‐Mechanical Solutions for One‐ through Twenty‐Electron Atoms
$25.00 each
Volume 18: Pages 321-361, 2005
Exact Classical Quantum-Mechanical Solutions for One- through Twenty-Electron Atoms
Randell L. Mills
BlackLight Power Inc., 493 Old Trenton Road, Cranbury, New Jersey 08512 U.S.A.
It is true that the Schrödinger equation can be solved exactly for the hydrogen atom, although it is not true that the result is the exact solution of the hydrogen atom. Electron spin is missed entirely, and there are many internal inconsistencies and nonphysical consequences that do not agree with experimental results. The Dirac equation does not reconcile this situation. Many additional shortcomings arise, such as instability to radiation, negative kinetic energy states, intractable infinities, virtual particles at every point in space, the Klein paradox, violation of Einstein causality, and “spooky” action at a distance. Despite its successes, quantum mechanics (QM) has remained mysterious to all who have encountered it. Starting with Bohr and progressing into the present, the departure from intuitive physical reality has widened. The connection between QM and reality is more than just a “philosophical” issue. It reveals that QM is not a correct or complete theory of the physical world and that inescapable internal inconsistencies and incongruities arise when attempts are made to treat it as a physical as opposed to a purely mathematical “tool.” Some of these issues are discussed in a review by Laloë [Am. J. Phys. 69, 655 (2001)]. But QM has severe limitations even as a tool. Beyond one-electron atoms, multielectron-atom quantum-mechanical equations cannot be solved except by approximation methods involving adjustable-parameter theories (perturbation theory, variational methods, self-consistent field method, multiconfiguration Hartree-Fock method, multiconfiguration parametric potential method, 1/Z expansion method, multiconfiguration Dirac-Fock method, electron correlation terms, QED terms, etc.), all of which contain assumptions that cannot be physically tested and are not consistent with physical laws. In an attempt to provide some physical insight into atomic problems and starting with the same essential physics as Bohr of the electron moving in the Coulombic field of the proton and the wave equation as modified after Schrödinger, a classical approach was explored, yielding a model that is remarkably accurate and provides insight into physics on the atomic level [R.L. Mills, Phys. Essays 16, 433 (2003); 17, 342 (2004); The Grand Unified Theory of Classical Quantum Mechanics (BlackLight Power, Inc., Cranbury, NJ, 2005)]. Physical laws and intuition are restored when dealing with the wave equation and quantum-mechanical problems. Specifically, a theory of classical quantum mechanics (CQM) was derived from first principles that successfully applies physical laws on all scales. Rather than using the postulated Schrödinger boundary condition “Ψ → 0 as r → ∞,” which leads to a purely mathematical model of the electron, the constraint is based on experimental observation. Using Maxwell's equations, the classical wave equation is solved with the constraint that the bound (n = 1)-state electron cannot radiate energy. The electron must be extended rather than a point. On this basis, with the assumption that physical laws including Maxwell's equations apply to bound electrons, the hydrogen atom was solved exactly from first principles. The remarkable agreement across the spectrum of experimental results indicates that this is the correct model of the hydrogen atom. In this paper the physical approach was applied to multielectron atoms that were solved exactly, disproving the deep-seated view that such exact solutions cannot exist according to QM.
The general solutions for one- through twenty-electron atoms are given. The predictions of the ionization energies are in remarkable agreement with the experimental values known for 400 atoms and ions.
Keywords: Maxwell's equations, nonradiation, quantum theory, special and general relativity, ionization energies, one through twentyelectron atom solutions
Received: April 28, 2004; Published online: December 15, 2008 |
a80acc9d1061aae0 |
Atmospheric reentry
Atmospheric reentry is the process by which vehicles that are outside the atmosphere of a planet can enter that atmosphere and reach the planetary surface intact. Vehicles that undergo this process include spacecraft from orbit, as well as suborbital ballistic missile reentry vehicles. Typically this process requires special methods to protect against aerodynamic heating. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities.
Mars Exploration Rover (MER) aeroshell, artistic rendition.
Apollo Command Module flying at a high angle of attack for lifting entry, artistic rendition.
The technology of atmospheric reentry was a consequence of the Cold War. Ballistic missiles and nuclear weapons were legacies of World War II left to both the Soviet Union and the United States. Both nations initiated massive research and development programs to further the military capability of those technologies. However, before a missile-delivered nuclear weapon could become practical, an essential ingredient was missing: atmospheric reentry technology. In theory, the nation that first developed reentry technology would have a decisive military advantage, yet it was unclear whether the technology was physically possible. Basic calculations showed that the kinetic energy of a nuclear warhead returning from orbit was sufficient to completely vaporize the warhead. Despite these calculations, the military stakes were so high that simply assuming reentry to be impossible was unacceptable; moreover, meteorites were known to successfully reach ground level. Consequently a high-priority program was initiated to develop reentry technology. Atmospheric reentry was successfully developed, which made possible nuclear-armed intercontinental ballistic missiles.
The technology was further pushed forward for human use by another consequence of the Cold War. The Soviet Union saw a propaganda and military advantage in pursuing space exploration. To the embarrassment of the United States, the Soviet Union orbited an artificial satellite, followed by a series of other technological firsts that culminated with a Soviet cosmonaut orbiting the Earth and returning safely to Earth. Many of these achievements were enabled through atmospheric reentry technology. The United States saw the Soviet Union's achievements as a challenge to its national pride as well as a threat to national security. Consequently the United States followed the Soviet Union's initiative and increased its nascent Space Program thus beginning the Space Race.
Terminology, definitions and jargon
For more information, see: glossary of atmospheric reentry.
Over the decades since the 1950s, a rich technical jargon has grown around the engineering of vehicles designed to enter planetary atmospheres. It is recommended that the reader review the jargon glossary before continuing with this article on atmospheric reentry.
Blunt body entry vehicles
Various reentry shapes (NASA)
These four shadowgraph images represent early reentry-vehicle concepts. A shadowgraph is a process that makes visible the disturbances occurring in a fluid flow at high velocity: light passing through the flowing fluid is refracted by the density gradients in the fluid, resulting in bright and dark areas on a screen placed behind the fluid.
H. Julian Allen and A. J. Eggers, Jr. of the National Advisory Committee for Aeronautics (NACA) made the counterintuitive discovery in 1952 that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient, i.e. the greater the drag, the less the heat load. Through making the reentry vehicle blunt, the shock wave and heated shock layer were pushed forward, away from the vehicle's outer wall. Since most of the hot gases were not in direct contact with the vehicle, the heat energy would stay in the shocked gas and simply move around the vehicle to later dissipate into the atmosphere.
The Allen and Eggers discovery, though initially treated as a military secret, was eventually published in 1958.[1] The Blunt Body Theory made possible the heat shield designs that were embodied in the Mercury, Gemini and Apollo space capsules, enabling astronauts to survive the fiery reentry into Earth's atmosphere.
Entry vehicle shapes
There are several basic shapes used in designing entry vehicles:
Sphere or spherical section
The simplest axisymmetric shape is the sphere or spherical section. This can be either a complete sphere or a spherical-section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay-Riddell equation.[2] The static stability of a spherical section is assured if the vehicle's center-of-mass is upstream from the center-of-curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle-of-attack, a spherical section has modest aerodynamic lift, thus providing some cross-range capability and widening its entry corridor. In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics (CFD) was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, manned capsules of that era were based upon the spherical section. Pure spherical entry vehicles were used in the early Soviet Vostok program. The most famous example of a spherical-section entry vehicle was the Apollo Command Module (Apollo-CM), which used a spherical-section forebody heat shield with a converging conical afterbody. The Apollo-CM (AS-501) flew a lifting entry with a hypersonic trim angle-of-attack of −27° (0° is blunt-end first) to yield an average L/D of 0.368.[3] This angle-of-attack was achieved by precisely offsetting the vehicle's center-of-mass from its axis-of-symmetry. Other examples of the spherical-section geometry in manned capsules are Soyuz/Zond, Gemini and Mercury.
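As an illustration of how tractable Newtonian impact theory is, the sketch below numerically integrates the Newtonian pressure coefficient Cp = 2 sin²θ (θ being the local flow-deflection angle) over a hemispherical forebody and recovers the classical Newtonian drag coefficient of 1.0 for a spherical section. The short script is purely illustrative, not taken from any design code.

import numpy as np

phi = np.linspace(0.0, np.pi / 2, 2001)  # surface-normal angle from the axis
Cp = 2.0 * np.cos(phi)**2                # Newtonian Cp (sin(theta) = cos(phi))
# Axial force: integrate Cp times the axial component of the surface normal
# over the forebody (dA = 2*pi*R^2 sin(phi) dphi), normalized by the frontal
# area pi*R^2; trapezoid rule below.
integrand = 2.0 * Cp * np.cos(phi) * np.sin(phi)
dphi = phi[1] - phi[0]
Cd = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dphi)
print(Cd)    # ~1.0, the textbook Newtonian value for a hemisphere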
Sphere-cone
Galileo Probe during final assembly
The sphere-cone is a spherical section with a frustum attached. The sphere-cone's dynamic stability is typically better than that of a spherical section. With a sufficiently small half-angle and a properly placed center-of-mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. The original American sphere-cone aeroshell was the Mk-2 RV, developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are described later in this article). The Mk-2 had significant defects as a weapon delivery system: it loitered too long in the upper atmosphere due to its low ballistic coefficient, and it trailed a stream of vaporized metal that made it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently an alternative sphere-cone RV to the Mk-2 was developed by General Electric.
Mk-6 RV, Cold War weapon and ancestor to most of NASA's entry vehicles
This new RV was the Mk-6, which used a non-metallic ablative TPS (nylon phenolic). This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible. However, the Mk-6 was a huge RV with an entry mass of 3360 kg, a length of 3.1 meters and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller, with a further reduced bluntness ratio compared to the Mk-6. Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs, with typical half-angles between 10° and 11°.
"Discoverer" type reconnaissance satellite film Recovery Vehicle (RV)
Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space, e.g. the Stardust probe. Unlike military RVs, space exploration entry vehicles retained the blunt body's advantage of lower TPS mass, e.g. the Galileo Probe with a half-angle of 45° or the Viking aeroshell with a half-angle of 70°. Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter and Titan.
Biconic
The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0, compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell-Douglas Corp. and represented a quantum leap in RV sophistication. Three of the AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981. AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward-frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, an aft-frustum half-angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published.[4]
The middle vehicle in an artistic rendition for the X-33 proposal was derived from AMaRV.
AMaRV's attitude was controlled through a split body flap (also called a "split-windward flap") along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled-up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33. Amongst aerospace engineers, AMaRV has achieved legendary status alongside such technological marvels as the SR-71 Blackbird and the N-1 rocket.
Non-axisymmetric shapes
Non-axisymmetric shapes have been used for manned entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent much like a conventional glider. This approach has been used by the American Space Shuttle and the Soviet Buran. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle.
The FIRST (Fabrication of Inflatable Re-entry Structures for Test) system was an Aerojet proposal for an inflated-spar Rogallo wing made of Inconel wire cloth impregnated with silicone rubber and silicon carbide dust. FIRST was proposed in both one-man and six-man versions, to be used for emergency escape and reentry of stranded space station crews. It was based on an earlier unmanned test program that resulted in a partially successful reentry flight from space: the launcher's nose cone fairing hung up on the material, dragging it too low and fast for the TPS, but the concept otherwise appears to have been workable; even with the fairing dragging it, the test article flew stably on reentry until burn-through.
The proposed MOOSE system would have used a one-man inflatable ballistic capsule as an emergency astronaut entry vehicle. This concept was carried further by the Douglas Paracone project. While these concepts were unusual, the inflated shape on reentry was in fact axisymmetric.
Shock layer gas physics
An approximate rule-of-thumb used by heat shield designers for estimating peak shock layer temperature is to assume the air temperature in Kelvin to be equal to the entry speed in meters per second. For example, a spacecraft entering the atmosphere at 7.8 km/s would experience a peak shock layer temperature of 7800 K. This method of estimation is a mathematical accident and a consequence of peak heat flux for terrestrial entry typically occurring around 60 km altitude.
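Expressed as code, the rule of thumb is a one-liner (purely illustrative):

def peak_shock_layer_temp_K(entry_speed_m_per_s):
    # rule of thumb: temperature in kelvin ~ entry speed in m/s
    return entry_speed_m_per_s

print(peak_shock_layer_temp_K(7.8e3))   # -> 7800.0, i.e. ~7800 K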
It is clear that 7800 K is incredibly hot (the surface of the sun, or photosphere, is only 6000 K). For such high temperatures, the air in the shock layer will break down chemically (dissociate) and also become ionized. This chemical dissociation necessitates various physical models to describe the air's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields:
Perfect gas model
Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135.[5] Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft.
Perfect gas theory is elegant and extremely useful for designing aircraft but assumes the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than 550 K at one atmosphere pressure. Perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than 2000 K. For temperatures greater than 2000 K, a heat shield designer must use a real gas model.
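A short worked example shows why. Using the standard perfect-gas isentropic stagnation-temperature relation T0 = T (1 + (gamma - 1)/2 * M^2) from NACA Report 1135, with illustrative free-stream conditions near 60 km:

gamma, R_air = 1.4, 287.0           # perfect diatomic air
T_inf, V = 220.0, 7.8e3             # K and m/s (illustrative entry conditions)
a = (gamma * R_air * T_inf) ** 0.5  # speed of sound, ~297 m/s
M = V / a                           # Mach ~26
T0 = T_inf * (1.0 + 0.5 * (gamma - 1.0) * M**2)
print(M, T0)                        # T0 ~ 3e4 K

The perfect-gas answer (~30,000 K) is roughly four times the ~7800 K rule-of-thumb value, because in reality much of the kinetic energy goes into dissociating and ionizing the air rather than raising its temperature; this is exactly the failure that motivates the real-gas models below.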
Real (equilibrium) gas model
The real gas equilibrium model is normally taught to aeronautical engineers studying towards a master's degree. Not surprisingly, it is a common error for a bachelor's-level engineer to incorrectly apply perfect-gas theory to a hypersonic design. An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo-CM and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modeling. The Apollo-CM's trim angle-of-attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic center of the Columbia was upstream from the calculated value due to real-gas effects. On Columbia’s maiden flight (STS-1), astronauts John W. Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle.
An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes that all chemical reactions have had time to complete and that all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions (contrary to myth, friction is not the main cause of shock-layer heating). The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called the shock wave standoff distance. An approximate rule of thumb for the standoff distance is 0.14 times the nose radius. One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free-stream velocity of 7.8 km/s and a nose radius of 1 meter: the time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium, enabling an equilibrium model to be used there. Even in this case, most of the shock layer between the shock wave and the leading edge of the entry vehicle is chemically reacting and not in a state of equilibrium. The Fay-Riddell equation, which is of extreme importance for modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. It should be emphasized that the time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. For example, in the case of the Galileo Probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given that the free-stream velocity was 39 km/s during peak heat flux).
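The estimate above can be reproduced directly (values from the paragraph):

R_n = 1.0                  # m, nose radius
V = 7.8e3                  # m/s, free-stream velocity
standoff = 0.14 * R_n      # rule of thumb for shock standoff distance
t_travel = standoff / V
print(standoff, t_travel)  # 0.14 m and ~1.8e-5 s, i.e. about 18 microseconds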
Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than under a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called the "isentropic exponent", "adiabatic index", "gamma" or "kappa") is assumed to be constant, along with the gas constant. For a real gas, the ratio of specific heats can vary wildly as a function of temperature. Under a perfect gas model there is an elegant set of equations, called the isentropic chain, for determining the thermodynamic state along a constant-entropy streamline. For a real gas, the isentropic chain is unusable, and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete; modern heat shield designers use computer programs based upon a digital lookup table (another form of Mollier diagram) or upon a chemistry-based thermodynamics program. The chemical composition of a gas in equilibrium at fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton-Raphson method is the usual numerical scheme). The database for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA), which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases but unusable beyond 20,000 K (double ionization is not modeled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler.
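To make the Gibbs-minimization idea concrete, here is a deliberately tiny sketch for a single dissociation equilibrium N2 <-> 2N at fixed temperature and pressure. The mu0 values are placeholder numbers, not real thermochemical data (a production code such as CEA derives them from spectroscopic partition functions), so only the structure of the calculation should be taken seriously:

import numpy as np
from scipy.optimize import minimize

R, T, P = 8.314, 6000.0, 1.0          # J/(mol K), K, atm (reference P0 = 1 atm)
mu0 = {"N2": -1.5e5, "N": 2.0e5}      # J/mol, hypothetical standard potentials

def gibbs(n):                          # n = [moles N2, moles N]
    n = np.clip(n, 1e-12, None)
    x = n / n.sum()                    # mole fractions
    return (n[0] * (mu0["N2"] + R * T * np.log(x[0] * P))
          + n[1] * (mu0["N"]  + R * T * np.log(x[1] * P)))

atoms = {"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - 2.0}  # N-atom balance
res = minimize(gibbs, x0=[0.9, 0.2], method="SLSQP",
               bounds=[(1e-12, None)] * 2, constraints=[atoms])
print(res.x)                           # equilibrium mole numbers of N2 and N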
Real (non-equilibrium) gas model
A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill-Freeman model.[6][7] The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical formula and its reverse, e.g. N2 → N + N and N + N → N2 (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool but is unfortunately too simple for modeling non-equilibrium air. Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five species model which is based upon N2, O2, NO, N and O. The five species model assumes no ionization and ignores trace species like carbon dioxide.
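As a pedagogical illustration of the Lighthill-Freeman model just mentioned, the sketch below integrates its single rate equation for the dissociation fraction alpha at fixed density and temperature. The characteristic temperature and density are the usual textbook values for nitrogen, while the rate constant C is a rough placeholder; none of these numbers should be used for design:

import numpy as np
from scipy.integrate import solve_ivp

theta_d = 113200.0     # K, characteristic dissociation temperature of N2
rho_d = 1.3e5          # kg/m^3, characteristic density for N2
C = 1.0e14             # rate constant, placeholder value
rho, T = 1e-3, 7000.0  # shock-layer density and temperature (illustrative)

def rate(t, y):
    a = y[0]           # dissociation fraction
    return [C * rho * ((1.0 - a) * np.exp(-theta_d / T)
                       - (rho / rho_d) * a**2)]

sol = solve_ivp(rate, (0.0, 1e-3), [0.0], max_step=1e-5)
print(sol.y[0, -1])    # alpha relaxing toward its equilibrium value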
When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are loosely coupled, the system is numerically "stiff" and difficult to solve. The five species model is only usable for entry from low Earth orbit where entry velocity is approximately 7.8 km/s. For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five species model is no longer accurate and a twelve species model must be used instead. High speed Mars entry which involves a carbon dioxide, nitrogen and argon atmosphere is even more complex requiring a 19 species model.
An important aspect of modeling non-equilibrium real-gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius, then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from unsymmetric diatomic molecules, e.g. cyanogen (CN), carbon monoxide, nitric oxide (NO), singly ionized molecular nitrogen, etc. These molecules are formed by the shock wave dissociating ambient atmospheric gas followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy, i.e. radiative heat flux. The whole process takes place in less than a millisecond, which makes modeling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes) along with theoretical calculation through the unsteady Schrödinger equation are among the more esoteric aspects of aerospace engineering. Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s but was largely discontinued after the conclusion of the Apollo Program. Radiative heat flux in air was understood just well enough to ensure Apollo's success. However, radiative heat flux in carbon dioxide (Mars entry) is still barely understood and will require major research.
Frozen gas model
The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" is misleading. A frozen gas is not "frozen" like ice is frozen water. Rather a frozen gas is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules. If gas pressure is slowly reduced such that chemical reactions can continue then the gas can remain in equilibrium. However it is possible for gas pressure to be so suddenly reduced that almost all chemical reactions stop. For that situation the gas is considered frozen.
The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed-of-sound, viscosity, etc.) for the same thermodynamic state, e.g. pressure and temperature. Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modeling the flow in the wake of an entry vehicle is very difficult. TPS heating in the vehicle's afterbody is usually not very high but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability.
Thermal protection systems
The type of heat shield that best protects against high heat flux is the ablative heat shield. The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer) through blowing. The overall process of reducing the heat flux experienced by the heat shield's outer wall is called blockage. Ablation causes the TPS layer to char, melt, and sublimate through the process of pyrolysis. The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated.[8] Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer thus making it optically opaque. Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic). Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for RV nose tips. Thermal protection can also be enhanced in some TPS materials through coking. Coking is the process of forming solid carbon on the outer char layer of the TPS. TPS coking was discovered accidentally during development of the Apollo-CM TPS material (Avcoat 5026-39).
The thermal conductivity of a TPS material is proportional to the material's density. Carbon phenolic is a very effective ablative material but also has high density which is undesirable. If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis then the TPS material's conductivity could allow heat flux conduction into the TPS bondline material thus leading to TPS failure. Consequently for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate and lower density TPS materials such as the following examples can be better design choices:
"SLA" in SLA-561V stands for "Super Light weight Ablator". SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70 degree sphere-cone entry vehicles sent by NASA to Mars. SLA-561V begins significant ablation at a heat flux of approximately 75 W/cm² but will fail for heat fluxes greater than 300 W/cm². The Mars Science Laboratory (MSL) aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm². The peak heat flux experienced by the Viking-1 aeroshell which landed on Mars was 21 W/cm². For Viking-1, the TPS acted as a pure thermal insulator and never experienced significant ablation. Viking-1 was the first Mars lander and based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest yet used on Mars). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure thus enabling construction of a large heat shield.
Phenolic Impregnated Carbon Ablator (PICA) was developed by NASA Ames Research Center and was the primary TPS material for the Stardust aeroshell.[9] Because the Stardust sample-return capsule was the fastest man-made object to reenter Earth's atmosphere (~12.4 km/s, ~28,000 mph relative velocity at 135 km altitude), PICA was an enabling technology for the Stardust mission. (For reference, the Stardust reentry was faster than the Apollo mission capsules and 70% faster than the reentry velocity of the Shuttle.) PICA is a modern TPS material and has the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative capability at high heat flux. Stardust's heat shield (0.81 m base diameter) was manufactured from a single monolithic piece sized to withstand a nominal peak heating rate of 1200 W/cm². PICA is a good choice for ablative applications such as the high-peak-heating conditions found on sample-return or lunar-return missions. PICA's thermal conductivity is lower than that of other high-heat-flux ablative materials, such as conventional carbon phenolics.
DS/2 aeroshell, a classic 45° sphere-cone with spherical section afterbody enabling aerodynamic stability from atmospheric entry to surface impact
Silicone Impregnated Reusable Ceramic Ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the "afterbody" or "aft cover") and the cruise ring (also called the "cruise stage"). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars probes with their 0.35 m base diameter aeroshells. SIRCA is a monolithic, insulative material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. There is no post-processing, heat treating, or additional coating required (unlike current Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading-edge sections, full nose caps, or in any number of custom shapes or sizes. SIRCA has been demonstrated in BIP applications but not yet as a forebody TPS material.[10]
Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities. Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel.[11]
Thermal soak
Thermal soak is a part of almost all TPS schemes. For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer soaks into the heat shield's outer wall and would eventually convect to the payload. This outcome is prevented by ejecting the heat shield (with its heat soak) prior to the heat convecting to the inner wall.
Thermal soak TPS is intended to shield mainly against heat load and not against a high peak heat flux (a long duration heat pulse of low intensity is assumed for the TPS design). The Space Shuttle orbit vehicle was designed with a reusable heat shield based upon a thermal soak TPS. It should be emphasized that the tradeoff for TPS reusability is an inability to withstand a high heat flux, e.g. a Space Shuttle TPS would not be practical as a primary thermal protection for lunar return. A Space Shuttle's underside is coated with thousands of tiles made of silica foam, which are intended to survive multiple reentries with only minor repairs between missions. Fabric sheets known as gap fillers are inserted between the tiles where necessary. These gap fillers provide for a snug fit between separate tiles while allowing for thermal expansion. When a Space Shuttle lands, a significant amount of heat is stored in the TPS. Shortly after landing, a ground-support cooling unit connects to the Space Shuttle's internal Freon coolant loop to remove heat soaked in the TPS and orbiter structure.
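The soak itself is ordinary transient heat conduction, so a one-dimensional explicit finite-difference model captures the mechanism. The sketch below holds the outer wall hot after the heat pulse and watches heat diffuse toward the (worst-case insulated) inner wall; the diffusivity and temperatures are placeholders, not tile data:

import numpy as np

alpha = 5e-7              # m^2/s, thermal diffusivity (placeholder)
L, n = 0.05, 51           # 5 cm slab, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # explicit stability requires dt <= 0.5 dx^2/alpha

T = np.full(n, 300.0)     # K, initially cool tile
T[0] = 1500.0             # hot outer wall after the heat pulse

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]         # insulated inner wall (worst case)
print(T[-1])              # inner-wall temperature after the soak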
LI-900 is a rigid black tile on the Space Shuttle. (Shuttle shown is Atlantis.)
Typical Space Shuttle's TPS tiles (LI-900) have remarkable thermal protection properties but are relatively brittle and break easily. An LI-900 tile exposed to a temperature of 1000 K on one side will remain merely warm to the touch on the other side. An impressive stunt that can be performed with a cube of LI-900 is to remove it glowing white hot from a furnace and then hold it with one's bare fingers without discomfort along the cube's edges.
Passively cooled
In some early ballistic missile RVs, e.g. the Mk-2 and the sub-orbital Mercury spacecraft, radiatively cooled TPS were used to initially absorb heat flux during the heat pulse and then after the heat pulse, radiate and convect the stored heat back into the atmosphere. Unfortunately, the earlier version of this technique required a considerable quantity of metal TPS (e.g. titanium, beryllium, copper, etc.), adding greatly to the vehicle's mass. Consequently ablative and thermal soak TPS have become preferable.
The Mercury Capsule design (shown with escape tower) originally used a radiatively cooled TPS but was later converted to an ablative TPS
Radiatively cooled TPS can still be found on modern entry vehicles but Reinforced Carbon-Carbon (also called RCC or carbon-carbon) is normally used instead of metal. RCC is the TPS material on the leading edges of the Space Shuttle's wings. RCC was also proposed as the leading edge material for the X-33. Carbon is the most refractory material known with a one atmosphere sublimation temperature of 3825 °C for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently very expensive to manufacture and lacks impact resistance.
Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, had to deal with heating similar to that experienced by spacecraft, but at much lower intensity. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of Concorde, the aluminium nose was permitted to reach a maximum operating temperature of 127 °C (typically 180 °C warmer than the sub-zero ambient air); the metallurgical implications associated with this peak temperature were a significant factor in determining the top speed of the aircraft.
A radiatively cooled TPS for an entry vehicle is often called a "hot metal TPS". Early TPS designs for the Space Shuttle called for a hot metal TPS based upon titanium shingles. Unfortunately the earlier Shuttle TPS concept was rejected because it was incorrectly believed a silica tile based TPS offered less expensive development and manufacturing costs. A titanium shingle TPS was again proposed for the unsuccessful X-33 Single-Stage to Orbit (SSTO) prototype.
Recently, newer radiatively cooled TPS materials have been developed that could be superior to RCC. Named after their prototype vehicle "SHARP" (Slender Hypervelocity Aerothermodynamic Research Probe), these TPS materials have been based upon substances such as zirconium diboride and hafnium diboride. SHARP TPS promise performance improvements allowing for sustained Mach 7 flight at sea level, Mach 11 flight at 100,000 ft altitude, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones to greatly reduce drag for air-breathing combined-cycle-propelled space planes and lifting bodies. SHARP materials have exhibited effective TPS characteristics from zero to more than 2000 °C, with melting points over 3500 °C. They are structurally stronger than RCC, and thus do not require structural reinforcement with materials such as Inconel. SHARP materials are extremely efficient at re-radiating absorbed heat, thus eliminating the need for additional TPS behind and between the SHARP materials and conventional vehicle structure. NASA initially funded (and discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles.[12][13]
Actively cooled
Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporated a refrigerant or cryogenic fuel circulating through them. Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP). The NASP was supposed to have been a scramjet powered hypersonic aircraft but failed in development.
In the early 1960s various TPS systems were proposed to use water or other cooling liquid sprayed into the shock layer. Such concepts never got past the proposal phase since ordinary ablative TPS is much more reliable and efficient.
Entry vehicle design considerations
There are four critical parameters considered when designing a vehicle for atmospheric entry:
1. Peak heat flux
2. Heat load
3. Peak deceleration
4. Peak dynamic pressure
Peak heat flux and dynamic pressure select the TPS material. Heat load determines the thickness of the TPS material stack. Peak deceleration is of major importance for manned missions. The upper limit for manned return to Earth from Low Earth Orbit (LEO) or lunar return is 10 Gs. For Martian atmospheric entry after long exposure to zero gravity, the upper limit is 4 Gs. Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue.
Starting from the principle of conservative design, the engineer typically considers two worst-case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest allowable entry velocity angle prior to atmospheric skip-off. The overshoot trajectory has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory. For manned missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure. Consequently, the undershoot trajectory is the basis for selecting the TPS material. There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long-duration heat load. A low-density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft).[14] Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials (an important consideration for a risk-averse designer).
Based upon Allen and Eggers' discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shocklayer gas around the leading edge stagnation point (the subsonic cap). Consequently, the aerodynamic center moves upstream, causing aerodynamic instability. It is therefore incorrect to reapply an aeroshell design intended for Titan entry (the Huygens probe, in a nitrogen atmosphere) to Mars entry (Beagle 2, in a carbon dioxide atmosphere). The Soviet Mars lander program achieved no successful landings (no useful data returned) in multiple attempts before being abandoned; its landers were based upon a 60° half-angle aeroshell design. In the early 1960s it was incorrectly believed that the Martian atmosphere was mostly nitrogen (actual Martian atmospheric mole fractions are carbon dioxide 0.9550, nitrogen 0.0270, and argon 0.0160). The Soviet aeroshells were probably based upon this incorrect atmospheric model and apparently were not revised when new data became available.
A 45 degree half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended) even though TPS mass is not minimized. The rationale for a 45° half-angle is either aerodynamic stability from entry-to-impact (the heat shield is not jettisoned) or a short-and-sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars landers and Pioneer Venus Probes.
History's most difficult atmospheric entry
Diagram of Galileo atmospheric entry probe instruments and subsystems.
The highest-speed controlled entry so far achieved was by the Galileo Probe. The Galileo Probe was a 45° sphere-cone that entered Jupiter's atmosphere at 47.4 km/s (atmosphere-relative speed at 450 km above the 1 bar reference altitude). The peak deceleration experienced was 230 Gs. Peak stagnation point pressure before aeroshell jettison was 9 bars. The peak shock layer temperature was approximately 16000 K (the solar photosphere is merely 5800 K). Approximately 26% of the Galileo Probe's original entry mass of 338.93 kg was vaporized during the 70-second heat pulse. Total blocked heat flux peaked at approximately 15000 W/cm². By way of comparison, the peak total heat flux experienced by the Mars Pathfinder aeroshell was 106 W/cm² (the highest experienced by a successful Mars lander). The Apollo 4 (AS-501) command module, which reentered the Earth's atmosphere at a velocity of 10.77 km/s (atmosphere-relative speed at 121.9 km altitude), experienced a peak total heat flux of 497 W/cm².
Galileo probe heat shield profile before and after entry.
Conservative design was used in creating the Galileo Probe. Due to the extremity of the Galileo Probe's entry conditions, the radiative heat flux and turbulence of the shock layer, along with the TPS material response, were barely understood. Carbon phenolic was used for the Galileo Probe TPS; it had earlier been used for the Pioneer Venus Probes, which were the design ancestors of the Galileo Probe. The Galileo Probe experienced far greater TPS recession near the base of its frustum than expected. Despite a safety factor of two in TPS thickness, the Galileo Probe's heatshield almost failed. The precise mechanism for this higher TPS recession is still unknown and currently beyond definitive theoretical analysis.
After successfully completing its mission, the Galileo Probe continued descending into Jupiter's atmosphere where the ambient temperature grew with greater depth due to isentropic compression. In the unfathomable depths of Jupiter's atmosphere, the surrounding temperature became so hot that the entire probe including its jettisoned heat shield vaporized into monatomic gas.
Notable atmospheric entry mishaps
Not all atmospheric re-entries have been successful and some have led to significant disasters.
• Vostok 1 – The service module failed to detach for 10 minutes, but fortunately, Cosmonaut Yuri Gagarin survived.
Genesis entry vehicle after "augering-in"
• Mercury 6 – Instrument readings showed that the heat shield and landing bag were not locked. The decision was made to leave the retrorocket pack in position during reentry. Astronaut John Glenn survived. The instrument readings were later found to be erroneous.
• Voskhod 2 – The service module failed to detach for some time, but the crew survived.
• Soyuz 1 – Different accounts exist. Either the attitude control system failed while still in orbit and/or parachutes got entangled during the landing sequence (entry, descent and landing (EDL) failure). Cosmonaut Vladimir Mikhailovich Komarov died.
• Soyuz 5 – The service module failed to detach, but crew survived.
• Soyuz 11 – The crew perished due to early depressurization.
• Mars Polar Lander (MPL) – Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown due to lack of real time telemetry.
• Space Shuttle Columbia – The failure of an RCC panel on the wing leading edge led to breakup of the orbiter at hypersonic speed, resulting in the loss of all seven crew members. This is probably the most infamous such incident.
• Genesis – The parachute failed to deploy due to a G-switch being installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle augered into the desert floor. The payload was damaged but it was later claimed that some scientific data was recoverable.
Uncontrolled reentry
More than 100 metric tons of man-made objects reenter in an uncontrolled fashion each year. The vast majority burn up before reaching earth's surface. On average, about one cataloged object reenters per day. Approximately a quarter of all objects are of U.S. origin. Due to the Earth's surface being primarily water, most objects that survive reentry land in one of the world's oceans.
In 1978, Cosmos 954 reentered uncontrolled and crashed near Great Slave Lake in the Northwest Territories of Canada. Cosmos 954 was nuclear powered, using a nuclear fission reactor, and spread radioactive debris across northern Canada.
In 1979, Skylab reentered uncontrolled and crashed into Western Australia, killing one cow and damaging several buildings. Australia issued a fine for littering to the United States, but the fine was never settled.[15]
Deorbit disposal
In 2001, the Russian Mir space station was deliberately de-orbited, and broke apart during atmospheric re-entry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean.
Research into atmospheric entry
Mk-12A RV retrofitted with hafnium diboride strakes for a NASA research project (Sharp-B2)
As with many technologies, aerospace technological information can be dual use, i.e. aerospace technology can be used for both civilian and military purposes. Atmospheric entry technology owes its origins to the development of ballistic missiles during the Cold War. Given the enormous expense required to develop this technology, it is doubtful it could have appeared as quickly as it did without the military incentive. Mankind's survival beyond its planet of origin could be dependent upon atmospheric entry technology. It is ironic that the same technology enabling destructive nuclear-tipped missiles also enables the exploration and development of outer space. Aerospace technology is needed for civilian space exploration, yet certain aspects are and will remain restricted to impede military proliferation of the technology. This basic dilemma is present throughout the literature on atmospheric entry. There is a glass wall between pedagogical and practical information. For example, in the textbooks referenced in this article, a topic thread will proceed as long as the information is nonspecific but almost always stops at the point of practical application. To go beyond pedagogical information, one must search the technical literature (NACA/NASA technical reports, declassified technical reports, and peer-reviewed archive literature). Declassified technical reports are a frustrating information source, since many of the reports were destroyed prior to going through the legally required declassification process. It is almost always true that significant documents referred to in declassified technical reports no longer exist (technical information costing many millions of dollars has simply vanished).
When the reentry vehicle is a guided missile, intended to cause a destructive effect at the end of its reentry path, it is called a warhead. This may be a nuclear weapon, or, when the reentry vehicle is a precision-guided munition, an inert filler such as concrete or steel. Inert fillers in reentry vehicles have military significance because the kinetic energy of a reentry vehicle is so great that adding chemical explosives to the warhead would only contribute insignificantly to the energy release.
The part of the missile that carries a warhead may also carry multiple warheads. A multiple warhead carrier assembly is called a bus. A multiple reentry vehicle (MRV) carrier releases all its warheads at the same time, but causes them to spread apart as they reenter, perhaps with a spring or a small explosive charge.
If the bus releases the warheads at different times in its flight, perhaps with a mechanical positioner on the warhead, or rocket controls that reorient the bus, each warhead is part of a set called a multiple independently targeted reentry vehicle (MIRV). Once released by the bus, the basic MIRV warheads follow strictly ballistic trajectories.
Individual reentry vehicles that use rockets, shifting center of mass, or aerodynamic control to alter their course from the purely ballistic course belong to the class of maneuverable reentry vehicles (MARV).
MIRV and MARV systems present especially difficult problems in arms control, because while a reconnaissance satellite can verify a land-based ICBM is in its silo, the sensors cannot look inside the aerodynamic shroud around the nose of the missile and see how many reentry vehicles, of what type, are inside.
A MIRV or MARV capable bus may also release penetration aids to make it more difficult to track or intercept the real warheads. Penetration aids are a highly classified technology, and the criteria by which a sensor can discriminate a penetration aid from a warhead are usually among the most highly classified parts of a missile system. Nevertheless, it is known that inflatable balloons covered with a thin metal film are used as one kind of radar reflector.
Since the balloon will have more air resistance than a solid warhead, it will soon reveal itself by taking a different trajectory. There may also be solid penetration aids, made up of angles intended to cause maximum radar reflection, the antithesis of a stealth aircraft or weapon. Other techniques may exist, such as active radar transmitters that can overload or deceive missile defense radar.
1. Allen, H. Julian and Eggers, Jr., A. J., "A Study of the Motion and Aerodynamic Heating of Ballistic Missiles Entering the Earth's Atmosphere at High Supersonic Speeds," NACA Report 1381, (1958).
2. Fay, J. A. and Riddell, F. R., "Theory of Stagnation Point Heat Transfer in Dissociated Air," Journal of the Aeronautical Sciences, Vol. 25, No. 2, page 73, February 1958 (see "Fay-Riddell equation" entry in Glossary of atmospheric reentry).
4. Regan, Frank J. and Anandakrishnan, Satya M., "Dynamics of Atmospheric Re-Entry," AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., New York, ISBN 1-56347-048-9, (1993).
5. Ames Research Staff, "Equations, Tables, and Charts for Compressible Flow," NACA Report 1135, (1953).
6. Lighthill, M.J., "Dynamics of a Dissociating Gas. Part I. Equilibrium Flow," Journal of Fluid Mechanics, vol. 2, pt. 1. p. 1 (1957).
7. Freeman, N.C., "Non-equilibrium Flow of an Ideal Dissociating Gas." Journal of Fluid Mechanics, vol. 4, pt. 4, p. 407 (1958).
8. Parker, John and C. Michael Hogan, "Techniques for Wind Tunnel assessment of Ablative Materials," NASA Ames Research Center, Technical Publication, August, 1965.
9. Tran, Huy K, et al., "Qualification of the forebody heatshield of the Stardust's Sample Return Capsule," AIAA, Thermophysics Conference, 32nd, Atlanta, GA; 23-25 June 1997.
10. Tran, Huy K., et al., "Silicone impregnated reusable ceramic ablators for Mars follow-on missions," AIAA-1996-1819, Thermophysics Conference, 31st, New Orleans, LA, June 17-20, 1996.
11. Hogan, C. Michael, Parker, John and Winkler, Ernest, of NASA Ames Research Center, "An Analytical Method for Obtaining the Thermogravimetric Kinetics of Char-forming Ablative Materials from Thermogravimetric Measurements", AIAA/ASME Seventh Structures and Materials Conference, April, 1966
14. Pavlosky, James E., St. Leger, Leslie G., "Apollo Experience Report - Thermal Protection Subsystem," NASA TN D-7564, (1974).
15. "Australians Take Mir Deorbit Risks in Stride."
I would like to run some simple simulations of scattering of wavepackets off of simple potentials in one dimension.
Are there simple ways to numerically solve the one-dimensional TDSE for a single particle? I know that, in general, trying to use naïve approaches to integrate partial differential equations can quickly end in disaster. I am therefore looking for algorithms which
• are numerically stable,
• are simple to implement, or have easily accessible code-library implementations,
• run reasonably fast, and hopefully
• are relatively simple to understand.
I would also like to steer relatively clear of spectral methods, and particularly of methods which are little more than solving the time-independent Schrödinger equation as usual. However, I would be interested in pseudo-spectral methods which use B-splines or whatnot. If the method can take a time-dependent potential then that's definitely a bonus.
Of course, any such method will always have a number of disadvantages, so I would like to hear about those. When does it not work? What are common pitfalls? Which ways can it be pushed, and which ways can it not?
The Schrödinger equation is effectively a reaction-diffusion equation $$ i\frac{\partial\psi}{\partial t}=-\nabla^2\psi+V\psi\tag{1} $$ (in units where all constants, including $\hbar^2/2m$, are 1). When it comes to any partial differential equation, there are two ways to solve it:
1. Implicit method (adv: allows large time steps & is unconditionally stable; disadv: requires a matrix solver, which can give bad data if misused)
2. Explicit method (adv: easy to implement; disadv: requires small time steps for stability)
For parabolic equations (first order in $t$ and second order in $x$), the implicit method is often the better choice. The reason is that the stability condition for the explicit method requires $dt\propto dx^2$, which forces very small time steps. You can avoid this issue by using the implicit method, which has no such limitation on the time step (though in practice you don't normally make it insanely large because you can lose some of the physics). What I describe next is the Crank-Nicolson method, a common implicit scheme that is second-order accurate in both space and time.
In order to computationally solve a PDE, you need to discretize it (make the variables fit onto a grid). The most straightforward choice is a rectangular, Cartesian grid. From here on, $n$ represents the time index (always a superscript) and $j$ the position index (always a subscript). By employing a Taylor expansion for the position-dependent variable, Equation (1) becomes $$ i\frac{\psi^{n+1}_j-\psi^n_j}{dt} = -\frac12\left(\frac{\psi_{j+1}^{n+1}-2\psi_j^{n+1}+\psi_{j-1}^{n+1}}{dx^2}+\frac{\psi_{j+1}^{n}-2\psi_j^{n}+\psi_{j-1}^{n}}{dx^2}\right) \\ +\frac12\left(V_j\psi_j^{n+1} +V_j\psi_j^n\right) $$ where we have assumed that $V=V(x)$. What happens next is a grouping of like temporal indices: $$ \frac12\frac{dt}{dx^2}\psi_{j+1}^{n+1}+\left(i-\frac{dt}{dx^2}-\frac{dt}{2}V_j\right)\psi_j^{n+1}+\frac12\frac{dt}{dx^2}\psi_{j-1}^{n+1}=\\ i\psi_j^n-\frac12\frac{dt}{dx^2}\left(\psi_{j+1}^n-2\psi_j^n+\psi_{j-1}^n\right)+\frac{dt}{2}V_j\psi_j^n\tag{2} $$ This equation has the form $$ \left(\begin{array}{ccccc}A_0 & A_- & 0 & \cdots & 0\\ A_+ & A_0 & A_- & \cdots & 0\\ 0 & A_+ & A_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & A_0\end{array}\right)\left(\begin{array}{c}\psi_0^{n+1} \\ \psi_1^{n+1}\\ \vdots \\ \psi_{J-1}^{n+1}\end{array}\right)=\left(\begin{array}{c}b_0^{n} \\ b_1^{n}\\ \vdots \\ b_{J-1}^{n}\end{array}\right) $$ where $b_j^n$ stands for the entire right-hand side of Equation (2). The matrix on the left is tri-diagonal, and such systems have a known fast solution (plus working examples, including one written by me!). The explicit method scratches out the entire left side (or should I say top line?) of Equation (2) except for the $i\psi_j^{n+1}$ term, and evaluates everything else at time $n$.
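For concreteness, here is a minimal sketch of how this might look in Python with NumPy/SciPy; the grid, time step, potential, initial Gaussian packet, and Dirichlet boundaries ($\psi=0$ at the ends) are illustrative assumptions, not part of the scheme itself. Note that Equation (2) is exactly the Cayley form $(\mathcal I + i\,dt\,H/2)\,\psi^{n+1} = (\mathcal I - i\,dt\,H/2)\,\psi^n$ multiplied through by $i$, which is what the sketch assembles:

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative setup in the units of Equation (1); all values are assumptions.
J, L, dt = 1000, 100.0, 0.005
x = np.linspace(0.0, L, J)
dx = x[1] - x[0]
V = np.zeros(J)        # any V(x); rebuild `ab` each step if V depends on t

# Normalized Gaussian wave packet moving to the right
x0, k0, s = 30.0, 2.0, 5.0
psi = np.exp(1j*k0*x - (x - x0)**2/(4*s**2))
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

# Tridiagonal H = -d^2/dx^2 + V on a three-point stencil (psi = 0 at the ends)
off = -np.ones(J - 1)/dx**2
diag = 2.0/dx**2 + V

def apply_H(f):
    Hf = diag*f
    Hf[1:] += off*f[:-1]
    Hf[:-1] += off*f[1:]
    return Hf

# Banded storage of (I + i dt H/2), as scipy's tridiagonal solver expects
ab = np.zeros((3, J), dtype=complex)
ab[0, 1:] = 0.5j*dt*off         # superdiagonal
ab[1, :] = 1.0 + 0.5j*dt*diag   # main diagonal
ab[2, :-1] = 0.5j*dt*off        # subdiagonal

for n in range(2000):
    rhs = psi - 0.5j*dt*apply_H(psi)      # (I - i dt H/2) psi^n
    psi = solve_banded((1, 1), ab, rhs)   # solve for psi^{n+1}
```

The banded solve is just the Thomas algorithm; since the Cayley form is unitary, $\int|\psi|^2\,dx$ should stay constant to machine precision, which makes a handy sanity check on an implementation.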
The biggest issue that I have found with implicit methods is that they are strongly dependent on the boundary conditions. If you have poorly defined/implemented boundary conditions, you can get spurious oscillations in your cells that can lead to bad results (see my SciComp post on a similar topic). This can leave you with 1st-order accuracy in space, rather than the 2nd order that the scheme ought to give.
Implicit methods are also supposedly difficult to parallelize, but I have only used them for 1D heat equations and not needed parallel support, so I can neither verify nor deny the claim.
I am also unsure how the complex nature of the wave function will affect the calculations. The work that I have done uses the Euler fluid dynamics equations, whose variables are entirely real with non-negative magnitudes.
Time-dependent potential
If you have an analytic time-dependent potential (e.g. $V\propto \cos(\omega t)$), then you'd simply use the current time, $t$, for the $V_j$ on the RHS of (2) and the future time, $t+dt$, for the $V_j$ on the LHS. I don't believe that this would create any problems, but I have not tested it, so I cannot verify or deny this aspect either.
There are some interesting alternatives to the Crank-Nicolson method as well. The first is the so-called "super-time-stepping" method. In this explicit method, you take the stability-limited time step ($dt\propto dx^2$) and use the roots of Chebyshev polynomials to construct an optimized set of sub-steps: $N$ such sub-steps advance you by $\Delta T=N^2\,dt$ in time, far more than the $N\,dt$ you would get from $N$ ordinary steps. (I employ this method in my research because you have a well-defined "flux" from one cell to another that is used for merging data from one processor to another; using the Crank-Nicolson scheme I was unable to do this.)
EDIT One thing to note is that this method is first order accurate in time, but if you use a Runge-Kutta 2 method in conjunction, it will give you a 2nd order accurate scheme in time.
The other is called an alternating-direction explicit. This method requires you to have known and well-defined boundary conditions. It then proceeds to solve the equation by using the boundary directly in the computation (no need to apply it after each step). What happens in this method is you solve the PDE twice, once in an upward sweep and once in a downward sweep. The upward sweep uses $$ \frac{\partial^2\psi}{\partial x^2}\approx\frac{\psi_{j-1}^{n+1}-\psi_j^{n+1}-\psi_j^n+\psi_{j+1}^n}{dx^2} $$ while the downward sweep uses $$ \frac{\partial^2\psi}{\partial x^2}\approx\frac{\psi_{j+1}^{n+1}-\psi_j^{n+1}-\psi_j^n+\psi_{j-1}^n}{dx^2} $$ for the diffusion equation while the other terms would remain the same. The time-step $n+1$ is then solved by averaging the two directional sweeps.
• 1
$\begingroup$ Great answer, only complaint is that you beat me to it! $\endgroup$ – Kyle Feb 8 '14 at 18:20
• $\begingroup$ @ChrisWhite: I was thinking about how that could be done earlier this morning and the only thing I came up with is be doing it once for $\mathbb R$ and once for $\mathbb I$. I'll take a look at that paper (and more importantly the free code they give out) and see how they suggest to do it. $\endgroup$ – Kyle Kanos Feb 8 '14 at 19:00
• $\begingroup$ @ChrisWhite Maybe the sneakiness is for calculating eigenfunctions, which I have seen calculated by imaginary timestepping: you arrange the step direction so that the lowest energy eigenfunction has the least negative value of $-h\,\nu$ and thus the slowest decay. On iterating on a random input, very swiftly only the shape of the lowest energy eigenfunction is left. Then you subtract this from the random input and do the process again: now the next lowest energy eigenfunction is the dominant one. And so on. It sounds a bit dodgy (especially getting higher $-h\,\nu$ eigenfuncs) but it works! $\endgroup$ – WetSavannaAnimal aka Rod Vance Feb 9 '14 at 12:39
• 1
$\begingroup$ @DavidKetcheson: A reaction-diffusion equation takes the form $\partial_tu=D\partial_x^2u+R(u)$. In the case of the Schrodinger equation, $R(u)=Vu$; may I then ask how that it's not a RD-type equation? And, curiously, the Schrodinger equation actually appears in the reaction-diffusion wiki article I referenced. This equivocation I made also appears in many published journals and texts (go ahead and search it). Perhaps it would have been better for me to advise using standard libraries (as is the common MO here), such as PETSc, deal.ii, or pyCLAW? $\endgroup$ – Kyle Kanos Feb 23 '14 at 18:09
• 1
$\begingroup$ @KyleKanos: Your post is good. In fact, in the article posted by DavidKetcheson, Crank-Nicolson is advocated by the first reference. The comparison to reaction-diffusion is fine; as you note, the comparison to reaction-diffusion does appear in many published sources. I think DavidKetcheson was looking for something like "dispersive wave equation" mentioned earlier. $\endgroup$ – Geoff Oxberry Feb 23 '14 at 20:44
In the early 90s we were looking for a method to solve the TDSE fast enough to do animations in real time on a PC and came across a surprisingly simple, stable, explicit method described by PB Visscher in Computers in Physics: "A fast explicit algorithm for the time-dependent Schrödinger equation". Visscher notes that if you split the wavefunction into real and imaginary parts, $\psi=R+iI$, the SE becomes the system:
\begin{eqnarray}\frac{dR}{dt}&=&HI \\ \frac{dI}{dt}&=&-HR \\ H&=&-\frac{1}{2m}\nabla^2+V\end{eqnarray}
If you then compute $R$ and $I$ at staggered times ($R$ at $0,\Delta t,2\Delta t,\dots$ and $I$ at $0.5\Delta t, 1.5\Delta t,\dots$), you get the discretization:
$$R(t+\frac{1}{2} \Delta t)=R(t-\frac{1}{2} \Delta t)+\Delta t HI(t)$$
$$I(t+\frac{1}{2} \Delta t)=I(t-\frac{1}{2} \Delta t)-\Delta t HR(t)$$
with $$\nabla^2\psi(r,t)=\frac{\psi(r+\Delta r,t)-2\psi(r,t)+\psi(r-\Delta r,t)}{\Delta r^2}$$ (standard three-point Laplacian).
This is explicit, very fast to compute, and second-order accurate in $\Delta t$.
Defining the probability density as
$$P(x,t)=R^2(x,t)+I(x,t+\frac{1}{2} \Delta t)I(x,t-\frac{1}{2} \Delta t)$$ at integer time steps and,
$$P(x,t)=R(x,t+\frac{1}{2} \Delta t)R(x,t-\frac{1}{2} \Delta t)+I^2(x,t)$$ at half-integer time steps
makes the algorithm unitary, thus conserving probability.
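A minimal Python/NumPy sketch of the staggered scheme, in this answer's units ($\hbar = m = 1$); the grid, potential, packet parameters, and the half-Euler-step initialization of $I$ at $t = 0.5\Delta t$ are my illustrative assumptions (the comments below discuss that initialization issue):

```python
import numpy as np

# Illustrative setup (hbar = m = 1); explicit scheme, so dt must be small
J, L, dt = 1000, 100.0, 0.001
x = np.linspace(0.0, L, J)
dx = x[1] - x[0]
V = np.zeros(J)                       # "draw" any potential here

def apply_H(f):
    """H f with H = -(1/2m)*(three-point Laplacian) + V and m = 1."""
    Hf = V*f + f/dx**2
    Hf[1:] -= 0.5*f[:-1]/dx**2
    Hf[:-1] -= 0.5*f[1:]/dx**2
    return Hf

# Gaussian packet: R at t = 0; I staggered to t = dt/2 with a crude half
# Euler step of dI/dt = -H R (an assumption; see the comments below for
# the "start it far from the potential" alternative)
x0, k0, s = 30.0, 2.0, 5.0
psi0 = np.exp(1j*k0*x - (x - x0)**2/(4*s**2))
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))
R = psi0.real.copy()
I = psi0.imag - 0.5*dt*apply_H(psi0.real)

for n in range(10000):
    R += dt*apply_H(I)    # R(t + dt/2) = R(t - dt/2) + dt*H*I(t)
    I -= dt*apply_H(R)    # I(t + dt/2) = I(t - dt/2) - dt*H*R(t)
```

The staggered probability definitions above then give a conserved total probability at both integer and half-integer steps.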
With enough code optimization, we were able to get very nice animations computed in real time on 80486 machines. Students could "draw" any potential, choose a total energy, and watch the time-evolution of a Gaussian packet.
• $\begingroup$ That is a very neat trick for solving the real & imaginary components! Note also that you can get large, centered equations by using $$ ... $$. I've taken the liberty of doing this for you, I hope you don't mind! $\endgroup$ – Kyle Kanos Feb 10 '14 at 18:42
• $\begingroup$ We were delighted to find the algorithm - it was easy to program and ran fast. The hardest part was getting the initial conditions right, R at t=0 and I at 0.5dt... I don't mind the edit, I was happy to get equations at all. $\endgroup$ – Wally Feb 10 '14 at 19:05
• 1
$\begingroup$ @user40172 We were doing the same thing for waveguides at the about the same time, and we settled on the BPM described in my answer. The reason was that at the time we could run the FFTs separately from the main CPU using a DSP board. We thought we were oh so clever, but I must say coming up with essentially a hardware solution to a software problem looks pretty naff in 2014 though! The latest version of Visual Studio C++ automatically vectorises code over CPUs and it does a beautiful job with the FFT. $\endgroup$ – WetSavannaAnimal aka Rod Vance Feb 11 '14 at 0:37
• 1
$\begingroup$ @user40172 How did you get the initial conditions at $0.5dt$ finally? Just propagating solution to that time using another method? $\endgroup$ – Ruslan Feb 11 '14 at 18:56
• 1
$\begingroup$ @Rusian Since we were doing scattering, we used a standard free-particle Gaussian wave packet but made sure to start it "far enough" away from any region where the potential was non-zero. See, for example: demonstrations.wolfram.com/EvolutionOfAGaussianWavePacket $\endgroup$ – Wally Feb 11 '14 at 19:20
Kyle Kanos's answer looks to be very full, but I thought I'd add my own experience. The split-step Fourier method (SSFM) is extremely easy to get running and fiddle with; you can prototype it in a few lines of Mathematica, and it is extremely stable numerically. It involves imparting only unitary operators on your dataset, so it automatically conserves probability / power (the latter if you're solving Maxwell's equations with it, which is where my experience lies). For a one-dimensional Schrödinger equation (i.e. $x$ and $t$ variation only), it is extremely fast even as Mathematica code. And if you need to speed it up, you really only need a good FFT library in your target language (my experience is with C++).
What you'd be doing is a disguised version of the Beam Propagation Method for optical propagation through a waveguide of varying cross section (analogous to time varying potentials), so it would be helpful to look this up too.
The way I look at the SSFM/BPM is as follows. Its grounding is the Trotter product formula of Lie theory:
$$\tag{1}\lim\limits_{m\to\infty}\left(\exp\left(\mathcal{D}\,\frac{t}{m}\right)\,\exp\left(\mathcal{V}\,\frac{t}{m}\right)\right)^m = \exp((\mathcal{D+V}) t)$$
which is sometimes called the operator splitting equation in this context. Your dataset is an $x-y$ or $x-y-z$ discretised grid of complex values representing $\psi(x,y,z)$ at a given time $t$. So you imagine this (you don't have to do this; I'm still talking conceptually) whopping grid written as an $N$-element column vector $\Psi$ (for a $1024\times1024$ grid we have $N=1024^2=1\,048\,576$) and then your Schrödinger equation is of the form:
$$\tag{2}\mathrm{d}_t \Psi = K\Psi = (\mathcal{D+V}(t)) \Psi$$
where $K = \mathcal{D+V}$ is an $N\times N$ skew-Hermitian matrix, an element of $\mathfrak{u}(N)$, and $\Psi$ is going to be mapped with increasing time by an element of the one-parameter group $\exp(K\,t)$. (I've sucked the $i\hbar$ factor into the $K = \mathcal{D+V}$ on the RHS so I can more readily talk in Lie theoretic terms.) Given the size of $N$, the operators' natural habitat $U(N)$ is a thoroughly colossal Lie group, so PHEW! yes, I am still talking in wholly theoretical terms! Now, what does $\mathcal{D+V}$ look like? Still imagining for now, it could be thought of as a finite difference version of $i\,\hbar\,\nabla^2/(2\,m) - i\hbar^{-1}V_0 + i\hbar^{-1}(V_0-V(x,y,z,t_0))$, where $V_0$ is some convenient "mean" potential for the problem at hand.
We let:
$$\tag{3}\begin{array}{lcl}\mathcal{D} &=& i\frac{\hbar}{2\,m} \nabla^2 - i\hbar^{-1}V_0\\ \mathcal{V}&=&i\hbar^{-1}(V_0-V(x,y,z,t))\end{array}$$
Why I have split them up like this will become clear below.
The point about $\mathcal{D}$ is that it can be worked out analytically for a plane wave: it is a simple multiplication operator in momentum co-ordinates. So, to work out $\Psi\mapsto\exp(\Delta t\,\mathcal{D}) \Psi$, here are the first three steps of a SSFM/BPM cycle:
1. Impart FFT to dataset $\Psi$ to transform it into a set $\tilde{\Psi}$ of superposition weights of plane waves: now the grid co-ordinates have been changed from $x,\,y,\,z$ to $k_x,\,k_y,\,k_z$;
2. Impart $\tilde{\Psi}\mapsto\exp(\Delta t\,\mathcal{D}) \tilde{\Psi}$ by simply multiplying each point on the grid by $\exp\left(-i\,\Delta t\left(\frac{\hbar\,(k_x^2+k_y^2+k_z^2)}{2\,m}+\frac{V_0}{\hbar}\right)\right)$;
3. Impart inverse FFT to map our grid back to $\exp(\Delta t\,\mathcal{D}) \Psi$.
Now we're back in the position domain. This is the better domain in which to impart the operator $\mathcal{V}$, of course: here $\mathcal{V}$ is a simple multiplication operator. So here is the last step of your algorithmic cycle:
4. Impart the operator $\Psi\mapsto\exp(\Delta t\,\mathcal{V}) \Psi$ by simply multiplying each point on the grid by the phase factor $\exp(i\,\Delta t\,(V_0-V(x,y,z,t))/\hbar)$
...and then you begin your next $\Delta t$ step, cycling over and over. Clearly it is very easy to put time-varying potentials $V(x,y,z,t)$ into the code.
So you see you simply choose $\Delta t$ small enough that the Trotter formula (1) kicks in: you're simply approximating the action of the operator $\exp(\mathcal{D+V}\,\Delta t)\approx\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t)$ and you flit back and forth with your FFT between position and momentum co-ordinates, i.e. the domains where $\mathcal{V}$ and $\mathcal{D}$ are simple multiplication operators.
Notice that you are only ever imparting, even in the discretised world, unitary operators: FFTs and pure phase factors.
One point you do need to be careful of is that as your $\Delta t$ becomes small, you must make sure that the spatial grid spacing shrinks as well. Otherwise, suppose the spatial grid spacing is $\Delta x$. Then the physical meaning of the one discrete step is that the diffraction effects are travelling at a velocity $\Delta x/\Delta t$; when simulating Maxwell's equations and waveguides, you need to make sure that this velocity is much smaller than $c$. I daresay like limits apply to the Schrödinger equation: I don't have direct experience here but it does sound fun and maybe you could post your results sometime!
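Here is a minimal one-dimensional sketch of this cycle in Python/NumPy, assuming $\hbar = m = 1$ and $V_0 = 0$; the grid, packet parameters, and the periodic boundaries implied by the FFT are my illustrative assumptions:

```python
import numpy as np

# Illustrative 1D setup (hbar = m = 1, V_0 = 0); FFT implies periodic boundaries
J, L, dt = 1024, 100.0, 0.005
x = np.linspace(0.0, L, J, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(J, d=dx)     # plane-wave wavenumbers k_x
V = np.zeros(J)                         # can be updated each step for V(x, t)

x0, k0, s = 30.0, 2.0, 5.0
psi = np.exp(1j*k0*x - (x - x0)**2/(4*s**2))
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

expD = np.exp(-0.5j*dt*k**2)            # exp(dt D): multiplication in k-space

for n in range(2000):
    psi = np.fft.ifft(expD*np.fft.fft(psi))  # steps 1-3: to k-space and back
    psi *= np.exp(-1j*dt*V)                  # step 4: position-space phase
```

Every operation is an FFT or a pure phase, so probability is conserved automatically; the symmetrized (Strang) splitting mentioned in the comments below improves on this by applying half a $\mathcal{V}$ step on either side of the $\mathcal{D}$ step.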
A second "experience" point with this kind of thing - I'd be almost willing to bet this is how you'll wind up following your ideas. We often have ideas that we want to do simple and quick and dirty simulations but it never quite works out that way! I'd begin with the SSFM as I've described above as it is very easy to get running and you'll quickly see whether or not its results are physical. Later on you can use your, say Mathematica SSFM code check the results of more sophisticated code you might end up building, say, a Crank Nicolson code along the lines of Kyle Kanos's answer.
Error Bounds
The Dynkin formula realisation of the Baker-Campbell-Hausdorff Theorem:
$$\exp(\mathcal{D}\Delta t)\,\exp(\mathcal{V}\Delta t) = \exp\left((\mathcal{D}+\mathcal{V})\Delta t + \frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2 + \cdots\right)$$ converging for some $\Delta t>0$ shows that the method is accurate to second order, and one can show that:
$$\exp(\mathcal{D}\Delta t)\,\exp(\mathcal{V}\Delta t)\,\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) = \exp\left((\mathcal{D}+\mathcal{V})\Delta t + \mathcal{O}(\Delta t^3)\right)$$
You can, in theory, therefore use the term $\exp(\mathcal{V}\Delta t)\,\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right)$ to estimate the error and set your $\Delta t$ accordingly. This is not as easy as it looks, and in practice the bounds end up being rough estimates of the error instead. The problem is that:
$$\frac{\Delta t^2}{2}[\mathcal{D},\,\mathcal{V}] = -\frac{i\,\Delta t^2}{2\,m}\,\left(\partial_x^2 V(x,\,t) + 2 \partial_x V(x,\,t)\,\partial_x\right)$$
and there is no readily found system of co-ordinates wherein $[\mathcal{D},\,\mathcal{V}]$ is a simple multiplication operator. So you have to be content with $\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) \approx e^{-i\,\varphi\,\Delta t^2}\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2\right)$ and use this to estimate your error, by working out $\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2\right) \,\psi$ for your currently evolving solution $\psi(x,\,t)$ and using the result to set your $\Delta t$ on-the-fly after each cycle of the algorithm. You can of course make these ideas the basis for an adaptive stepsize controller for your simulation. Here $\varphi$ is a global phase pulled out of the dataset to minimise the norm of $\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2$; you can often throw such a global phase out, since, depending on what you're doing with the simulation results, we're usually not bothered by a constant global phase $\exp\left(\int \varphi\,\mathrm{d}t\right)$.
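As a rough sketch of what such an on-the-fly controller might look like in one dimension, in Python/NumPy: the tolerance, the clipping limits, and the neglect of the global phase $\varphi$ are my assumptions, and the prefactor simply follows the commutator expression quoted above (treat the result as an order-of-magnitude error estimate, per the caveats in this section):

```python
import numpy as np

def commutator_error(psi, V, dx, dt, m=1.0):
    """Estimate || (dt^2/2)[D, V] psi || from the expression above,
    -(i dt^2 / 2m)(V'' psi + 2 V' psi'), using centered differences."""
    Vx = np.gradient(V, dx)
    Vxx = np.gradient(Vx, dx)
    psix = np.gradient(psi, dx)
    term = -1j*dt**2/(2.0*m)*(Vxx*psi + 2.0*Vx*psix)
    return np.sqrt(np.trapz(np.abs(term)**2, dx=dx))

def adapt_dt(psi, V, dx, dt, tol=1e-8):
    # Grow or shrink dt so the estimated per-step error tracks `tol`;
    # the clip keeps the controller from over-reacting to one estimate.
    err = commutator_error(psi, V, dx, dt)
    return dt*float(np.clip(np.sqrt(tol/max(err, 1e-300)), 0.5, 2.0))
```

One would call `adapt_dt` once per SSFM cycle, after the final position-space phase multiplication.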
A relevant paper about errors in the SSFM/BPM is:
Lars Thylén. "The Beam Propagation Method: An Analysis of its Applicability", Optical and Quantum Electronics 15 (1983) pp433-439.
Lars Thylén thinks about the errors in non-Lie theoretic terms (Lie groups are my bent, so I like to look for interpretations of them) but his ideas are essentially the same as the above.
• 1
$\begingroup$ Rod, you are probably aware that you can do better if you use the so-called split-operator approximation, where $\exp[\Delta t({\cal D} + {\cal V})] \approx \exp[\Delta t {\cal V}/2] \exp[\Delta t {\cal D}] \exp[\Delta t {\cal V}/2]$. In fact you can do some further splitting to carry the error to higher $\Delta t$ powers. See for instance Bandrauk and Shen, Chem. Phys. Lett. 176, 428 (1991). Obviously your kinetic term cannot depend on the coordinates, that is, it doesn't work nicely in curvilinear coordinates. $\endgroup$ – perplexity Feb 17 '14 at 11:01
• 1
$\begingroup$ Otherwise, this split-operator thing coupled to the FFT evaluation of the kinetic energy operator is one of the standard procedures to solve the TDSE on a grid-based representation in Molecular Physics. $\endgroup$ – perplexity Feb 17 '14 at 11:05
• $\begingroup$ @perplexity Many thanks. It's good to know what different fields use. The 1991 date on your reference is interesting: I was always pretty sure the split step idea came out of waveguide simulation in the late 1970s - so maybe I'm wrong. $\endgroup$ – WetSavannaAnimal aka Rod Vance Feb 17 '14 at 12:12
• 1
$\begingroup$ You are not wrong at all. That was the inspiration indeed. The first work translating these ideas to QM that I am aware of is Feit, Fleck and Steiger, J. Comput. Phys. 47, 412 (1982) where, if I recall correctly, they essentially use the same tricks with the advantage that the operator here is unitary by construction (unlike in classical waves). The FFT-grid based approach to these type of simulations was first proposed by Ronnie Kosloff, I believe. He has a very nice review about this subject on his web page. $\endgroup$ – perplexity Feb 17 '14 at 16:27
• $\begingroup$ Another good reference in my field is David Tannor's book on Quantum Mechanics: A time-dependent perspective. Cheers. $\endgroup$ – perplexity Feb 17 '14 at 16:28
I can recommend using the finite-difference time-domain (FDTD) method. I even wrote a tutorial some time back that should answer most of your questions:
J. R. Nagel, "A review and application of the finite-difference time-domain algorithm applied to the Schrödinger equation," ACES Journal, Vol. 24, No. 1, February 2009
I have some Matlab codes that run nicely for 1D systems. If you have experience with FDTD doing electromagnetics, it works great for quantum mechanics as well. I can post my codes if you're interested.
Basically, it just operates on the wavefunctions directly by splitting the derivatives up into finite differences. It is kind of similar to the Crank-Nicolson scheme, but not exactly. If you are familiar with FDTD from electromagnetic wave theory, then FDTD will be very intuitive when solving the Schrödinger equation.
The most straightforward finite difference method is fast and easy to understand, but it is not unitary in time, so probability is not conserved. Crank-Nicolson-Crout averages the forward and backward finite difference methods to produce a hybrid implicit/explicit method that is still pretty easy to understand and implement, and is unitary in time. This site explains the method well, provides pseudocode, and gives the relevant properties:
http://www.physics.utah.edu/~detar/phycs6730/handouts/crank_nicholson/crank_nicholson/ Note: There is a minus sign missing from the LHS of equation one at this link, which propagates throughout the page.
Where does the nonunitarity come from?
In a nutshell, solving the TDSE comes down to figuring out how to deal with
$| \psi(x,t)\rangle = e^{-iHt}|\psi(x,0)\rangle$
which contains a differential operator in an exponential.
Applying a forward finite difference turns the differential operator into a tridiagonal matrix (converting the real line into a grid) and the exponential into the first two terms of its Taylor series:
$e^{-iHt}\approx1-iHt $
This discretization and linearization is what gives rise to the nonunitarity. (You can show that the resulting matrix is not unitary by direct computation.) Combining the forward finite difference with the backward finite difference produces the approximation
$e^{-iHt}\approx \frac {1-\frac{1}{2} iHt} {1+\frac{1}{2} iHt} $
which, kindly, happens to be unitary (again you can show it by direct computation).
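A few lines of Python/NumPy make both "direct computations" explicit; the small free-particle grid Hamiltonian here (in units where $\hbar^2/2m = 1$) is an arbitrary illustrative choice:

```python
import numpy as np

# Small tridiagonal grid Hamiltonian H = -d^2/dx^2 (free particle, illustrative)
J, dx, t = 50, 0.1, 0.01
H = (2*np.eye(J) - np.eye(J, k=1) - np.eye(J, k=-1))/dx**2
I = np.eye(J)

U_taylor = I - 1j*t*H                                   # truncated Taylor series
U_cayley = np.linalg.solve(I + 0.5j*t*H, I - 0.5j*t*H)  # (1 - iHt/2)/(1 + iHt/2)

print(np.linalg.norm(U_taylor.conj().T @ U_taylor - I))  # O(t^2): not unitary
print(np.linalg.norm(U_cayley.conj().T @ U_cayley - I))  # ~1e-15: unitary
```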
• $\begingroup$ Thanks for the quick response. Could you provide more details on both those methods? How do they work, and why? Where does the nonunitarity come from? $\endgroup$ – Emilio Pisanty Feb 7 '14 at 22:51
• $\begingroup$ I would be happy to provide more detail, but to avoid missing my target audience, it would be useful to know how much education and experience you've had in each of the following background fields: Calculus, Differential Equations, Linear Algebra, Quantum Mechanics, and Numerical Methods (specifically Finite Difference Methods). $\endgroup$ – Wally Feb 8 '14 at 14:26
• $\begingroup$ Please assume as much as you need from standard physics and math (though references to the more complicated parts would probably help). My numerical methods are a bit rusty, though. $\endgroup$ – Emilio Pisanty Feb 8 '14 at 16:10
• $\begingroup$ Are there any differences between this and Kyle Kanos's answer? I mean, it's not obvious how to implement your last equation - as you've written it involves inverting a full operator - are you simply saying that the CN method is simply, through the solution of its tridiagonal equation, working out $(1+\frac{i}{2}\,H\,t)^{-1}\,(1+\frac{i}{2}\,H\,t)\,\psi$? Or is there a subtlety that I've missed? Actually you last equation is a good rendering insofar that it makes unitarity explicit for CN, a fact which is unclear in many descriptions of CN. $\endgroup$ – WetSavannaAnimal aka Rod Vance Feb 9 '14 at 12:17
• $\begingroup$ No, it is the same algorithm as given by Kyle Kanos. I just wrote it this way to give a different way of looking at it. I hoped easier to conceptualize - whereas his is easier to implement. Yes, you are ultimately just solving a tridiagonal equation. There was an old (1967) paper in AJP that I couldn't find earlier that describes it very well: ergodic.ugr.es/cphys/lecciones/SCHROEDINGER/ajp.pdf They used CN to produce 8mm film loops of gaussian wave packets scattering off various potentials. You can still find those film loops in many university physics demo libraries. $\endgroup$ – Wally Feb 9 '14 at 20:48
A few answers and comments here confusingly conflate the TDSE with a wave equation; perhaps it is a semantics issue, to some extent. The TDSE is the quantized version of the classical non-relativistic hamiltonian $$H=\frac{p^2}{2m} + V(x)= E.$$ With the rules $$p \rightarrow i\hbar\partial_x,\ \ E\rightarrow i\hbar\partial_t, \ \ x\rightarrow x,$$ (as discussed in chapter 1 of d'Espagnat, Conceptual foundations of quantum mechanics, https://philpapers.org/rec/ESPCFO), it therefore reads $$\left[-\frac{\hbar^2}{2m}\partial_{xx} + V(x)\right]\psi = i\hbar\partial_t\psi,$$ so it is clearly a diffusion-like equation. If one used the relativistic energy relation, which contains an $E^2$ term, then a wave-like equation such as $$ \partial_{xx}\psi = \partial_{tt}\psi +\dots $$ would obtain (for $V=0$ for simplicity), such as the Klein-Gordon equation. But that is, of course, a completely different matter.
Now, back to the TDSE, the obvious method is Crank-Nicolson, as has been mentioned, because it is a small-time expansion that conserves the unitarity of the evolution (FTCS, e.g., does not). For the 1-space-D case, it can be treated as a matrix iteration, reading $$ \vec{\psi}^{n+1}=({\cal I} + \frac{i\tau}{2\hbar}\tilde{H})^{-1} ({\cal I} - \frac{i\tau}{2\hbar}\tilde{H})\vec{\psi}^{n} $$ with ${\cal I}$ the identity matrix and $$ H_{jk}=(\tilde{H})_{jk}=\frac{-\hbar^{2}}{2m}\left[\frac{\delta_{j+1,k}+\delta_{j-1,k}-2\delta_{jk}}{h^{2}}\right]+V_{j}\delta_{jk}. $$ (Details e.g. in Numerical methods for physics, http://algarcia.org/nummeth/nummeth.html, by A. L. Garcia.)

As most clearly seen with periodic boundary conditions, a space-localized $\psi$ spreads out in time: this is expected, because the initial localized $\psi$ is not an eigenstate of the stationary Schrödinger equation, but a superposition thereof. The (classical massive free particle) eigenstate with fixed momentum (for the non-relativistic kinetic operator) is simply $\psi_s=e^{ikx}/\sqrt{L}$, i.e. fully delocalized as per the Heisenberg principle, with constant probability density $1/L$ everywhere (note that I am avoiding normalization issues with continuum states by having my particle live on a finite, periodically repeated line). Using C-N, the norm $$\int |\psi|^2 dx$$ is conserved, thanks to unitarity (this is not the case in other schemes, such as FTCS).

Incidentally, notice that starting from an energy expression such as $$cp=E$$ with $c$ fixed, you'd get $$ic\hbar\partial_x=i\hbar\partial_t,$$ i.e. the advection equation, which has no dispersion (if integrated properly, with Lax-Wendroff methods), and your wavepacket will not spread in time in that case. The quantum analog is the massless-particle Dirac equation.
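For completeness, a small Python/NumPy demonstration of this matrix iteration on a periodically repeated line; the grid, packet, and the unit choice $\hbar = 1$, $m = 1/2$ (so that $\hbar^2/2m = 1$) are my illustrative assumptions:

```python
import numpy as np

# Periodic grid Laplacian via wrapped off-diagonals (hbar = 1, m = 1/2)
J, L, tau = 200, 50.0, 0.01
h = L/J
x = h*np.arange(J)
lap = (2*np.eye(J) - np.roll(np.eye(J), 1, axis=0)
       - np.roll(np.eye(J), -1, axis=0))/h**2
V = np.zeros(J)
H = lap + np.diag(V)

# Space-localized packet (not an eigenstate, so it will spread)
psi = np.exp(1j*8*np.pi*x/L - (x - L/2)**2/10.0)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, dx=h))

Id = np.eye(J)
U = np.linalg.solve(Id + 0.5j*tau*H, Id - 0.5j*tau*H)   # C-N propagator

for n in range(500):
    psi = U @ psi

print(np.trapz(np.abs(psi)**2, dx=h))   # norm stays 1: C-N is unitary
```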
Quantum Mechanics, Randomness, and the Bible
By Dr. Christopher Plumberg
QM has taught us a great deal about God's creation. It would take me a long time to detail all of the experimentally sound and well-documented portions of what is typically considered QM. However, let me briefly sketch some of the major highlights. Quantum mechanics tells us that, on sufficiently small scales, particles stop behaving like particles and start behaving more like waves. In fact, it's not possible to predict (in general) what the result of any given experiment will be; rather, all one can speak of with any accuracy is the probability of one outcome vs. another. These probabilities can be calculated directly from the particle's wavefunction, whose behavior is described by the Schrödinger equation. This has bothered many people because, as I noted above, some try to interpret this state of affairs as implying that reality is random. However, nothing of the sort is really going on here: the probabilities themselves are still things that we can calculate and compare with experiment. Indeed, if reality were truly random, experiments and physics would become impossible, because there would be no way of predicting even the probability of a particular experimental outcome based only on one's knowledge of the experimental set-up. Rather, it is much more likely that we are simply asking the wrong question, because we are fundamentally ignorant of how the microscopic world truly works. Quantum mechanics is strange, but it doesn't mean we have to abandon our knowledge of the world around us or what God says in His Word.
Quantum mechanics is currently the best explanation we have for all sorts of microscopic physical phenomena, from pair creation and annihilation, to our understanding of why chemistry works the way it does, to scattering experiments (when particles are bounced off of one another), to condensed matter physics (which studies the properties of matter on mesoscopic or "intermediate" scales). Quantum mechanics also fixes a whole host of problems that were present in classical mechanics at the end of the 19th century, in addition to helping us understand emission and absorption lines in atomic and molecular spectra, radioactive decays, and allowing us to develop all sorts of new technologies in medicine, science, and industry. Quantum mechanics is a well-tested idea which treats particles in terms of their wavefunctions, and is perfectly acceptable from a Christian standpoint.
There are, however, many conclusions which are drawn from quantum mechanics which are not Biblical. I have already mentioned some of them, such as the view that reality is random or observer-created. We know that God created the universe (Genesis 1:1) and that He causes it to operate according to fixed, predictable, and regular laws of nature (Genesis 1:14-19, Jeremiah 33:23-26, Hebrews 1:3). So clearly these counter-Biblical conclusions are unacceptable, if they are to be taken in an "ultimate" sense. Similarly, the multiverse theory (or, at least, certain versions of it) requires us to believe that the universe has always existed, which also clearly contradicts the teachings of God's Word, and the multiple-universes (or "many-worlds") interpretation of QM suggests that God continued creating after He rested on the seventh day, which seems unlikely. It's important to emphasize here, however, that none of these counter-Biblical conclusions follow necessarily from the data; they are extrapolations and inferences which are frequently based on data, but are often more fanciful than truly meaningful or observable. Moreover, the term "quantum" is frequently misused to make ridiculous ideas sound scientific. So, for instance, "quantum healing" is not only bad science, it's also very likely to be something of demonic origins. Thus, one should be very cautious of anything described as following from "quantum physics," in spite of the fact that QM itself is perfectly acceptable for the Christian to try to use and understand.
Whatever the strange laws of quantum mechanics teach us, they certainly do not invalidate what we read in God's Word, nor do they undermine our notion of truth. Simply because waves (and wavefunctions) appear to provide a better description of microscopic phenomena than does a purely particle-based theory does not mean that somehow certainty can no longer hold in our macroscopic world. And even though the outcomes of quantum mechanical measurements are difficult (or impossible) for us to predict, this in no way prevents the God who is sovereign over creation from ruling and remaining completely in control of His creation (Psalm 103:19), and knowing in advance the outcome of any particular measurement.
Finally, some Christians have tried to use quantum mechanics to make sense of how God acts in the world. It is not clear that God only interacts with the world through the physical laws that He has created; in fact, there seem to be some clear scenarios (such as the resurrection of Christ) where God interacts with His creation in decidedly non-physical ways. Attempting to explain miracles and other Biblical concepts (such as the human ability to choose between two alternatives, like evil and good) in terms of something fundamentally physical like quantum mechanics is probably unwise, and risks arguing for the truths of Scripture on the basis of fallible, changing human knowledge. Although quantum mechanics is an excellent description of the microscopic world and poses no problem for the Christian, it is important to remember that some things in Scripture simply are not reducible to scientific laws. In short, we should treat it for what it is: a valuable and powerful tool for enabling us to understand and steward God's creation responsibly but something we should be careful to use within the boundaries set by the Word of God.
Published 5-25-2015
12:00 AM
So what are all 4 numbers and does the equality hold?
got it
they are equal
thank you Ted!
You're welcome. Always remember to play with examples!
ok, thanks . Have to go, bye :)
12:06 AM
Hey chat!
heya @Lucas
how you're doing @Ted?
Quite well, thanks, and you?
hlo folk
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
12:12 AM
Heya @Eric.
How is the dimension of a simplicial complex defined?
@user193319: Where did you use $k$?
Not sure. I just wrote the problem verbatim.
Oh, I see.
how goes
12:19 AM
@TedShifrin life's giving me a break, so better now. thanks :)
uni is the easiest part of going to uni
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Sounds too easy for you.
@user193319: So can you show there are no $k$-simplices for $k>n$?
you mean I might be underestimating it?
No, @Lucas. I think the opposite.
12:23 AM
No, but I can try. So, the dimension has to do with the number of $k$-simplices that can be fit into $X_n(\Bbb{Z})$?
well, thank you then. :)
@Lucas what’s the content of a math degree in Brazil
By dimension they mean the highest-dimensional simplex you can have.
@ÉricoMeloSilva hold on
Ah, okay. Thanks, I'll think about what you've said.
12:26 AM
@Erico, you can find it here
however, Unicamp's structure is designed to have less obligatory classes and give you more free time to do scientific research
iirc you must do at least 2 in a year after you get into the math course itself (the first 3 semesters are with the physics course so you can choose which one you like the most after this period)
cool cool
@Eric: So Korean or what for dinner tonight?
"period of time". lmao
non-native speakers suck. :p
maybe taiwanese porridge
@Lucas are you planning on trying to go to grad school?
my dream is to study at IMPA and Berkeley
12:33 AM
It takes a strong person to succeed at Berkeley.
so lift weights
yeah they kind of bury u w teaching
Oh, most places do that, @Eric. That wasn't my issue at all.
@LucasHenrique i decided not to apply to IMPA for various reasons but I was just at berkeley and met a few cool ppl
@TedShifrin i guess so, among my options it was by far the heaviest teaching load
Huge state school, so sure.
Princeton = spoiled brats.
12:34 AM
lots of people need lots of teachers
uh oh am i gonna be a spoiled brat
I think so.
@TedShifrin the lack of :P ending that sentence is unsettling
I was referring to undergrads, mostly, but, yeah.
I aim to unsettle.
12:37 AM
my impression of berkeley from visiting their grad open house was that it’s very easy to get lost there
Yeah, that was more the issue I was speaking to.
One has to be a very aggressive graduate student to make sure to get attention.
At UGA we pampered the grad students, by comparison.
seems to be a problem with large state schools...
And many of us on the faculty tried to make sure no one slipped through the cracks.
@TedShifrin that’s rlly good
12:51 AM
Anybody know if there's a question similar to this out there? or resources that may help https://math.stackexchange.com/q/3156156/445911
I checked around but couldn't find anything (among the many "recommended" posts) that quite fit.
Anyone who is familiar with Latex, can you tell me I get this?
When I do
$\frac{u_{max} - u_{min}}{x_{max} - x_{min}}$
Why does the numerator decide to be on the left instead of top?
Looks like you messed up somewhere, the first 'u' is not in math mode
No, you don't want \frac.
Maybe a copy/paste error? Sometimes copying things into a latex editor can mess it up.
When I removed the $, it started working...
12:56 AM
Weird. @TedShifrin why wouldn't you want \frac ? /curious
You want $a_b$, where $b$ has an overline over the expression.
@TedShifrin I think DCL wants the corrected version, where it is actually a fraction. Unless I'm mistaken
Presumably "can you tell me I get this?" should've been "can you tell me why I get this?"
Ah, I didn't try to psychoanalyze the English.
I just wanted the numerator to be on top of the denominator, instead of hanging around on the left side.
@DemCode: What are you trying to do?
Oh, OK. So you do want \frac.
12:59 AM
That should've read "why I get this". Typo!
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
It was just the fraction.
Interesting. I don't know why this isn't compiling.
Hm, weird. Are you editing it in stackexchange? Have a link to the question/answer?
1:03 AM
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
@DemCodeLines I still find it odd that the first u is not in math mode but the rest is... maybe if you figure out why that's occurring it may solve the problem
Yeah, that's not my problem, though.
Oh, I left out a {.
@abuchay: What is $A$?
How do you know the subgroup has to be free?
@ryan Yup, working on it
@TedShifrin sorry, $A$ is a proper subgroup of $\mathbb{Z}^n$.
So you need to prove it's free abelian.
1:09 AM
@DemCodeLines I'm not quite sure why it's not working for you. However, if you post the answer/question and put the link to it in here, someone will certainly propose an edit that fixes it for you.
Anyway, hope you get it resolved. I'm off now
I literally copied your text and pasted it, @DemCode, and that's what I got.
For mine, I put the subscripts in text mode, because I hate the math mode with text.
There was a \[ missing earlier somewhere in the document, but \] is in there later on. Adding that \[ fixed it (started working with $).
So one dollar sign was missing, basically.
@TedShifrin correct me if i am wrong; isn't $A$ a free abelian group since it is a subgroup of $\mathbb{Z}^n$ and it can be spanned by $\{e_i\}_{i \in I}$?
It is a theorem, @abuchay, that a subgroup of a free abelian group must be free. It is non-obvious.
1:21 AM
@TedShifrin Alright, i can prove that. Thanks.
Heya chat.
sup @Fargle
Nothing much, how have you been?
mostly good, agonizing over grad decisions but that’s not a real problem. hbu?
Wow, it's a @Fargle.
1:26 AM
Just hanging around. Been reading mostly.
Actual literature?
mr. shifrin
Q: why is the isomorphism $T_p(T_pM)\cong T_pM$ natural
You're just asking about $T_x(V) \cong V$ for any vector space $V$.
You have a global chart.
Yes, but how is that iso natural
How is it not?
1:32 AM
Doesn't it depend on the basis you use?
Hell no.
@TedShifrin No, math. :P
Blah, @Fargle :P
I do read other things. Just haven't lately.
1:33 AM
the iso literally looks like (p, v) maps to v
what could be more natural
The proof I know of $T_pV \cong V$ for a vector space $V$ begins with "take a basis of $V$..."
One definition of natural would be that if you have an iso $\phi\colon V\cong W$ it induces an iso $T_xV \cong T_{\phi(x)}W$?
Let's actually look at the definition of natural.
pulls out Mac Lane
For me, I'd show $T_p(V) \cong T_0(V) = V$.
As Eric suggests, I don't see why you need any basis.
@Fargle pshhhh nerd
1:37 AM
I suppose it depends on which definition of tangent space you use.
Which one are you using?
I use whichever one I feel like.
Here I'll use equivalence classes of curves.
@ÉricoMeloSilva :(
me too tho
Is it not natural for any definition? I don't see why it would matter.
All definitions are equivalent, maybe one is easier?
1:38 AM
I think you are using a naïve definition of "natural," i.e., independent of choices. But there is a definition more like the one I said above.
I.e., it corresponds right under morphisms.
Alright, but I just want to be sure that there is no possible way for my linearisation to be anything but the one I define it to be.
So I want there to be no choice whatsoever for the identification of $T_pM$ with the tangent space to the fibre.
However, it's not clear to me how or that I can do this.
I've made several remarks.
This is what "natural" was meaning to me.
Well, no one will protest that $T_p(V) \cong V$ is not natural. You should just learn it.
I am struggling to show it for myself.
1:43 AM
For four proper fractions $a, b, c, d$, X writes $a + b + c > 3(abc)^{1/3}$. Y also added that $a + b + c > 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive.
(a) Both X and Y are right but not Z.
(b) Only Z is right
(c) Only X is right
(d) Neither of them is absolutely right.
Translate curves through $p$ to curves through $0$.
Normally I would set up a chart which is just an isomorphism $V\cong\mathbb R^n$ (so pick a basis).
Someone please help me with this.
@TedShifrin so just take $\beta(t) := \gamma(t) - p$
Yes, of course.
@MrAP: I disqualify the question entirely on the grounds that "neither" in (d) is totally incorrect.
1:45 AM
Why is that?
Because "neither" refers to precisely two.
that’s what ted said about showing the natural isomorphism to T_0V
I was thinking it would be either (a) or (c).
You can't rule out $Y$ if $d=1$.
(a) seems more likely to me.
1:46 AM
"more likely" isn't mathematics.
So that shows that any curve through $p$ with tangent vector $v$ at $p$ corresponds to a curve through $0$ with tangent vector $v$ at $0$.
Then you have to give an example where they're not all positive.
So translation is an isomorphism
Oh, I can give lots of examples where they're not all positive.
Well, the first you suggested.
1:49 AM
BTW, @MrAP, don't you know that the GM/AM inequality can be an equality sometimes?
Yes. I know.
Anyhow ... I'm out of here.
:( ted i luv u
@TedShifrin tchau tchau
@TedShifrin, will it be (d)?
If all variables are substituted with 1, the inequalities do not hold.
And I think the "neither" in (d) should be "none".
2:04 AM
@MrAP as Ted said, if it is (d), then you should be able to find a counterexample
It is 1
If all the variables are 1 then the inequalities do not hold.
>proper fraction
A proper fraction is one of the form $a/b$ where $a < b$ and $a,b\in\mathbb Z$
Oh I forgot about that.
Its 1/2
If all the variables are 1/2, then the inequalities do not hold.
I think.
Do better than just think: calculate.
In fact, if all the variables are equal proper fractions then the inequalities do not hold.
I had calculated before writing that.
The first inequality becomes $3/2>6$ which is false and the second inequality becomes $3/2>3(16)^{1/3}$ which is false.
2:17 AM
well you are right that the inequalities do not hold, but your calculation is wrong.
Set $a=b=c=d$ and re-write your inequalities.
Oops. I am totally out of my mind. It would be $3/2>3/2$ and $3/2>3/(16)^{1/3}$.
1/16 in the last cube root
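A quick numeric check of that substitution (a sketch; note that with $a=b=c=d=1/2$ only X's strict inequality actually fails, which is already enough to rule out (a) and (c)):

```python
a = b = c = d = 0.5
print(a + b + c, 3 * (a * b * c) ** (1 / 3))      # 1.5 vs 1.5: X's strict inequality fails
print(a + b + c, 3 * (a * b * c * d) ** (1 / 3))  # 1.5 vs ~1.19: Y's inequality still holds here
```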
So its option (d).
If you say so.
What do you mean by that?
2:24 AM
I mean that if that is what you have shown, then that's what it is.
Not to cast you into doubt.
You don't say so?
No, I haven't looked at the question
Like the other options
2:38 AM
Ok. Fine.
1 hour later…
3:57 AM
Hi, what is meant when you say "a group $H$ embeds into $Aut(C_{p}^2)=GL(2,p)$", where $C_{p}^2$ denotes the (non-cyclic) abelian group of order $p^2$ and $p$ is a prime?
Can we think about the structure of H by this, if we know the order of H too?
4:40 AM
Well, you know (or can find) the order of $GL(2,p)$, so the order of $H$ would need to divide that.
4:55 AM
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the possible structures of $H$.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
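As a sanity check on that order formula, here is a small brute-force count (a sketch; the function name is mine):

```python
from itertools import product

def gl2_order(p):
    """Count invertible 2x2 matrices over F_p by checking det != 0 mod p."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

for p in (2, 3, 5, 7):
    assert gl2_order(p) == p * (p + 1) * (p - 1) ** 2
```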
5:13 AM
Please help me with this question. Thanks a lot in advance.
3 hours later…
8:39 AM
Man, how did it become so popular to group together windows from the same application that desktops stopped offering the option to not do that?
9:20 AM
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
9:33 AM
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
9:52 AM
So why does the Schrödinger equation look the way it does
(Also anyone else notice that ö looks like a shocked face)
$\vec F_{\Bbb Q^2}=(x,y)$
$\vec F : \Bbb Q^2 \to \Bbb Q^2$
10:51 AM
I am having a hard time understanding 'unique up to associates' in the definition of unique factorization domain.
I am using this definition:
please help!
You want to say that $\Bbb Z$ is a UFD even though $6=(-2)(-3)=2\times 3$ has two distinct factorizations, because they are different in a way which shouldn't matter
Hi @Balarka
Hey @Alessandro @Balarka et al
Hey @ÍgjøgnumMeg
11:04 AM
Hi @ÍgjøgnumMeg! How are you?
Hey @Alessandro, I'm alright, working atm and trying to prepare for my interview lol
How about you?
When will it be?
This monday :(
@ÍgjøgnumMeg I finished my exams so now I'm free until the new semester begins in April, I'm looking forward to it
Nice :)
11:06 AM
Oh, that's soon! What are you interviewing for?
You're ditching alg. geo?
It's for a full DAAD Scholarship
which I've been shortlisted for
@Alessandro Teach me geometric group theory lmao
@AlessandroCodenotti Thank you
@ÍgjøgnumMeg Yeah, I wanted to go to the lectures without doing the exam at the end, but it overlaps with a logic course
now that ur free i mean
11:07 AM
Ah, that's great, good luck for the scholarship!
Good luck from me as well
I think you already know more GGT than me :P
I don't know anything precisely
I'm free as in having no lectures and no exams, I'm still preparing a seminar talk for the next semester
Oh what's it on
11:09 AM
Kunen's inconsistency theorem and the large cardinals I1, I2, I3 (they ran out of cool names I guess) on the verge of inconsistency
runs away
Have you looked at the proof that word problem is solvable for hyperbolic groups?
Sounds up your alley
Kunen's inconsistency basically says that there is no nontrivial elementary embedding of the universe $V$ into itself
Yep, you have to show that they admit Dehn presentations
What's a Dehn presentation
11:16 AM
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
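That algorithm is short enough to write down. A minimal Python sketch (my own names; generators are lowercase letters, inverses uppercase; free reduction is applied between rewrites since a replacement can create cancelling pairs):

```python
def free_reduce(w):
    """Cancel adjacent inverse pairs like 'aA' or 'Bb'."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def dehn_reduce(w, rules):
    """rules: pairs (u, v) with |u| > |v| and u = v in the group.
    Returns the fully reduced word; w represents the identity iff '' comes back."""
    w = free_reduce(w)
    while True:
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = free_reduce(w[:i] + v + w[i + len(u):])
                break
        else:
            return w
```

Each rewrite strictly shortens the word, so this terminates; the Dehn-presentation condition is exactly what guarantees that any nontrivial word representing the identity contains some $u_i$.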
Let me see if I can interpret the Dehn presentation in terms of the van Kampen diagram or something.
It's some kind of cancellation condition; if my word in $F(S)$ is trivial in $G$ then there is some large subword $u_i$...
Well, if $w$ contains $u_i$ as a subword then in $G$ length of $w$ gets reduced
Because I just replace $u_i$ by $v_i$ in $w$, as $u_i = v_i$ in $G$.
Oh it's some shortening procedure for geodesics
Yeah the proof that hyperbolic groups have Dehn presentations is basically that you can take shortcuts with local geodesics but the details are quite ugly, they're in the paper I linked above
(also this is something I know nothing about but hyperbolic groups are automatic groups as well which is a much bigger class of groups with solvable word problem)
I'm leaving for lunch now, bye!
Cya! I'll think about this a bit
@AlessandroCodenotti This is equivalent to finding a geodesic representative for the word, right? The "final word" after all these reductions would be a length minimizer
11:35 AM
@Alessandro thank you :) I am just relearning a lot of the stuff from my dissertation so I can waffle about it
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
Say we take the pairs (0,100), (1,99), etc, (50,50)
Pairs of integers adding to 100
Let's do the sieve of Eratosthenes
We're left with (1,99), (3,97), (5,95), etc, (49,51)
Now 3
Notice that (1,99) dies ('cause of the 99). (3,97) doesn't die just because we don't sieve out the prime itself, but (7,93) dies and (9,91) dies
Does that make sense so far?
So we started with 51 pairs, then went to 25
and now we have (3,97), (5,95), (11,89), (17,83), etc, (47,53)
so that's 9 pairs
I realize 1 never gets sieved out by Eratosthenes, so if instead of 100 we had some number that was 1 more than a prime, it would never get sieved out
@BalarkaSen Am I making sense?
I'm doing Goldbach here
So we have a lower bound of 50(1/2)((3-2)/3) pairs surviving I think
(which works 'cause that's 8⅓ and we have 9 pairs)
Then what, we sieve out the 5s
(5,95) and (35,65) die and I think everything else stays
We're left with (3,97), (11,89), (17,83), (23,77), (29,71), (41,59), and (47,53)
That's 7 pairs
and we can get a lower bound of 50(1/2)((3-2)/3)((5-2)/5)=5 which still works
@BalarkaSen So I have someone on Reddit who is really bad at English (and communicating in general) who is convinced he has a proof of the Goldbach conjecture
and his argument is basically, do the sieve of Eratosthenes on the pairs $(k,2n-k)$, give a lower bound for the surviving pairs at each step, and show that the lower bound is never 0
like I started doing above
So I haven't really thought it 100% through yet
but right now my task is to figure out why this doesn't work
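For what it's worth, the pair sieve is easy to simulate, so the counts above can at least be checked. A rough Python sketch (my own names; a pair survives unless one of its entries is a proper multiple of a sieving prime, matching the convention above of not sieving out the prime itself):

```python
def surviving_pairs(n, primes):
    """Pairs (k, 2n - k) left after sieving proper multiples of the given primes."""
    pairs = [(k, 2 * n - k) for k in range(n + 1)]
    for p in primes:
        pairs = [(a, b) for a, b in pairs
                 if all(x == p or x % p != 0 for x in (a, b))]
    return pairs

print(len(surviving_pairs(50, [2, 3])))   # 9 pairs, as counted above
print(surviving_pairs(50, [2, 3, 5]))     # the 7 pairs listed above
```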
2 hours later…
1:39 PM
hi everyone, a quick question: is there a name for the "inverse" of the commutant? i.e. the problem of having a sub algebra and trying to find if it can be seen as the commutant of another algebra?
@BalarkaSen OK so I see the main problem:
once we've sieved out all the multiples of 2,3,…,$p_{k-1}$ in the range $[0,n]$, we assume that roughly $1/p_k$ of the remainder are multiples of $p_k$
and, through numerical experiments, that seems to be true
but the problem is, the numbers in the range $[0,n]$ that aren't multiples of 2,3,…$p_{k-1}$ aren't necessarily evenly distributed
so it's theoretically possible that a much higher than expected proportion of them is a multiple of $p_k$
2:12 PM
I don't understand in the above proof why $p$ must be associate to one of the irreducibles occurring either in the factorization of $a$ or in the factorization of $b$.
Hi! I was looking for books on analysis and set theory. I saw this link but I'm not able to figure out which book this chapter is a part of. Does anyone know?
@ParasKhosla, sorry to derail the discussion. Which book are you using for complex analysis?
@Silent I'm not using any book. I took a course from Coursera offered by Wesleyan University.
I'm currently looking to research on what book to use. I found this chapter of a book which seems to be good but I can not seem to figure out which book it is
@Silent I have yet to find a really good book on complex analysis
@anakhro Is there no link for hardcopy? maybe on Amazon or elsewhere
2:25 PM
But Visual Complex Analysis is probably one of the better.
@ParasKhosla afaik these are strictly online notes.
Oh. There's a different feeling while studying from an actual book. I guess I'll have to look for other resources. Although thanks a lot
You could just print it out.
Probably cheaper than buying a book
@ParasKhosla OK! Thank you very much for suggesting this! seems that it covers many topics
Yeah that's still an option
What? @Silent the course?
@anakhro in India, it is way cheaper to buy an Indian edition of a book, and better if you buy a used copy! :)
2:29 PM
@Silent I am sure there are budget printers who allow you to print even cheaper.
They have to make money off the books somehow.
@ParasKhosla yeah. it covers conformal mapping, residue theorem etc! the core and cool topics
Yeah it's really great. Anyways are you a math student in India? @Silent
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Oh cool @Silent
@ParasKhosla, so that book does not seem to be on either set theory or analysis! Which subject does it cover?
2:36 PM
Basic algebra
Yeah perhaps that abstract algebra.
rings and groups in further chapters
@anakhro oh!
@Silent did you do an undergrad in math?
@anakhro, it's my first time seeing graphs in an algebra text. I knew that graphs are closely related to (equivalent to?) topology. How does algebra relate to graphs?
@anakhro well, not formally. (I regret that.)
Graphs are largely combinatorial objects. Here they seem to be just adding an additional topic of graphs at the end, despite it not really pertaining that closely to the rest of the material.
That being said, algebra sees many applications in graph theory.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
2:43 PM
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Can a triangle on a surface
@anakhro Thank you very much for all this information!
@anakhro How did you guess that? :)
automorphism groups are basically where group theory developed from.
Automorphisms, and then symmetries.
Graph theory makes heavy use of both of these concepts.
@ParasKhosla, how can i increase speed in coursera video? can i see those vids in youtube, as we can do in edx?
Can you tell the geometry of a surface based on how
A triangle is curved on it
2:50 PM
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
am i right?
3:04 PM
hi chat
@Ultradark you will have to define "geometry".
Hi @vzn
hi all really jazzed about this new discovery, anyone into dynamical systems theory? has a lot of neat new math, looks breakthru or even revolutionary :)
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
but... yeah, idk why i did this.
3:15 PM
@chandx the kernel is all degree 1 polynomials, si.
But you can just calculate it easily.
Let $P(x) = a + bx + cx^2 + dx^3$, find $P''$ and $P'''$, and then find $F(P)$.
Then set your expression for $F(P)$ equal to $0$, and solve for $a,b,c,d$.
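That direct computation is easy to hand to a CAS. A sympy sketch (names mine):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
P = a + b*x + c*x**2 + d*x**3                       # general element of R_3[x]
F = x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3)

# F(P) must vanish identically, so every coefficient of x must be zero
print(sp.solve(sp.Poly(F, x).all_coeffs(), [c, d], dict=True))  # [{c: 0, d: 0}]
```

So $a$ and $b$ stay free, and the kernel is indeed $\{ax + b\}$.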
@Silent Yes, you can speed up the video: look at the panel on the bottom right; it has the tool to modify the speed.
Thank you so much!
|
e326abb0f5593cb8 | The Imaginary Energy Space
Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you immediately read the more recent exposé on the matter presented here, which you can find by clicking on the provided link. In addition, I see the dark force has amused himself by removing some material even here!
Original post:
Intriguing title, isn’t it? You’ll think this is going to be highly speculative and you’re right. In fact, I could also have written: the imaginary action space, or the imaginary momentum space. Whatever. It all works ! It’s an imaginary space – but a very real one, because it holds energy, or momentum, or a combination of both, i.e. action. 🙂
So the title is either going to deter you or, else, encourage you to read on. I hope it’s the latter. 🙂
In my post on Richard Feynman’s exposé on how Schrödinger got his famous wave equation, I noted an ambiguity in how he deals with the energy concept. I wrote that piece in February, and we are now May. In-between, I looked at Schrödinger’s equation from various perspectives, as evidenced by the many posts that followed that February post, which I summarized on my Deep Blue page, where I note the following:
1. The argument of the wavefunction (i.e. θ = ωt – kx = [E·t – p·x]/ħ) is just the proper time of the object that’s being represented by the wavefunction (which, in most cases, is an elementary particle—an electron, for example).
2. The 1/2 factor in Schrödinger’s equation (∂ψ/∂t = i·(ħ/2m)·∇²ψ) doesn’t make all that much sense, so we should just drop it. Writing ∂ψ/∂t = i·(ħ/m)·∇²ψ (i.e. Schrödinger’s equation without the 1/2 factor) does away with the mentioned ambiguities and, more importantly, avoids obvious contradictions.
Both remarks are rather unusual—especially the second one. In fact, if you’re not shocked by what I wrote above (Schrödinger got something wrong!), then stop reading—because then you’re likely not to understand a thing of what follows. 🙂 In any case, I thought it would be good to follow up by devoting a separate post to this matter.
The argument of the wavefunction as the proper time
Frankly, it took me quite a while to see that the argument of the wavefunction is nothing but the t’ = (t − v∙x)/√(1−v²) formula that we know from the Lorentz transformation of spacetime. Let me quickly give you the formulas (in natural units, so c = 1):

x’ = (x − v∙t)/√(1−v²) and t’ = (t − v∙x)/√(1−v²)
In fact, let me be precise: the argument of the wavefunction also has the particle’s rest mass m_0 in it. That mass factor (m_0) appears in it as a general scaling factor, so it determines the density of the wavefunction both in time as well as in space. Let me jot it down:
ψ(x, t) = a·e^{−i·(m_v·t − p∙x)} = a·e^{−i·[(m_0/√(1−v²))·t − (m_0·v/√(1−v²))∙x]} = a·e^{−i·m_0·(t − v∙x)/√(1−v²)}
Huh? Yes. Let me show you how we get from θ = ωt – kx = [E·t – p·x]/ħ to θ = m_v·t − p∙x. It’s really easy. We first need to choose our units such that the speed of light and Planck’s constant are numerically equal to one, so we write: c = 1 and ħ = 1. So now the 1/ħ factor no longer appears.
[Let me note something here: using natural units does not do away with the dimensions: the dimensions of whatever is there remain what they are. For example, energy remains what it is, and so that’s force times distance: 1 joule = 1 newton·meter (1 J = 1 N·m). Likewise, momentum remains what it is: force times time (or mass times velocity). Finally, the dimension of the quantum of action doesn’t disappear either: it remains the product of force, distance and time (N·m·s). So you should distinguish between the numerical value of our variables and their dimension. Always! That’s where physics is different from algebra: the equations actually mean something!]
Now, because we’re working in natural units, the numerical value of both c and c² will be equal to 1. It’s obvious, then, that Einstein’s mass-energy equivalence relation reduces from E = m_v·c² to E = m_v. You can work out the rest yourself – noting that p = m_v·v and m_v = m_0/√(1−v²). Done! For a more intuitive explanation, I refer you to the above-mentioned page.
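Written out, that last step looks like this (a short derivation, with c = ħ = 1 as above):

```latex
\theta = E\,t - p\,x = m_v\,t - m_v v\,x
       = \frac{m_0}{\sqrt{1-v^2}}\,t - \frac{m_0\,v}{\sqrt{1-v^2}}\,x
       = m_0\,\frac{t - v\,x}{\sqrt{1-v^2}} = m_0\,t'
```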
So that’s for the wavefunction. Let’s now look at Schrödinger’s wave equation, i.e. that differential equation of which our wavefunction is a solution. In my introduction, I bluntly said there was something wrong with it: that 1/2 factor shouldn’t be there. Why not?
What’s wrong with Schrödinger’s equation?
When deriving his famous equation, Schrödinger uses the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v²/2, and that’s why – after all the complicated turns – that 1/2 factor is there. There are many reasons why that factor doesn’t make sense. Let me sum up a few.
[I] The most important reason is that de Broglie made it quite clear that the energy concept in his equations for the temporal and spatial frequency for the wavefunction – i.e. the ω = E/ħ and k = p/ħ relations – is the total energy, including rest energy (m_0), kinetic energy (m·v²/2) and any potential energy (V). In fact, if we just multiply the two de Broglie relations (aka the matter-wave equations) and use the old-fashioned v = f·λ relation (so we write E as E = ω·ħ = (2π·f)·(h/2π) = f·h, and p as p = k·ħ = (2π/λ)·(h/2π) = h/λ and, therefore, we have f = E/h and λ = h/p), we find that the energy concept that’s implicit in the two matter-wave equations is equal to E = m∙v², as shown below:
1. f·λ = (E/h)·(h/p) = E/p
2. v = f·λ ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v) ⇒ E = m·v²
Huh? E = m∙v²? Yes. Not E = m∙c² or m·v²/2 or whatever else you might be thinking of. In fact, this E = m∙v² formula makes a lot of sense in light of the two following points.
Skeptical note: You may – and actually should – wonder whether we can use that v = f·λ relation for a wave like this, i.e. a wave with both a real (cos(−θ)) as well as an imaginary component (i·sin(−θ)). It’s a deep question, and I’ll come back to it later. But… Yes. It’s the right question to ask. 😦
[II] Newton told us that force is mass times acceleration. Newton’s law is still valid in Einstein’s world. The only difference between Newton’s and Einstein’s world is that, since Einstein, we should treat the mass factor as a variable as well. We write: F = m_v·a = [m_0/√(1−v²)]·a. This formula gives us the definition of the newton as a force unit: 1 N = 1 kg·(m/s)/s = 1 kg·m/s². [Note that the 1/√(1−v²) factor – i.e. the Lorentz factor (γ) – has no dimension, because v is measured as a relative velocity here, i.e. as a fraction between 0 and 1.]
Now, you’ll agree the definition of energy as a force over some distance is valid in Einstein’s world as well. Hence, if 1 joule is 1 N·m, then 1 J is also equal to 1 (kg·m/s²)·m = 1 kg·(m²/s²), so this also reflects the E = m∙v² concept. [I can hear you mutter: that kg factor refers to the rest mass, no? No. It doesn’t. The kg is just a measure of inertia: as a unit, it applies to both m_0 as well as m_v. Full stop.]
Very skeptical note: You will say this doesn’t prove anything – because this argument just shows the dimensional analysis for both equations (i.e. E = m∙v² and E = m∙c²) is OK. Hmm… Yes. You’re right. 🙂 But the next point will surely convince you! 🙂
[III] The third argument is the most intricate and the most beautiful at the same time—not because it’s simple (like the arguments above) but because it gives us an interpretation of what’s going on here. It’s fairly easy to verify that Schrödinger’s equation, i.e. the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation (including the 1/2 factor to which I object), is equivalent to the following set of two equations:
1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ)
[In case you don’t see it immediately, note that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. However, here we have something like this: a + i·b = i·(c + i·d) = i·c + i²·d = −d + i·c (remember i² = −1).]
Now, before we proceed (i.e. before I show you what’s wrong here with that 1/2 factor), let us look at the dimensions first. For that, we’d better analyze the complete Schrödinger equation so as to make sure we’re not doing anything stupid here by looking at one aspect of the equation only. The complete equation, in its original form, is:
i·ħ·(∂ψ/∂t) = −(ħ²/2m)·∇²ψ + V·ψ
Notice that, to simplify the analysis above, I had moved the i and the ħ on the left-hand side to the right-hand side (note that 1/i = −i, so −(ħ²/2m)/(i·ħ) = i·ħ/2m). Now, the ħ² factor on the right-hand side is expressed in J²·s². Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-equivalence relation to express m in J/(m/s)² units. So our ħ²/2m coefficient is expressed in (J²·s²)/[J/(m/s)²] = J·m². Now we multiply that by that Laplacian operating on some scalar, which yields some quantity per square meter. So the whole right-hand side becomes some amount expressed in joule, i.e. the unit of energy! Interesting, isn’t it?
On the left-hand side, we have i and ħ. We shouldn’t worry about the imaginary unit because we can treat that as just another number, albeit a very special number (because its square is minus 1). However, in this equation, it’s like a mathematical constant and you can think of it as something like π or e. [Think of the magical formula: e^{iπ} = i² = −1.] In contrast, ħ is a physical constant, and so that constant comes with some dimension and, therefore, we cannot just do what we want. [I’ll show, later, that even moving it to the other side of the equation comes with interpretation problems, so be careful with physical constants, as they really mean something!] In this case, its dimension is the action dimension: J·s = N·m·s, so that’s force times distance times time. So we multiply that with a time derivative and we get joule once again (N·m·s/s = N·m = J), so that’s the unit of energy. So it works out: we have joule units both left and right in Schrödinger’s equation. Nice! Yes. But what does it mean? 🙂
Well… You know that we can – and should – think of Schrödinger’s equation as a diffusion equation – just like a heat diffusion equation, for example – but then one describing the diffusion of a probability amplitude. [In case you are not familiar with this interpretation, please do check my post on it, or my Deep Blue page.] But then we didn’t describe the mechanism in very much detail, so let me try to do that now and, in the process, finally explain the problem with the 1/2 factor.
The missing energy
There are various ways to explain the problem. One of them involves calculating group and phase velocities of the elementary wavefunction satisfying Schrödinger’s equation but that’s a more complicated approach and I’ve done that elsewhere, so just click the reference if you prefer the more complicated stuff. I find it easier to just use those two equations above:
The argument is the following: if our elementary wavefunction is equal to e^{i(kx − ωt)} = cos(kx−ωt) + i∙sin(kx−ωt), then it’s easy to prove that this pair of conditions is fulfilled if, and only if, ω = k²·(ħ/2m). [Note that I am omitting the normalization coefficient in front of the wavefunction: you can put it back in if you want. The argument here is valid, with or without normalization coefficients.] Easy? Yes. Check it out. The time derivative on the left-hand side is equal to:
∂ψ/∂t = −iω·e^{i(kx − ωt)} = −iω·[cos(kx − ωt) + i·sin(kx − ωt)] = ω·sin(kx − ωt) − iω·cos(kx − ωt)
And the second-order derivative on the right-hand side is equal to:
∇²ψ = ∂²ψ/∂x² = i²·k²·e^{i(kx − ωt)} = −k²·cos(kx − ωt) − i·k²·sin(kx − ωt)
So the two equations above are equivalent to writing:
1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·(ħ/2m)·sin(kx − ωt)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) ⇔ −ω·cos(kx − ωt) = −k²·(ħ/2m)·cos(kx − ωt)
So both conditions are fulfilled if, and only if, ω = k²·(ħ/2m). You’ll say: so what? Well… We have a contradiction here—something that doesn’t make sense. Indeed, the second of the two de Broglie equations (always look at them as a pair) tells us that k = p/ħ, so we can re-write the ω = k²·(ħ/2m) condition as:
ω/k = v_p = k²·(ħ/2m)/k = k·ħ/(2m) = (p/ħ)·(ħ/2m) = p/2m ⇔ p = 2m·v_p
You’ll say: so what? Well… Stop reading, I’d say. That p = 2m·v_p doesn’t make sense—at all! Nope! In fact, if you thought that the E = m·v² is weird—which, I hope, is no longer the case by now—then… Well… This p = 2m·v_p equation is much weirder. In fact, it’s plain nonsense: this condition makes no sense whatsoever. The only way out is to remove the 1/2 factor, and to re-write the Schrödinger equation as I wrote it, i.e. with an ħ/m coefficient only, rather than an (1/2)·(ħ/m) coefficient.
Huh? Yes.
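In fact, the condition above is easy to check by machine. A small sympy sketch (my code, not the original post’s; it assumes the elementary wavefunction e^{i(kx − ωt)} and the equation with the 1/2 factor):

```python
import sympy as sp

x, t, k, w, hbar, m = sp.symbols('x t k w hbar m', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))

# Schrödinger with the 1/2 factor: dpsi/dt = I*(hbar/2m)*d2psi/dx2
residual = sp.diff(psi, t) - sp.I * (hbar / (2 * m)) * sp.diff(psi, x, 2)
print(sp.solve(sp.simplify(residual / psi), w))   # [hbar*k**2/(2*m)]
```

With k = p/ħ and ω = E/ħ, that dispersion relation is exactly the v_p = p/2m result derived above.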
As mentioned above, I could do those group and phase velocity calculations to show you what rubbish that 1/2 factor leads to – and I’ll do that eventually – but let me first find yet another way to present the same paradox. Let’s simplify our life by choosing our units such that c = ħ = 1, so we’re using so-called natural units rather than our SI units. Our mass-energy equivalence then becomes: E = m·c² = m·1² = m. [Note that switching to natural units doesn’t do anything to the physical dimensions: a force remains a force, a distance remains a distance, and so on. So we’d still measure energy and mass in different but equivalent units. Hence, the equality sign should not make you think mass and energy are actually the same: energy is energy (i.e. force times distance), while mass is mass (i.e. a measure of inertia). I am saying this because it’s important, and because it took me a while to make these rather subtle distinctions.]
Let’s now go one step further and imagine a hypothetical particle with zero rest mass, so m_0 = 0. Hence, all its energy is kinetic and so we write: K.E. = m_v·v²/2. Now, because this particle has zero rest mass, the slightest acceleration will make it travel at the speed of light. In fact, we would expect it to travel at the speed of light, so m_v = m_c and, according to the mass-energy equivalence relation, its total energy is, effectively, E = m_v = m_c. However, we just said its total energy is kinetic energy only. Hence, its total energy must be equal to E = K.E. = m_c·c²/2 = m_c/2. So we’ve got only half the energy we need. Where’s the other half? Where’s the missing energy? Quid est veritas? Is its energy E = m_c or E = m_c/2?
It’s just a paradox, of course, but one we have to solve. Of course, we may just say we trust Einstein’s E = m·c² formula more than the kinetic energy formula, but that answer is not very scientific. 🙂 We’ve got a problem here and, in order to solve it, I’ve come to the following conclusion: just because of its sheer existence, our zero-mass particle must have some hidden energy, and that hidden energy is also equal to E = m·c²/2. Hence, the kinetic and the hidden energy add up to E = m·c² and all is alright.
Huh? Hidden energy? I must be joking, right?
Well… No. Let me explain. Oh. And just in case you wonder why I bother to try to imagine zero-mass particles. Let me tell you: it’s the first step towards finding a wavefunction for a photon and, secondly, you’ll see it just amounts to modeling the propagation mechanism of energy itself. 🙂
The hidden energy as imaginary energy
I am tempted to refer to the missing energy as imaginary energy, because it’s linked to the imaginary part of the wavefunction. However, it’s anything but imaginary: it’s as real as the imaginary part of the wavefunction. [I know that sounds a bit nonsensical, but… Well… Think about it. And read on!]
Back to that factor 1/2. As mentioned above, it also pops up when calculating the group and the phase velocity of the wavefunction. In fact, let me show you that calculation now. [Sorry. Just hang in there.] It goes like this.
The de Broglie relations tell us that the k and the ω in the e^{i(kx − ωt)} = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction (i.e. the spatial and temporal frequency respectively) are equal to k = p/ħ, and ω = E/ħ. Let’s now think of that zero-mass particle once more, so we assume all of its energy is kinetic: no rest energy, no potential! So… If we now use the kinetic energy formula E = m·v²/2 – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p²/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our e^{i(kx − ωt)} = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction as:
v_g = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂[p²/2m]/∂p = 2p/2m = p/m = v
[Don’t tell me I can’t treat m as a constant when calculating ∂ω/∂k: I can. Think about it.]
Fine. Now the phase velocity. For the phase velocity of our e^{i(kx − ωt)} wavefunction, we find:
v_p = ω/k = (E/ħ)/(p/ħ) = E/p = (p²/2m)/p = p/2m = v/2
So that’s only half of v: it’s the 1/2 factor once more! Strange, isn’t it? Why would we get a different value for the phase velocity here? It’s not like we have two different frequencies here, do we? Well… No. You may also note that the phase velocity turns out to be smaller than the group velocity (as mentioned, it’s only half of the group velocity), which is quite exceptional as well! So… Well… What’s the matter here? We’ve got a problem!
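The same split, computed mechanically (a minimal sympy sketch):

```python
import sympy as sp

k, hbar, m = sp.symbols('k hbar m', positive=True)
omega = hbar * k**2 / (2 * m)     # the dispersion relation found above

v_group = sp.diff(omega, k)       # hbar*k/m    -> p/m  = v
v_phase = omega / k               # hbar*k/(2m) -> p/2m = v/2
print(sp.simplify(v_group / v_phase))   # 2
```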
What’s going on here? We have only one wave here—one frequency and, hence, only one k and ω. However, on the other hand, it’s also true that the e^{i(kx − ωt)} wavefunction gives us two functions for the price of one—one real and one imaginary: e^{i(kx − ωt)} = cos(kx−ωt) + i∙sin(kx−ωt). So the question here is: are we adding waves, or are we not? It’s a deep question. If we’re adding waves, we may get different group and phase velocities, but if we’re not, then… Well… Then the group and phase velocity of our wave should be the same, right? The answer is: we are and we aren’t. It all depends on what you mean by ‘adding’ waves. I know you don’t like that answer, but that’s the way it is, really. 🙂
Let me make a small digression here that will make you feel even more confused. You know – or you should know – that the sine and the cosine function are the same except for a phase difference of 90 degrees: sinθ = cos(θ − π/2). Now, at the same time, multiplying something with i amounts to a rotation by 90 degrees, as shown below.
Hence, in order to sort of visualize what our e^{i(kx − ωt)} function really looks like, we may want to super-impose the two graphs and think of something like this:
You’ll have to admit that, when you see this, our formulas for the group or phase velocity, or our v = f·λ relation, do no longer make much sense, do they? 🙂
Having said that, that 1/2 factor is and remains puzzling, and there must be some logical reason for it. For example, it also pops up in the Uncertainty Relations:
Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2
So we have ħ/2 in both, not ħ. Why do we need to divide the quantum of action here? How do we solve all these paradoxes? It’s easy to see how: the apparent contradiction (i.e. the different group and phase velocity) gets solved if we’d use the E = m∙v² formula rather than the kinetic energy E = m∙v²/2. But then… What energy formula is the correct one: E = m∙v² or m∙c²? Einstein’s formula is always right, isn’t it? It must be, so let me postpone the discussion a bit by looking at a limit situation. If v = c, then we don’t need to make a choice, obviously. 🙂 So let’s look at that limit situation first. So we’re discussing our zero-mass particle once again, assuming it travels at the speed of light. What do we get?
Well… Measuring time and distance in natural units, so c = 1, we have:
E = m∙c² = m and p = m∙c = m, so we get: E = m = p
Wow! E = m = p! What a weird combination, isn’t it? Well… Yes. But it’s fully OK. [You tell me why it wouldn’t be OK. It’s true we’re glossing over the dimensions here, but natural units are natural units and, hence, the numerical value of c and c² is 1. Just figure it out for yourself.] The point to note is that the E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^{i(kx − ωt)} wavefunction, we get:
v_g = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂p/∂p = 1
So that’s the velocity of our zero-mass particle (remember: the 1 stands for c here, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get:
v_p = ω/k = (E/ħ)/(p/ħ) = E/p = p/p = 1
Same result! No factor 1/2 here! Isn’t that great? My ‘hidden energy theory’ makes a lot of sense. 🙂
However, if there’s hidden energy, we still need to show where it’s hidden. 🙂 Now that question is linked to the propagation mechanism that’s described by those two equations, which – leaving the 1/2 factor out – now simplify to:
1. Re(∂ψ/∂t) = −(ħ/m)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (ħ/m)·Re(∇²ψ)
Propagation mechanism? Yes. That’s what we’re talking about here: the propagation mechanism of energy. Huh? Yes. Let me explain in another separate section, so as to improve readability. Before I do, however, let me add another note—for the skeptics among you. 🙂
Indeed, the skeptics among you may wonder whether our zero-mass particle wavefunction makes any sense at all, and they should do so for the following reason: if x = 0 at t = 0, and it’s traveling at the speed of light, then x(t) = t. Always. So if E = m = p, the argument of our wavefunction becomes E·t – p·x = E·t – E·t = 0! So what’s that? The proper time of our zero-mass particle is zero—always and everywhere!?
Well… Yes. That’s why our zero-mass particle – as a point-like object – does not really exist. What we’re talking about is energy itself, and its propagation mechanism. 🙂
While I am sure that, by now, you’re very tired of my rambling, I beg you to read on. Frankly, if you got as far as you have, then you should really be able to work yourself through the rest of this post. 🙂 And I am sure that – if anything – you’ll find it stimulating! 🙂
The imaginary energy space
Look at the propagation mechanism for the electromagnetic wave in free space, which (for c = 1) is represented by the following two equations:
1. ∂B/∂t = –∇×E
2. ∂E/∂t = ∇×B
[In case you wonder, these are Maxwell’s equations for free space, so we have no stationary nor moving charges around.] See how similar this is to the two equations above? In fact, in my Deep Blue page, I use these two equations to derive the quantum-mechanical wavefunction for the photon (which is not the same as that hypothetical zero-mass particle I introduced above), but I won’t bother you with that here. Just note the so-called curl operator in the two equations above (∇×) can be related to the Laplacian we’ve used so far (∇²). It’s not the same thing, though: for starters, the curl operator operates on a vector quantity, while the Laplacian operates on a scalar (including complex scalars). But don’t get distracted now. Let’s look at the revised Schrödinger’s equation, i.e. the one without the 1/2 factor:
∂ψ/∂t = i·(ħ/m)·∇²ψ
On the left-hand side, we have a time derivative, so that’s a flow per second. On the right-hand side we have the Laplacian and the i·ħ/m factor. Now, written like this, Schrödinger’s equation really looks exactly the same as the general diffusion equation, which is written as: ∂φ/∂t = D·∇²φ, except for the imaginary unit, which makes it clear we’re getting two equations for the price of one here, rather than one only! 🙂 The point is: we may now look at that ħ/m factor as a diffusion constant, because it does exactly the same thing as the diffusion constant D in the diffusion equation ∂φ/∂t = D·∇²φ, i.e.:
1. As a constant of proportionality, it quantifies the relationship between both derivatives.
2. As a physical constant, it ensures the dimensions on both sides of the equation are compatible.
So the diffusion constant for Schrödinger’s equation is ħ/m. What is its dimension? That’s easy: (N·m·s)/(N·s²/m) = m²/s. [Remember: 1 N = 1 kg·m/s².] But then we multiply it with the Laplacian, so that’s something expressed per square meter, so we get something per second on both sides.
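In SI terms, the same dimension check takes one line:

```latex
\left[\frac{\hbar}{m}\right]
  = \frac{\mathrm{J\cdot s}}{\mathrm{kg}}
  = \frac{(\mathrm{kg\cdot m^2/s^2})\cdot \mathrm{s}}{\mathrm{kg}}
  = \mathrm{m^2/s}
```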
Of course, you wonder: what per second? Not sure. That’s hard to say. Let’s continue with our analogy with the heat diffusion equation so as to try to get a better understanding of what’s being written here. Let me give you that heat diffusion equation here. Assuming the heat per unit volume (q) is proportional to the temperature (T) – which is the case when expressing T in degrees Kelvin (K), so we can write q as q = k·T – we can write it as:
∂q/∂t = k·(∂T/∂t) = κ·∇²T
So that’s structurally similar to Schrödinger’s equation, and to the two equivalent equations we jotted down above. So we’ve got T (temperature) in the role of ψ here—or, to be precise, in the role of ψ ‘s real and imaginary part respectively. So what’s temperature? From the kinetic theory of gases, we know that temperature is not just a scalar: temperature measures the mean (kinetic) energy of the molecules in the gas. That’s why we can confidently state that the heat diffusion equation models an energy flow, both in space as well as in time.
Let me make the point by doing the dimensional analysis for that heat diffusion equation. The time derivative on the left-hand side (∂T/∂t) is expressed in K/s (Kelvin per second). Weird, isn’t it? What’s a Kelvin per second? Well… Think of a Kelvin as some very small amount of energy in some equally small amount of space—think of the space that one molecule needs, and its (mean) energy—and then it all makes sense, doesn’t it?
However, in case you find that a bit difficult, just work out the dimensions of all the other constants and variables. The constant in front (k) makes sense of it. That coefficient (k) is the (volume) heat capacity of the substance, which is expressed in J/(m³·K). So the dimension of the whole thing on the left-hand side (k·∂T/∂t) is J/(m³·s), so that’s energy (J) per cubic meter (m³) and per second (s). Nice, isn’t it? What about the right-hand side? On the right-hand side we have the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on T. The Laplacian operator, when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it’s operating on T, so the dimension of ∇²T is K/m². Again, that doesn’t tell us very much (what’s the meaning of a Kelvin per square meter?) but we multiply it by the thermal conductivity (κ), whose dimension is W/(m·K) = J/(m·s·K). Hence, the dimension of the product is the same as the left-hand side: J/(m³·s). So that’s OK again, as energy (J) per cubic meter (m³) and per second (s) is definitely something we can associate with an energy flow.
In fact, we can play with this. We can bring k from the left- to the right-hand side of the equation, for example. The dimension of κ/k is m²/s (check it!), and multiplying that by K/m² (i.e. the dimension of ∇²T) gives us some quantity expressed in Kelvin per second, and so that’s the same dimension as that of ∂T/∂t. Done!
In fact, we’ve got two different ways of writing Schrödinger’s diffusion equation. We can write it as ∂ψ/∂t = i·(ħ/m)·∇²ψ or, else, we can write it as ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ. Does it matter? I don’t think it does. The dimensions come out OK in both cases. However, interestingly, if we do a dimensional analysis of the ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ equation, we get joule on both sides. Interesting, isn’t it? The key question, of course, is: what is it that is flowing here?
I don’t have a very convincing answer to that, but the answer I have is interesting—I think. 🙂 Think of the following: we can multiply Schrödinger’s equation with whatever we want, and then we get all kinds of flows. For example, if we multiply both sides with 1/(m²·s) or 1/(m³·s), we get an equation expressing the energy conservation law, indeed! [And you may want to think about the minus sign of the right-hand side of Schrödinger’s equation now, because it makes much more sense now!]
We could also multiply both sides with s, so then we get J·s on both sides, i.e. the dimension of physical action (J·s = N·m·s). So then the equation expresses the conservation of action. Huh? Yes. Let me re-phrase that: then it expresses the conservation of angular momentum—as you’ll surely remember that the dimension of action and angular momentum are the same. 🙂
And then we can divide both sides by a meter, so then we get N·s on both sides, so that’s momentum. So then Schrödinger’s equation embodies the momentum conservation law.
Isn’t it just wonderful? Schrödinger’s equation packs all of the conservation laws! 🙂 The only catch is that it flows back and forth from the real to the imaginary space, using that propagation mechanism as described in those two equations.
Now that is really interesting, because it does provide an explanation – as fuzzy as it may seem – for all those weird concepts one encounters when studying physics, such as the tunneling effect, which amounts to energy flowing from the imaginary space to the real space and, then, inevitably, flowing back. It also allows for borrowing time from the imaginary space. Hmm… Interesting! [I know I still need to make these points much more formally, but… Well… You kinda get what I mean, don’t you?]
To conclude, let me re-baptize my real and imaginary ‘space’ by referring to them to what they really are: a real and imaginary energy space respectively. Although… Now that I think of it: it could also be real and imaginary momentum space, or a real and imaginary action space. Hmm… The latter term may be the best. 🙂
Isn’t this all great? I mean… I could go on and on—but I’ll stop here, so you can freewheel around yourself. For example, you may wonder how similar that energy propagation mechanism actually is to the propagation mechanism of the electromagnetic wave? The answer is: very similar. You can check how similar in one of my posts on the photon wavefunction or, if you’d want a more general argument, check my Deep Blue page. Have fun exploring! 🙂
So… Well… That’s it, folks. I hope you enjoyed this post—if only because I really enjoyed writing it. 🙂
OK. You’re right. I still haven’t answered the fundamental question.
So what about the 1/2 factor?
What about that 1/2 factor? Did Schrödinger miss it? Well… Think about it for yourself. First, I’d encourage you to further explore that weird graph with the real and imaginary part of the wavefunction. I copied it below, but with an added 45º line—yes, the green diagonal. To make it somewhat more real, imagine you’re the zero-mass point-like particle moving along that line, and we observe you from our inertial frame of reference, using equivalent time and distance units.
[Illustration: the real (cosine) and imaginary (sine) components of the wavefunction, with the green 45º diagonal added]
So we’ve got that cosine (cosθ) varying as you travel, and we’ve also got the i·sinθ part of the wavefunction going while you’re zipping through spacetime. Now, THINK of it: the phase velocity of the cosine bit (i.e. the red graph) contributes as much to your lightning speed as the i·sinθ bit, doesn’t it? Should we apply Pythagoras’ basic r² = x² + y² Theorem here? Yes: the velocity vector along the green diagonal is going to be the sum of the velocity vectors along the horizontal and vertical axes. So… That’s great.
Yes. It is. However, we still have a problem here: it’s the velocity vectors that add up—not their magnitudes. Indeed, if we denote the velocity vector along the green diagonal as u, then we can calculate its magnitude as:
u = √(u²) = √[(v/2)² + (v/2)²] = √[2·(v²/4)] = √[v²/2] = v/√2 ≈ 0.7·v
So, as mentioned, we’re adding the vectors, but not their magnitudes. We’re somewhat better off than we were in terms of showing that the phase velocities of those sine and cosine components add up—somehow, that is—but… Well… We’re not quite there.
Fortunately, Einstein saves us once again. Remember we’re actually transforming our reference frame when working with the wavefunction? Well… Look at the diagram below (for which I thank the author).
[Illustration: Lorentz transformation of the x and t axes (special relativity)]
In fact, let me insert an animated illustration, which shows what happens when the velocity goes up and down from (close to) −c to +c and back again. It’s beautiful, and I must credit the author here too. It sort of speaks for itself, but please do click the link as the accompanying text is quite illuminating. 🙂
The point is: for our zero-mass particle, the x’ and t’ axis will rotate into the diagonal itself which, as I mentioned a couple of times already, represents the speed of light and, therefore, our zero-mass particle traveling at c. It’s obvious that we’re now adding two vectors that point in the same direction and, hence, their magnitudes just add without any square root factor. So, instead of u = √[(v/2)² + (v/2)²], we just have v/2 + v/2 = v! Done! We solved the phase velocity paradox! 🙂
So… I still haven’t answered that question. Should that 1/2 factor in Schrödinger’s equation be there or not? The answer is, obviously: yes. It should be there. And as for Schrödinger using the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v²/2… Well… What other mass concept would he use? I probably got a bit confused with Feynman’s exposé – especially this notion of ‘choosing the zero point for the energy’ – but then I should probably just re-visit the thing and adjust the language here and there. But the formula is correct.
Thinking it all through, the ħ/2m constant in Schrödinger’s equation should be thought of as the reciprocal of m/(ħ/2). So what we’re doing basically is measuring the mass of our object in units of ħ/2, rather than units of ħ. That makes perfect sense, if only because it’s ħ/2, rather than ħ, that is the factor which appears in the Uncertainty Relations Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2. In fact, in my post on the wavefunction of the zero-mass particle, I noted its elementary wavefunction should use the m = E = p = ħ/2 values, so it becomes ψ(x, t) = a·e^{−i∙[(ħ/2)∙t − (ħ/2)∙x]/ħ} = a·e^{−i∙[t − x]/2}.
Isn’t that just nice? 🙂 I need to stop here, however, because it looks like this post is becoming a book. Oh—and note that nothing what I wrote above discredits my ‘hidden energy’ theory. On the contrary, it confirms it. In fact, the nice thing about those illustrations above is that it associates the imaginary component of our wavefunction with travel in time, while the real component is associated with travel in space. That makes our theory quite complete: the ‘hidden’ energy is the energy that moves time forward. The only thing I need to do is to connect it to that idea of action expressing itself in time or in space, cf. what I wrote on my Deep Blue page: we can look at the dimension of Planck’s constant, or at the concept of action in general, in two very different ways—from two different perspectives, so to speak:
1. [Planck’s constant] = [action] = N∙m∙s = (N∙m)∙s = [energy]∙[time]
2. [Planck’s constant] = [action] = N∙m∙s = (N∙s)∙m = [momentum]∙[distance]
Hmm… I need to combine that with the idea of the quantum vacuum, i.e. the mathematical space that’s associated with time and distance becoming countable variables…. In any case. Next time. 🙂
Before I sign off, however, let’s quickly check if our a·e^{−i∙[t − x]/2} wavefunction solves the Schrödinger equation:
• ∂ψ/∂t = −a·e^{−i∙[t − x]/2}·(i/2)
• ∇²ψ = ∂²[a·e^{−i∙[t − x]/2}]/∂x² = ∂[a·e^{−i∙[t − x]/2}·(i/2)]/∂x = −a·e^{−i∙[t − x]/2}·(1/4)
So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation becomes:
−a·e^{−i∙[t − x]/2}·(i/2) = −i·(ħ/[2·(ħ/2)])·a·e^{−i∙[t − x]/2}·(1/4)
⇔ 1/2 = 1/4 !?
The damn 1/2 factor. Schrödinger wants it in his wave equation, but not in the wavefunction—apparently! So what if we take the m = E = p = ħ solution? We get:
• ∂ψ/∂t = −a·i·e^{−i∙[t − x]}
• ∇²ψ = ∂²[a·e^{−i∙[t − x]}]/∂x² = ∂[a·i·e^{−i∙[t − x]}]/∂x = −a·e^{−i∙[t − x]}
So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation now becomes:
−a·i·e^{−i∙[t − x]} = −i·(ħ/[2·ħ])·a·e^{−i∙[t − x]}
⇔ 1 = 1/2 !?
We’re still in trouble! So… Was Schrödinger wrong after all? There’s no difficulty whatsoever with the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation:
• −a·e^(−i∙[t − x]/2)·(i/2) = −i·[ħ/(ħ/2)]·a·e^(−i∙[t − x]/2)·(1/4) ⇔ 1/2 = 1/2
• −a·i·e^(−i∙[t − x]) = −i·(ħ/ħ)·a·e^(−i∙[t − x]) ⇔ 1 = 1
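For those who want to double-check the algebra, here’s a quick symbolic verification: just a sketch, in natural units (ħ = 1), using the sympy library (the variable names are mine, nothing official):

```python
# Sketch: check that psi = a*exp(-i*m*(t - x)) solves the equation
# without the 1/2 factor, d(psi)/dt = i*(hbar/m)*d2(psi)/dx2, for both
# m = E = p = 1/2 (i.e. hbar/2) and m = E = p = 1 (i.e. hbar), with hbar = 1.
import sympy as sp

t, x = sp.symbols('t x', real=True)
a = sp.symbols('a', positive=True)

for m in (sp.Rational(1, 2), sp.Integer(1)):
    psi = a * sp.exp(-sp.I * m * (t - x))      # a*exp(-i*[E*t - p*x]) with E = p = m
    lhs = sp.diff(psi, t)                      # time derivative
    rhs = sp.I * (1 / m) * sp.diff(psi, x, 2)  # i*(hbar/m) times second space derivative
    print(f"m = {m}: solves equation? {sp.simplify(lhs - rhs) == 0}")  # True, True
```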
What these equations might tell us is that we should measure mass, energy and momentum in terms of ħ (and not in terms of ħ/2) but that the fundamental uncertainty is ± ħ/2. That solves it all. So the magnitude of the uncertainty is ħ but it separates not 0 and ± 1, but −ħ/2 and +ħ/2. Or, more generally, the following series:
…, −7ħ/2, −5ħ/2, −3ħ/2, −ħ/2, +ħ/2, +3ħ/2, +5ħ/2, +7ħ/2, …
Why are we not surprised? This series represents the energy values that a spin one-half particle can possibly have, and ordinary matter – i.e. all fermions – is composed of spin one-half particles.
To conclude this post, let’s see if we can get any indication on the energy concepts that Schrödinger’s revised wave equation implies. We’ll do so by just calculating the derivatives in the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation (i.e. the equation without the 1/2 factor). Let’s also not assume we’re measuring stuff in natural units, so our wavefunction is just what it is: a·e^(−i·[E·t − p∙x]/ħ). The derivatives now become:
• ∂ψ/∂t = −a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ)
• ∇²ψ = ∂²[a·e^(−i∙[E·t − p∙x]/ħ)]/∂x² = ∂[a·i·(p/ħ)·e^(−i∙[E·t − p∙x]/ħ)]/∂x = −a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ)
So the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation now becomes:
−a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ) = −i·(ħ/m)·a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ) ⇔ E = p²/m = m·v²
It all works like a charm. Note that we do not assume stuff like E = m = p here. It’s all quite general. Also note that the E = p²/m formula closely resembles the kinetic energy formula one often sees: K.E. = m·v²/2 = m²·v²/(2m) = p²/(2m). We just don’t have the 1/2 factor in our E = p²/m formula, which is great—because we don’t want it! :-) Of course, if you’d add the 1/2 factor in Schrödinger’s equation again, you’d get it back in your energy formula, which would just be that old kinetic energy formula which gave us all these contradictions and ambiguities. 😦
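The same check can be run in full generality. Here’s a minimal symbolic sketch, again with sympy and with made-up variable names, showing that plugging the general wavefunction into the equation without the 1/2 factor forces E = p²/m:

```python
# Sketch: plug psi = a*exp(-i*(E*t - p*x)/hbar) into
# d(psi)/dt = i*(hbar/m)*d2(psi)/dx2 and solve for E.
import sympy as sp

t, x, E, p, m, hbar, a = sp.symbols('t x E p m hbar a', positive=True)
psi = a * sp.exp(-sp.I * (E * t - p * x) / hbar)

lhs = sp.simplify(sp.diff(psi, t) / psi)                         # -> -i*E/hbar
rhs = sp.simplify(sp.I * (hbar / m) * sp.diff(psi, x, 2) / psi)  # -> -i*p**2/(hbar*m)
print(sp.solve(sp.Eq(lhs, rhs), E))   # [p**2/m] -- no 1/2 factor
```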
Finally, and just to make sure: let me add that, when we wrote that E = m = p – like we did above – we mean their numerical values are the same. Their dimensions remain what they are, of course. Just to make sure you get that subtle point, we’ll do a quick dimensional analysis of that E = p²/m formula:
[E] = [p²/m] ⇔ N·m = N²·s²/kg = N²·s²/[N·s²/m] = N·m = joule (J)
So… Well… It’s all perfect. 🙂
Post scriptum: I revised my Deep Blue page after writing this post, and I think that a number of the ideas that I express above are presented more consistently and coherently there. In any case, the missing energy theory makes sense. Think of it: any oscillator involves both kinetic and potential energy, and together they add up to twice the average kinetic (or potential) energy. So why not here? When everything is said and done, our elementary wavefunction does describe an oscillator. 🙂
Sunday, July 6, 2014
Why Jesus died many times for our sins
St. Augustine was sure that Jesus died just once for our sins. However, Jesus died not only in our particular universe but also in many other parallel universes that are as real as ours.
Let’s explore the chain of reasoning behind this claim. One assumption is that whether a particular parallel universe exists falls within the field of astrophysics, not theology nor logic.
Astrophysics’ well-accepted Big Bang theory with eternal inflation implies a multiverse containing an unlimited number of parallel universes obeying the same scientific laws as in our particular universe. These other universes (which the physicist Max Tegmark calls Type 1 universes) are distant parts of physical reality. They are not abstract objects. Some contain flesh and blood beings.
Parallel universes are not parallel to anything. They are very similar to what David Lewis called possible worlds, but they aren’t the same because his possible worlds must be spatiotemporally disconnected from each other.
I cannot state specific criteria for transuniverse identity, but we do need the assumption that, in a universe, personal identity (whatever it is) supervenes on the physical realm. That is, a person can’t change without something physical changing. It is also reasonable to require that in any parallel universe in which Jesus exists he has Mary and Joseph as parents.
The claim that Jesus in our universe is identical to Jesus in another universe does conflict with the intuitively plausible metaphysical principle that a physical object is not wholly in two places at once. This principle is useful to accept in our ordinary experience, but it is not accepted in contemporary physics. The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once. This is why physicists prefer to say the nucleus of a hydrogen atom is surrounded by an electron cloud rather than by an electron. In the double-slit interference experiment, a single particle goes through two slits at the same time. So, the metaphysical principle should not be used a priori to refute our claim about the transuniverse identity of Jesus.
Our universe is the product of our Big Bang that occurred 13.8 billion years ago. It is approximately that part of physical reality we can observe, which is an expanding sphere with the Earth at the center, having a radius of 13.8 billion light years.
Our universe once was a tiny bit of explosively inflating material. The energy causing the inflation was transformed into a dense gas of expanding hot radiation. This expansion has never stopped. But with expansion came cooling, and this allowed individual material particles to condense from the cooling radiation and eventually to clump into atoms and stars and then into Jesus.
The other Type 1 parallel universes have their own Big Bangs, but they are currently not observable from Earth. However, they are expanding and might eventually penetrate each other. But, they might not. It all depends on whether inflation of dark energy is creating intervening space among the universes faster than the universes can expand toward each other. Scientists don’t have a clear understanding of which is the case.
Why trust the Big Bang theory with eternal inflation? Is it even scientific, or is it mere metaphysical speculation? The crude answer is that the theory has no better competitors, and it has been indirectly tested successfully. Its testable implications are, for example, that the results of measuring cosmic microwave-background radiation reaching Earth should have certain specific quantitative features. These features have been discovered—some only in the last five years. The theory also implies a multiverse of parallel universes having our known laws of science but perhaps different histories. If we accept a theory for its testable implications, then it would be a philosophical mistake not to accept its other implications.
One other important assumption being made is that the cosmic microwave-background experiments have not detected any overall curvature in our universe because our universe is in fact not curved. Our universe being curved but finite is also consistent with all our observations. Similarly, if you are standing on a very large globe, it can look flat to you. If our 3-D universe is finite but curved like the surface of a 4-D hypersphere, then space would be extremely large with a very small curvature, but there would be only a finite number of parallel universes, and the argument about Jesus would break down. The most common assumption now among astrophysicists is that our universe is in fact infinite, the multiverse is infinite, and matter is approximately uniformly distributed throughout the multiverse. As Max Tegmark has pointed out, twenty years ago there were many astrophysicists opposed to parallel universes. They would say, “The idea is ridiculous, and I hate it.” Now, there are few opponents of parallel universes, and they say, “I hate it.”
Having established that there are infinitely many parallel universes with the same laws but perhaps different histories, let’s return to the issue of whether Jesus died in more than one of them. One implication of the Big Bang theory with eternal inflation is that some universes are exact duplicates of each other. Here is why. If you shuffle a deck of playing cards enough times, then eventually you will have duplicate orderings. The duplicate orderings are the same, not just “David Lewis counterparts.” Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates.
One controversial assumption used here is the holographic principle: Even if spacetime were continuous, it is effectively discrete or pixelated at the Planck level. This means that it can make no effective difference to anything if an object is at position x meters as opposed to position x + 10⁻³⁵ meters.
This completes the analysis of the chain of reasoning for why Jesus died more than once for our sins. Have you noticed any weak links?
Brad Dowden
Department of Philosophy
Sacramento State
1. Brad, this is very interesting and fun, thanks. I could talk to you about this all day, but I'll confine my question to the idea that a particle can not be in two places at once.
First, I wonder what it means to say that "The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once." If someone asks, "Is particle P wholly at location L?", the answer "To some extent" seems to mean the same thing as "No" (on the assumption that some extent is less than wholly). My understanding of the Schrödinger equation is that it tells us the probability that a particle is at any particular location, with the meaning of that statement varying depending on the resolution of the measurement problem you favor. Your remark seems to me to be most consistent with the Many Worlds Interpretation, where the probabilities represented in the Schrödinger equation may reflect the distinct worlds that actually exist. However, you do not make any reference to the Many Worlds Interpretation, and as far as I know, physicists are currently not at all sure what the relation is between the Multiverse and Many Worlds (though Sean Carroll thinks they could be the same:
The other question I have is whether we really need to go this route at all. Kant, I believe, would have said that the idea that an object can't be wholly in two places at once is an a priori intuition, and therefore necessarily the case. But he said the same thing about 3-D spacetime, and he was just wrong. So why not simply respond that this intuition can denied without contradiction, and since the denial fits a very good physical theory, we should deny it? Specifically, why not just say that a particle can be wholly present in only one position in a single universe, but it can be wholly present in multiple positions in multiple universes?
1. For some reason the link to Sean Carroll's piece didn't post.
2. Kant’s intuition can be denied without contradiction, and the denial fits well with current physical theory. But I wouldn’t want to draw your conclusion that “why not just say that a particle can be wholly present in only one position in a single universe.” What fits best with physical theory is that a particle can be in multiple locations in a single universe and also in multiple locations across universes. I mentioned quantum mechanics in my blog post just to make the point that all experts agree that a particle can be in two locations at once in our own universe, despite the violation of common sense. You are right that the Schrödinger equation “tells us the probability that a particle is at any particular location,” but don’t assume from this that there is a definite location it has. Schrödinger abandoned the idea that a particle has a definite location in our universe. Niels Bohr’s Copenhagen Interpretation says a particle is at a definite location “to some extent,” meaning the particle is not wholly at any place when unobserved. That qualification, “to some extent,” is what is special about a Copenhagen Interpretation. The Schrödinger equation tells us that particles are here and there at once; a particle is always in a “superposition” of here and there when it is not being observed. Bohr said, “No reality without observation!” I am not that much of an idealist and happen to believe the Copenhagen Interpretation is incorrect because I believe the wave function never collapses. This is the position of Hugh Everett. I was not promoting Everett’s Many Worlds Interpretation of quantum mechanics in my blog post, but it is a reasonable position, though still controversial. In the multiverse theory that I was promoting, the many parallel universes are far away; in the Many Worlds theory the universes are disconnected from our space and are neither near nor far. I think if Einstein were alive today he’d reject the Copenhagen Interpretation and go with the Many Worlds Interpretation.
3. Brad, thanks for that reply. There is a difference, don't you think, between the idea that a particle is not wholly at any place at one time and the idea that a particle is wholly at many places at once? The first formulation seems right to me, the second I have trouble grasping.
I like the Everett interpretation, too, especially because of the way it seems to take the mystery out of EPR, but I actually do not quite understand exactly how it interprets the Schrödinger equation. On the basis of what I know, it seems to me that it is a kind of hidden variable theory in which what we don't know is not the actual, definite location of the particle in a particular universe, but whether we occupy a universe in which the particle is definitely in this position or definitely in that one. In other words, on the Everett interpretation, there is no collapse of the wave function, so what the wave function tells us is the probability that we are in a certain kind of universe, but this seems compatible with the view that particles have definite locations in every universe. Is this wrong-headed, and can you shed any more light on this? (I know it is not central to your post.)
4. Randy, in quantum mechanics when talking about point particles such as electrons in a single universe, I believe it’s not helpful to emphasize a difference between not being wholly at any place at one time and being wholly at many places at once. That’s because a particle has no definite location (when it is not measured, according to the Copenhagen Interpretation). Now I used this example, with its implicit endorsement of the Copenhagen Interpretation, in order to suggest that physicists have for a long time been willing to say a particle can be in two places at once within an atom. However, I don’t endorse the Copenhagen Interpretation myself and prefer the Everett Interpretation, which describes the world as you say in your comments. The Everett many-worlds interpretation is compatible with the view that particles have definite locations in every universe, as you say. And, as you say, this removes the mystery from the EPR paradox (although it adds in mystery in another way—by introducing the unintuitive concept of parallel universes). That is the very reason why I commented that Einstein would approve of this interpretation over the Copenhagen interpretation if he were alive today. However, the Everett interpretation is also compatible with the claim that a particle can have two locations—that the particle can be wholly present in two places, namely having locations in two universes. In the last ten years, the Copenhagen Interpretation has fallen out of favor.
2. Thanks Brad for the thoughtful and fun post. I always learn something from you when you talk physics. I am not competent to comment on the physics so I will accept whatever you say on that, for the sake of argument. But I think the physics is a smokescreen. I avoid any conceptual or theological issues surrounding the nature of Jesus, the death of any god, child-sacrifice, etc. These too are not relevant, just like whether death is a singular or permanent event for Jesus or anybody is also tangential. I see two problems. (1) I think there is an implication failure, perhaps due to ambiguity. That the same person with counterparts in many (or even all) other worlds dies could be true, and yet it could be false that each one (or any one) dies many times. Possible worlds or parallel worlds talk doesn’t even support the notion that Jesus dies even once in every world where he exists. (2) Some kind of quantifier shift error threatens. The general point you raise, that some guy died many times, is possible, sort of like how it is possible that the universe has undergone an eternal series of crunches and expansions. However, this fails to support any particular Jesus-story or Big-Bang story. It does not entail that anybody actually died many times in any given world. So “(there exists a) Jesus (who) died more than once for our sins” could be false even if “all Jesuses die in any world where any Jesus exists”. Again, the latter claim is false, because there is a possible world or parallel universe where he does not die, or some worlds where his counterpart does not. So I suspect that your presumption is false, but again I don’t get the physics: “Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates.” OK, I am not sure what this proves, but isn’t it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates? To presume this, I think, is to presume that whatever is possible is inevitable. It was dubious for Nietzsche to assume this in his doctrine of Eternal Recurrence and it is also dubious to presume it here. In short, Jesus died (or really did not) in this world, once, and that’s it. If you want to run worlds in parallel or series, the problem of ambiguity remains. I do not presume that names (proper or otherwise) are rigid designators, so this could be part of my problem with understanding your argument.
1. Scott, you’ve made some very interesting comments. I agree with some and disagree with some others.
You said, “isn’t it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates?” I believe the answer is “no.” You might shuffle a deck of cards an infinite number of times and still never get it back to its original order. But if you shuffle it a very large, finite number of times you are absolutely sure of getting two orderings that are the same: there are only 52! possible orderings, so by the pigeonhole principle, 52! + 1 shuffles must produce at least one repeat. You don’t even need to shuffle it an infinite number of times.
You are right that it’s mathematically possible there won’t be any duplicates of our universe even if an actually infinite number of parallel universes are generated. However, if you start shuffling with the deck in a certain order, then as you shuffle more and more, the probability of getting it back to the original order gets higher and higher and approaches one in the limit of an infinite number of shuffles. This is an implication of a theorem in probability theory, assuming random shuffles. I’d bet my life that you’ll get the deck back to its original order eventually. So, to speak epistemologically, I’d say I know you’ll produce the duplicate. Ditto for there being multiple Jesuses in the multiverse, assuming random generation of parallel universes. The assumption I left out in my original posting was that the generation of parallel universes is random.
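To make the shuffling point concrete, here is a toy simulation (my own illustration, not part of the original exchange): with a small deck the repeat arrives almost immediately, because the state space is finite.

```python
# Toy illustration: with a finite number of possible orderings, random
# shuffles must eventually repeat. A 5-card deck has only 5! = 120
# orderings, so a repeated ordering typically shows up within ~15 shuffles.
import random

deck = list(range(5))
seen = set()
shuffles = 0
while True:
    random.shuffle(deck)
    shuffles += 1
    order = tuple(deck)
    if order in seen:
        break
    seen.add(order)
print(f"first repeated ordering after {shuffles} shuffles (120 possible)")
```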
In most parallel universes there won’t be a Jesus, nor even homo sapiens. I hope you agree with me that solid historical evidence establishes that there was a Jesus in our universe about two millennia ago. In my blog I chose to talk about Jesus merely as an attention-getter. I could have made the same points talking about Abraham Lincoln.
3. Randy, here’s another thought about your recommendation that we say a particle is wholly present at one place in one universe. In the two-slit diffraction experiment, if you fire particles very slowly at a target, then we can show that the particle goes through both slits and interferes with “itself.” You wouldn't want to say the particle wasn’t “wholly present” when it went through the left slit. It's not like the particle's left half went through the left slit, right? But at the same time the particle was also going through the right slit, even though there was exactly one particle fired at the slits. So, the particle was wholly present in two places at once.
4. Brad, do we want to use the term 'particle' here? I thought the point here is that when, say, an electron interferes with itself it is because it is not behaving as a particle at all, but as a wave. If it were behaving as a particle, then, by definition, it would have to pass through one slit or the other, right?
Is there a difference between how Bohr and Everett account for this experiment? As I understand it, Bohr says that the act of measurement collapses the wave function, and forces the entity to behave as a particle. But on the Many Worlds interpretation there is no collapse of the wave function, so why the difference when it is measured and when it is not? Why doesn't the entity continue to behave as a wave?
5. Randy, you are raising deep issues about the philosophy of quantum mechanics. Yes, it is better not to use the word “particle” when we are talking about wave interference. However, even if we were to stick to the classical Copenhagen Interpretation, its principle of complementarity allows the two-slit experiment to be considered either as a wave or as a particle experiment but not at the same time.
So, let’s use the word “particle.” When we consider it as a particle experiment, then the particle must go through both slits simultaneously, yet hit the screen behind it at ONE place. It is wholly in two places at once as it goes through the screen. Richard Feynman said this is the essence of quantum weirdness.
You asked, “on the Many Worlds interpretation there is no collapse of the wave function, so why the difference when it is measured and when it is not?” One answer is that measurement produces quantum decoherence.
You then asked, “Why doesn’t the entity continue to behave as a wave?” The answer is that it does.
My blog post is basically reporting on the views of Max Tegmark from his 2014 book Our Mathematical Universe. He is a leader of the quantum decoherence idea. According to Tegmark, the reason why there is such a big difference between measured and unmeasured particles is quantum decoherence or mixing with lots of other particles, not intrusion of consciousness as the Copenhagen people believe.
According to Niels Bohr’s Copenhagen Interpretation of quantum mechanics, particles behave strangely only because they are unmeasured by a conscious being. In Schrödinger’s thought experiment in which there’s a 50-50 chance of the release of cyanide gas within ten minutes of the room being sealed, the Copenhagen people say Schrödinger’s cat is both alive and dead in the room because it is now ten minutes later and no conscious being has yet looked into the room and become aware of the situation. But then someone looks in and the cat is, say, still alive, and this looking collapses the wave function and that is why we never observe macroscopic objects in quantum superposition. Observing always causes collapse.
Wrong, says Tegmark. Consciousness is not what collapses the wave function. It never collapses, and consciousness is unimportant. Instead what is important about measurement removing the weirdness is that the object measured gets entangled with the many objects in the measuring tools. Seeing requires bouncing photon objects off the measured object, which destroys quantum superposition. This destruction via getting entangled with many other objects is what Tegmark calls quantum decoherence. According to Tegmark, the reason why we never see macroscopic objects such as cows and people in two places at once is not because they are macroscopic, and not because they are observed, as the Copenhagen Interpretation hypothesizes, but rather because it is too hard to isolate them from other particles and prevent decoherence. I do not understand this process very well, but the point seems to be that it is an object’s interaction with other particles that destroys its quantum superposition via the process of quantum decoherence.
But Tegmark’s interpretation of quantum mechanics is not yet standard, so in my original post I did not mention it.
6. Brad, thanks, that's very helpful. Charles Seife's book Decoding the Universe has an approving chapter on Tegmark's decoherence explanation. For him, the key conceptual move is to think of information as something that exists in the world, and measurement as the transfer of information from one place to another. Once you do that, there is no problem thinking of nature itself as constantly making measurements. Do you recommend Tegmark's book?
7. Randy, yes Tegmark's book is very clear and interesting. I'll have to take a look at Seife's book some day.
8. Brad: (This is Cliff, now in the La-La land of retirement.) Interesting as the multiple universe theory is, isn't it the case that there is no way to empirically confirm it? Can we make the jump from: Quantum theory is well-confirmed for our universe and quantum theory implies there are multiple universes, therefore there must be multiple universes? To me that seems to stretch the idea of empirical confirmation too far.
9. Cliff, I worry about all those questions, too, and am not yet convinced of the claim that Jesus died many times for our sins. You have a fine pragmatic attitude toward the issue. Since we can’t make predictions about other universes, that is something to worry about.
If there were no way to empirically confirm the claim that there exist alternative universes, then I wouldn’t believe it either, but there are ways to empirically confirm it—indirectly—because it provides good explanations of observations even if it can’t provide predictions that can be tested. You would say this indirect confirmation is too indirect and it stretches the idea of empirical confirmation too far. Proponents of alternative universes say it is time for science to change and accept this stretching.
Here are some relevant quotations from Leonard Susskind in his 2006 book The Cosmic Landscape. “On the theoretical side, an outgrowth of inflationary theory called Eternal Inflation is demanding that the world be a megaverse, full of pocket universes that have bubbled up out of inflating space, like bubbles in an uncorked bottle of champagne." (p. 21)
He calls alternative universes “pocket universes,” and he calls the Level I Multiverse the “megaverse.”
"There is very little doubt that we are embedded in a vastly bigger megaverse." (21-2)
“But certainly the critics are correct that in practice, for the foreseeable future, we are stuck in our own pocket with no possibility of directly observing other ones. Like quark theory, the confirmation will not be direct and will rely on a great deal of theory.” (196)
“As for rigid philosophical rules, it would be the height of stupidity to dismiss a possibility just because it breaks some philosopher’s dictum about falsifiability. … Just as generals always fight the last war, philosophers are always parsing the last scientific revolution.” (196)
Morse potential
From Wikipedia, the free encyclopedia
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the QHO (quantum harmonic oscillator) because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions such as the interaction between an atom and a surface. Due to its simplicity (only three fitting parameters), it is not used in modern spectroscopy. However, its mathematical form inspired the MLR (Morse/Long-range) potential, which is the most popular potential energy function used for fitting spectroscopic data.
Potential energy function
The Morse potential (blue) and harmonic oscillator potential (green). Unlike the energy levels of the harmonic oscillator potential, which are evenly spaced by ħω, the Morse potential level spacing decreases as the energy approaches the dissociation energy. The dissociation energy De is larger than the true energy required for dissociation D0 due to the zero point energy of the lowest (v = 0) vibrational level.
The Morse potential energy function is of the form

V(r) = De·(1 − e^(−a(r − re)))²

Here r is the distance between the atoms, re is the equilibrium bond distance, De is the well depth (defined relative to the dissociated atoms), and a controls the 'width' of the potential (the smaller a is, the larger the well). The dissociation energy of the bond can be calculated by subtracting the zero point energy E(0) from the depth of the well. The force constant of the bond can be found by Taylor expansion of V(r) around r = re to the second derivative of the potential energy function, from which it can be shown that the parameter a is

a = √(ke/(2De)),

where ke is the force constant at the minimum of the well.
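As a quick numerical sketch of this form (the HCl-like parameter values below are illustrative assumptions, not taken from this article):

```python
# Minimal sketch of the Morse form V(r) = De*(1 - exp(-a*(r - re)))**2,
# with the width parameter a derived from the force constant ke.
# The HCl-like numbers are illustrative only.
import numpy as np

def morse(r, De, a, re):
    """Morse potential: zero at r = re, approaching De as r -> infinity."""
    return De * (1.0 - np.exp(-a * (r - re)))**2

De = 4.618                      # well depth, eV (illustrative)
ke = 32.2                       # force constant, eV/Angstrom^2 (illustrative)
re = 1.275                      # equilibrium bond length, Angstrom (illustrative)
a = np.sqrt(ke / (2.0 * De))    # width parameter, ~1.87 per Angstrom here

r = np.linspace(0.8, 6.0, 500)
V = morse(r, De, a, re)
print(f"a = {a:.3f} 1/Angstrom, V(re) = {morse(re, De, a, re):.3f} eV, "
      f"V at large r -> {V[-1]:.3f} eV (well depth {De} eV)")
```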
Since the zero of potential energy is arbitrary, the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. When it is used to model the atom-surface interaction, the energy zero can be redefined so that the Morse potential becomes

V(r) = De·(1 − e^(−a(r − re)))² − De

which is usually written as

V(r) = De·(e^(−2a(r − re)) − 2e^(−a(r − re)))

where r is now the coordinate perpendicular to the surface. This form approaches zero at infinite r and equals −De at its minimum, i.e. r = re. It clearly shows that the Morse potential is the combination of a short-range repulsion term (the former) and a long-range attractive term (the latter), analogous to the Lennard-Jones potential.
Vibrational states and energies
Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods.[1] One approach involves applying the factorization method to the Hamiltonian.
To write the stationary states on the Morse potential, i.e. solutions Ψn(r) and En of the following Schrödinger equation:

(−(ħ²/2m)·∂²/∂r² + V(r))·Ψn(r) = En·Ψn(r),

it is convenient to introduce the new variables:

x = a·r,  xe = a·re,  λ = √(2mDe)/(a·ħ),  εn = (2m/(a²·ħ²))·En.

Then, the Schrödinger equation takes the simple form:

(−∂²/∂x² + V(x))·Ψn(x) = εn·Ψn(x),  with  V(x) = λ²·(1 − e^(−(x − xe)))².
Its eigenvalues and eigenstates can be written as:[2]

εn = λ² − (λ − n − 1/2)²,  n = 0, 1, …, [λ − 1/2],

with [x] denoting the largest integer smaller than x.

Ψn(z) = Nn·z^(λ − n − 1/2)·e^(−z/2)·Ln^(2λ − 2n − 1)(z),

where z = 2λ·e^(−(x − xe)), Nn = [n!·(2λ − 2n − 1)/Γ(2λ − n)]^(1/2), and Ln^(α)(z) is a generalized Laguerre polynomial:

Ln^(α)(z) = (z^(−α)·e^z/n!)·(dⁿ/dzⁿ)[z^(n+α)·e^(−z)].
There also exists an important analytical expression for matrix elements of the coordinate operator, valid for m > n and N = λ − 1/2.[3]
The eigenenergies in the initial variables have the form:

En = hν₀·(n + 1/2) − [hν₀·(n + 1/2)]²/(4De),

where n is the vibrational quantum number and ν₀ has units of frequency; ν₀ is mathematically related to the particle mass, m, and the Morse constants via

ν₀ = (a/2π)·√(2De/m).

Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at hν₀, the energy between adjacent levels decreases with increasing n in the Morse oscillator. Mathematically, the spacing of Morse levels is

E(n+1) − E(n) = hν₀ − (n + 1)·(hν₀)²/(2De).

This trend matches the anharmonicity found in real molecules. However, this equation fails above some value of nₘ where E(nₘ + 1) − E(nₘ) is calculated to be zero or negative. Specifically,

nₘ = (2De − hν₀)/(hν₀),  integer part.

This failure is due to the finite number of bound levels in the Morse potential, and some maximum nₘ that remains bound. For energies above E(nₘ), all the possible energy levels are allowed and the equation for En is no longer valid.
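A short numerical sketch of these level formulas, continuing with illustrative HCl-like constants (assumed values, not from this article):

```python
# Sketch: Morse level energies En = h*nu0*(n + 1/2) - [h*nu0*(n + 1/2)]**2/(4*De),
# their shrinking spacing, and the index of the last bound level.
De = 4.618        # well depth, eV (illustrative)
h_nu0 = 0.371     # h*nu0, eV (illustrative, roughly 2990 cm^-1)

def E(n):
    x = h_nu0 * (n + 0.5)
    return x - x**2 / (4.0 * De)

n_max = int((2.0 * De - h_nu0) / h_nu0)   # integer part of the n_m formula
for n in range(3):
    print(f"E({n}) = {E(n):.4f} eV, spacing to next level = {E(n+1) - E(n):.4f} eV")
print(f"highest bound level: n_m = {n_max}")
```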
Below E(nₘ), En is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, the real molecular spectra are generally fit to the form¹

En/hc = ωe·(n + 1/2) − ωeχe·(n + 1/2)²,

in which the constants ωe and ωeχe can be directly related to the parameters for the Morse potential.

As is clear from dimensional analysis, for historical reasons the last equation uses spectroscopic notation in which ωe represents a wavenumber obeying E = hcω, and not an angular frequency given by E = ħω.
Morse/Long-range potential
An important extension of the Morse potential that made the Morse form very useful for modern spectroscopy is the MLR (Morse/Long-range) potential.[4] The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve. It has been used on N2,[5] Ca2,[6] KLi,[7] MgH,[8][9][10] several electronic states of Li2,[4][11][12][13][9][12] Cs2,[14][15] Sr2,[16] ArXe,[9][17] LiCa,[18] LiNa,[19] Br2,[20] Mg2,[21] HF,[22][23] HCl,[22][23] HBr,[22][23] HI,[22][23] MgD,[8] Be2,[24] BeH,[25] and NaH.[26] More sophisticated versions are used for polyatomic molecules.
References
• 1. CRC Handbook of Chemistry and Physics, ed. David R. Lide, 87th ed., Section 9, “Spectroscopic Constants of Diatomic Molecules”, pp. 9–82.
• Morse, P. M. (1929). "Diatomic molecules according to the wave mechanics. II. Vibrational levels". Phys. Rev. 34. pp. 57–64. Bibcode:1929PhRv...34...57M. doi:10.1103/PhysRev.34.57.
• Girifalco, L. A.; Weizer, G. V. (1959). "Application of the Morse Potential Function to cubic metals". Phys. Rev. 114 (3). p. 687. Bibcode:1959PhRv..114..687G. doi:10.1103/PhysRev.114.687.
• Shore, Bruce W. (1973). "Comparison of matrix methods applied to the radial Schrödinger eigenvalue equation: The Morse potential". J. Chem. Phys. 59 (12). p. 6450. Bibcode:1973JChPh..59.6450S. doi:10.1063/1.1680025.
• Keyes, Robert W. (1975). "Bonding and antibonding potentials in group-IV semiconductors". Phys. Rev. Lett. 34 (21). pp. 1334–1337. Bibcode:1975PhRvL..34.1334K. doi:10.1103/PhysRevLett.34.1334.
• Lincoln, R. C.; Kilowad, K. M.; Ghate, P. B. (1967). "Morse-potential evaluation of second- and third-order elastic constants of some cubic metals". Phys. Rev. 157 (3). pp. 463–466. Bibcode:1967PhRv..157..463L. doi:10.1103/PhysRev.157.463.
• Dong, Shi-Hai; Lemus, R.; Frank, A. (2001). "Ladder operators for the Morse potential". Int. J. Quantum Chem. 86 (5). pp. 433–439. doi:10.1002/qua.10038.
• Zhou, Yaoqi; Karplus, Martin; Ball, Keith D.; Bery, R. Stephen (2002). "The distance fluctuation criterion for melting: Comparison of square-well and Morse Potential models for clusters and homopolymers". J. Chem. Phys. 116 (5). pp. 2323–2329. doi:10.1063/1.1426419.
• I.G. Kaplan, in Handbook of Molecular Physics and Quantum Chemistry, Wiley, 2003, p207.
1. ^ F. Cooper, A. Khare, U. Sukhatme, Supersymmetry in Quantum Mechanics, World Scientific, 2001, Table 4.1
2. ^ Dahl, J.P.; Springborg, M. (1988). "The Morse Oscillator in Position Space, Momentum Space, and Phase Space" (PDF). J. Chem. Phys. 88: 4535. Bibcode:1988JChPh..88.4535D. doi:10.1063/1.453761.
3. ^ E. F. Lima and J. E. M. Hornos, "Matrix Elements for the Morse Potential Under an External Field", J. Phys. B: At. Mol. Opt. Phys. 38, pp. 815-825 (2005)
4. ^ a b Le Roy, Robert J.; N. S. Dattani; J. A. Coxon; A. J. Ross; Patrick Crozet; C. Linton (25 November 2009). "Accurate analytic potentials for Li2(X) and Li2(A) from 2 to 90 Angstroms, and the radiative lifetime of Li(2p)". Journal of Chemical Physics. 131 (20): 204309. Bibcode:2009JChPh.131t4309L. doi:10.1063/1.3264688.
5. ^ Le Roy, R. J.; Y. Huang; C. Jary (2006). "An accurate analytic potential function for ground-state N2 from a direct-potential-fit analysis of spectroscopic data". Journal of Chemical Physics. 125 (16): 164310. Bibcode:2006JChPh.125p4310L. doi:10.1063/1.2354502.
6. ^ Le Roy, Robert J.; R. D. E. Henderson (2007). "A new potential function form incorporating extended long-range behaviour: application to ground-state Ca2". Molecular Physics. 105 (5–7): 663–677. Bibcode:2007MolPh.105..663L. doi:10.1080/00268970701241656.
7. ^ Salami, H.; A. J. Ross; P. Crozet; W. Jastrzebski; P. Kowalczyk; R. J. Le Roy (2007). "A full analytic potential energy curve for the a3Σ+ state of KLi from a limited vibrational data set". Journal of Chemical Physics. 126 (19): 194313. Bibcode:2007JChPh.126s4313S. doi:10.1063/1.2734973.
8. ^ a b Henderson, R. D. E.; A. Shayesteh; J. Tao; C. Haugen; P. F. Bernath; R. J. Le Roy (4 October 2013). "Accurate Analytic Potential and Born–Oppenheimer Breakdown Functions for MgH and MgD from a Direct-Potential-Fit Data Analysis". The Journal of Physical Chemistry A. 117 (50): 131028105904004. Bibcode:2013JPCA..11713373H. doi:10.1021/jp406680r.
9. ^ a b c Le Roy, R. J.; C. C. Haugen; J. Tao; H. Li (February 2011). "Long-range damping functions improve the short-range behaviour of 'MLR' potential energy functions" (PDF). Molecular Physics. 109 (3): 435–446. Bibcode:2011MolPh.109..435L. doi:10.1080/00268976.2010.527304.
10. ^ Shayesteh, A.; R. D. E. Henderson; R. J. Le Roy; P. F. Bernath (2007). "Ground State Potential Energy Curve and Dissociation Energy of MgH". The Journal of Physical Chemistry A. 111 (49): 12495–12505. Bibcode:2007JPCA..11112495S. doi:10.1021/jp075704a. PMID 18020428.
11. ^ Dattani, N. S.; R. J. Le Roy (8 May 2013). "A DPF data analysis yields accurate analytic potentials for Li2(a) and Li2(c) that incorporate 3-state mixing near the c-state asymptote". Journal of Molecular Spectroscopy (Special Issue). 268: 199–210. arXiv:1101.1361. Bibcode:2011JMoSp.268..199.. doi:10.1016/j.jms.2011.03.030.
12. ^ a b W. Gunton, M. Semczuk, N. S. Dattani, K. W. Madison, High resolution photoassociation spectroscopy of the 6Li2 A-state,
13. ^ Semczuk, M.; Li, X.; Gunton, W.; Haw, M.; Dattani, N. S.; Witz, J.; Mills, A. K.; Jones, D. J.; Madison, K. W. (2013). "High-resolution photoassociation spectroscopy of the 6Li2 c-state". Phys. Rev. A. 87. p. 052505. arXiv:1309.6662. Bibcode:2013PhRvA..87e2505S. doi:10.1103/PhysRevA.87.052505.
14. ^ Xie, F.; L. Li; D. Li; V. B. Sovkov; K. V. Minaev; V. S. Ivanov; A. M. Lyyra; S. Magnier (2011). "Joint analysis of the Cs2 a-state and 1 g (33Π1g ) states". Journal of Chemical Physics. 135 (2): 02403. Bibcode:2011JChPh.135b4303X. doi:10.1063/1.3606397.
15. ^ Coxon, J. A.; P. G. Hajigeorgiou (2010). "The ground X 1Σ+g electronic state of the cesium dimer: Application of a direct potential fitting procedure". Journal of Chemical Physics. 132 (9): 094105. Bibcode:2010JChPh.132i4105C. doi:10.1063/1.3319739.
16. ^ Stein, A.; H. Knockel; E. Tiemann (April 2010). "The 1S+1S asymptote of Sr2 studied by Fourier-transform spectroscopy". The European Physical Journal D. 57 (2): 171–177. arXiv:1001.2741. Bibcode:2010EPJD...57..171S. doi:10.1140/epjd/e2010-00058-y.
17. ^ Piticco, Lorena; F. Merkt; A. A. Cholewinski; F. R. W. McCourt; R. J. Le Roy (December 2010). "Rovibrational structure and potential energy function of the ground electronic state of ArXe". Journal of Molecular Spectroscopy. 264 (2): 83–93. Bibcode:2010JMoSp.264...83P. doi:10.1016/j.jms.2010.08.007.
18. ^ Ivanova, Milena; A. Stein; A. Pashov; A. V. Stolyarov; H. Knockel; E. Tiemann (2011). "The X2Σ+ state of LiCa studied by Fourier-transform spectroscopy". Journal of Chemical Physics. 135 (17): 174303. Bibcode:2011JChPh.135q4303I. doi:10.1063/1.3652755.
19. ^ Steinke, M.; H. Knockel; E. Tiemann (27 April 2012). "X-state of LiNa studied by Fourier-transform spectroscopy". Physical Review A. 85 (4): 042720. Bibcode:2012PhRvA..85d2720S. doi:10.1103/PhysRevA.85.042720.
20. ^ Yukiya, T.; N. Nishimiya; Y. Samejima; K. Yamaguchi; M. Suzuki; C. D. Boonec; I. Ozier; R. J. Le Roy (January 2013). "Direct-potential-fit analysis for the system of Br2". Journal of Molecular Spectroscopy. 283: 32–43. Bibcode:2013JMoSp.283...32Y. doi:10.1016/j.jms.2012.12.006.
21. ^ Knockel, H.; S. Ruhmann; E. Tiemann (2013). "The X-state of Mg2 studied by Fourier-transform spectroscopy". Journal of Chemical Physics. 138 (9): 094303. Bibcode:2013JChPh.138i4303K. doi:10.1063/1.4792725.
22. ^ a b c d Li, Gang; I. E. Gordon; P. G. Hajigeorgiou; J. A. Coxon; L. S. Rothman (July 2013). "Reference spectroscopic data for hydrogen halides, Part II:The line lists". Journal of Quantitative Spectroscopy & Radiative Transfer. 130: 284–295. Bibcode:2013JQSRT.130..284L. doi:10.1016/j.jqsrt.2013.07.019.
23. ^ a b c d "Improved direct potential fit analyses for the ground electronic states of the hydrogen halides: HF/DF/TF, HCl/DCl/TCl, HBr/DBr/TBr and HI/DI/TI". Journal of Quantitative Spectroscopy and Radiative Transfer. 151: 133–154. Bibcode:2015JQSRT.151..133C. doi:10.1016/j.jqsrt.2014.08.028.
24. ^
25. ^ "Beryllium monohydride (BeH): Where we are now, after 86 years of spectroscopy". Journal of Molecular Spectroscopy. 311: 76–83. arXiv:1408.3301. Bibcode:2015JMoSp.311...76D. doi:10.1016/j.jms.2014.09.005.
26. ^
Quantum Dynamics of Morphing
Psy ~ Trance ~ Formations
The Quantum Century
December 2001
Morphing fields of possibility
The Quantum Seeds of Revolution and Resonance
Let’s start from the beginning. The era of quantum theory kicked off in 1900 with a discovery made by Max Planck. Planck was studying the so-called black body radiation problem. Classical physics predicted that black bodies should glow bright blue, a stark contradiction to the experience of steelworkers everywhere. In order to simplify the mathematical calculations, Planck restricted the vibration of the matter particles according to the following rule: E = nhf, where E is the particle’s energy, n is any integer, f is the frequency of vibration, and h is a constant chosen by Planck. This rule restricts the particles to energies that are certain multiples of their vibration frequency. Planck’s intention was to let h approach zero; however, this only predicted the same blue radiation as before. By chance, Planck discovered that if he set h to a certain value, the calculations matched the experimental results exactly. This special value for h is now known as Planck’s constant and is also called the “quantum of action.” Planck showed that energy can only be emitted and absorbed in tiny packets. Each packet of energy became known as a quantum (plural: quanta).
In 1905, Einstein produced three major publications that revolutionized the world of physics. The first of these papers proposed a theory in which a beam of light behaves like a shower of tiny particles. Picking up where Planck left off, Einstein showed that energy is not only absorbed and emitted in quanta, but energy itself comes in discrete quantum packets. Einstein demonstrated his theory by explaining the photoelectric effect, light’s ability to knock electrons out of metal. The fact that individual electrons could be detected as they were knocked off a metal surface seemed to imply that light was behaving like a particle. Moreover, reducing the intensity of the light beam did not affect the energy of the ejected electron. On the other hand, the energy of the ejected electron could be affected by changing the frequency of the light. Einstein proposed that these light particles, called photons, come in packets, each with energy given by Planck’s expression: E = hf, where h is Planck’s constant, and f is the light’s frequency. This formula predicts that photons of high-frequency light have more energy than photons of low-frequency light.
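The photoelectric energy balance is easy to compute. Here is a minimal numerical sketch; the 2.3 eV work function is an assumed, illustrative value for some metal, not a figure from the text:

```python
# Sketch of the photoelectric effect: the ejected electron's kinetic energy
# is h*f minus the metal's work function (and no electron is ejected when
# h*f falls below it). The 2.3 eV work function is illustrative.
h = 6.626e-34          # Planck's constant, J*s
eV = 1.602e-19         # joules per electron-volt
work_function = 2.3    # eV (assumed)

for f in (3.0e14, 6.0e14, 1.2e15):   # infrared, visible, ultraviolet (Hz)
    photon = h * f / eV               # E = hf, converted to eV
    electron = max(0.0, photon - work_function)
    print(f"f = {f:.1e} Hz: photon {photon:.2f} eV -> electron {electron:.2f} eV")
```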
Einstein’s discovery was completely contradictory to the previously held scientific theories of electromagnetic radiation. In 1864, James Clerk Maxwell formalized the basic equations that govern electricity and magnetism, which are both now known to be aspects of a single entity we call the electromagnetic field. According to Maxwell’s theory, light is a wave. In other words, light is an electromagnetic vibration at a particular frequency. The electromagnetic field is actually the spectrum of all possible frequencies of light. In fact, the visible light we perceive with our eyes is a tiny fraction of this spectrum. Maxwell’s theory predicted the existence of light waves at lower and higher frequency than visible light. Shortly thereafter, radio waves were discovered, as were X-rays, infrared waves, ultraviolet waves, microwaves, and gamma rays. These different types of waves are just different names for light at various wavelengths. On the other hand however, Einstein’s theory demonstrated that light behaves like a particle.
Further evidence supporting Einstein’s quantum theory of light came in 1923 when Arthur Compton made an important discovery. Compton’s experiment involved shining a beam of X-rays into a gas of loosely bound electrons. Compton showed that X-rays behave like particles, which bounce off the electron. Both the X-ray and the electron scatter at specific angles like two billiard balls colliding. He also formulated an expression for the momentum p of the light particle given by the following expression: p = hk, where h is Planck’s constant, and k is the light’s spatial frequency. Surprisingly, in 1914 the Braggs (father and son) had used crystal diffraction to show that X-rays behaved like waves. This type of experiment is known as Bragg scattering. Physicists at this time were confronted with contradictory evidence which suggested that light behaves both like a particle and a wave!
The plot thickened even more in 1924 when Louis de Broglie proposed that every particle of matter was associated with a wave. De Broglie reached this conclusion by using Einstein’s two equations for energy: E = mc² and E = hf. De Broglie claimed that the wavelength of a matter-wave is given by the expression λ = h/P, where h is again Planck’s constant, and P is the momentum of the particle. De Broglie’s outlandish idea, that matter is actually a wave, was soon proven experimentally to be correct.
In the classical view of physical reality, there was no way to reconcile the differences between waves and particles. A wave can spread out over a large area, be split up in an infinite number of waves, and two waves can interpenetrate and emerge unchanged. On the other hand, a particle is located in a tiny region, travels in one direction, and crashes into other particles. Although waves and particles appear to be contradictory aspects of reality, we have discovered that all waves are also particles, and all particles are also waves.
In order to further illustrate this peculiar wave/particle coexistence, let’s briefly consider a simple type of quantum experiment. Imagine we have an electron gun, a device that produces a beam of electrons. Also, our experiment will include a phosphor screen. If an individual phosphor in the screen is struck by an electron, the phosphor gains a little energy and immediately returns to its ground state by emitting a photon of light. Firing the electron gun at the phosphor screen produces a point of light on the screen. In this way, we can easily observe the particle nature of the electron.
Next, in between the gun and the screen, let’s place a card that has a small hole in the center. If our hole is sufficiently small, we will observe a very different pattern than before. The image on the screen is no longer a point of light, but a series of bright and dark concentric rings resembling a bull’s eye target. This pattern is caused by wave diffraction, and the light and dark rings are caused by wave interference. Interference is an inherent property of all wavelike interactions. If two waves come together that are completely in phase, the resulting wave has an amplitude which is the sum of the two original wave amplitudes. If the two waves are completely out of phase, the original waves simply cancel each other out. In general when waves meet, their amplitudes add. This rule is known as the wave superposition principle, and it applies to all types of waves.
The bull’s eye pattern on the phosphor screen clearly demonstrates the wavelike nature of the electron. This pattern is created by a large number of electrons, which individually look like little points of light on the screen. That is, each electron is observed only as a tiny flash of light, but after a large number of electrons have hit the screen, the pattern of the bull’s eye emerges. This can be demonstrated in the following way. If we lower the intensity of the beam, such that only one electron can pass through the hole at a time, we would be able to observe each electron hit the phosphor screen. The exact location of each impact is completely unpredictable. However, if we use a photographic plate to record each impact, and allow the system to continue firing one electron at an interval of say one every ten minutes, then, when we observe the plate later on, we will see the same bull’s eye pattern as before. This experiment seems to imply that although it appears on the screen as a particle, each electron by itself travels from the gun to the screen as though it were a wave. It should be noted that this type of experiment could have been done using any type of charged particle, or any frequency of light such as infrared or X-rays.
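The way individual flashes build up into a diffraction pattern can be mimicked with a few lines of code. Here is a toy sketch (a one-dimensional stand-in for the circular aperture, in arbitrary units): each simulated electron lands at a single random point, and the structure only emerges in the accumulated counts.

```python
# Toy sketch: single detection events drawn from a diffraction intensity.
# Each "electron" hits one point; the bright/dark structure appears only
# after many events are accumulated, as in the experiment described above.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 2001)           # position on the screen (arbitrary units)
intensity = np.sinc(x / np.pi)**2            # sinc^2 diffraction envelope, sin(x)^2/x^2
prob = intensity / intensity.sum()           # normalize into a probability distribution

hits = rng.choice(x, size=5000, p=prob)      # 5000 one-at-a-time impacts
counts, edges = np.histogram(hits, bins=40)
for count, left in zip(counts, edges[:-1]):  # crude text rendering of the screen
    print(f"{left:6.2f} {'#' * (count // 10)}")
```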
Entities, Attributes, Waveforms, and other Finite Fields of Possibility
Quantum theory is the method that has been developed to analyze experiments such as the one outlined above. This theory was created to deal with tiny creatures such as atoms, electrons, and photons. However, quantum theory has also proved successful in dealing with the atomic nucleus as well as subatomic particles such as quarks, gluons, and leptons. In principle, quantum theory also applies to the macroscopic world which we inhabit, as well as large scale astronomical entities such as galaxies and black holes. To date, quantum theory has successfully predicted the results of every experiment the human mind can devise. However, the predictive strategy of quantum theory is quite different than classical mechanics in one fundamental way: quantum theory cannot predict what will happen in a measurement situation, it can only predict the statistical probabilities of how likely an event is to occur. For any quantum entity, quantum theory predicts the probability of each possible value of a specific physical attribute. Depending on the nature of the measurement situation, a quantum entity may demonstrate many different types of attributes. Quantum theory does not say anything about what happens when the quantum entity is not being measured.
First of all, let’s discuss what we mean by quantum entity, and how the theory addresses these entities. A quantum entity is any thing, regardless of its size, which exhibits both wave and particle characteristics. Usually, a quantum entity will demonstrate either particle nature or wave nature depending on the type of measurement it is subjected to. A typical quantum entity could be a photon, an atom, or an electron; but a human, a planet, or the entire universe could be considered a quantum entity as well. In this project, we will refer to a quantum entity as a quon for simplicity. Instead of dealing with quantum entities specifically, quantum theory represents the quon with a mathematical device called the wave function, usually labeled Ψ or psi. The first step in any quantum experiment is to associate a particular wave function to the relevant quantum entity.
In most ways, the wave function, Ψ, is just like any other wave we are familiar with. Before discussing quantum waves, let’s take a look at waves in general. A wave is typically characterized by qualities known as amplitude, wavelength, frequency, and phase. The amplitude of a wave is a measure of the deviation from its rest state. In general, the amplitude is the maximum height of a wave. If the wave is cyclic, then the wavelength is the space spanned by one cycle. The length in time of one cycle is called the period. The number of complete cycles in a certain interval of time is called the temporal frequency. The number of complete cycles in a certain interval of space is called the spatial frequency. The phase of a point in a cyclic wave is a measure of how far into a cycle that point is located.
As mentioned earlier, all waves obey the superposition principle, which states: when two waves meet, their amplitudes add. After the two waves move through each other, each wave retains its respective amplitude, and is thus unchanged by the temporary superposition. As we shall see later, any two waves can interact and depart each other’s company with their respective amplitudes intact, but the phases of these two waves become entangled, and are thus phase correlated for the rest of eternity. When any two waves meet, the superposition of amplitudes depends on the phases of each wave. This is characterized by constructive and destructive interference. For example, if two waves, each with amplitude of one, meet each other completely in phase, the resulting amplitude is two. If two waves, each with amplitude of one, meet each other completely out of phase, the resulting amplitude is zero. If two waves, each with amplitude of one, meet each other at arbitrary phases, the resulting amplitude will be between zero and two. Quantum waves have all the characteristics of ordinary waves that have been outlined above.
In general, the energy of a wave is a measure of intensity, and is given by the square of the amplitude. For example, if you double a wave’s amplitude you quadruple the wave’s energy. Quantum waves are different from ordinary waves in one important way. Quantum waves do not have energy. Instead, the square of the amplitude is a measure of probability. This idea lies at the heart of how quantum theory works. To predict the results of an experiment, we must find the amplitudes of each possible value of the attribute we are measuring, and then we square the amplitudes to get a probability distribution which indicates how likely each possibility is to occur.
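This squared-amplitude rule takes only a few lines to state in code. A minimal sketch, with made-up amplitudes for three possible outcomes:

```python
# Sketch of the rule above: squared amplitude magnitudes give the
# probability of each possible outcome. The complex amplitudes are made up.
import numpy as np

amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.6j, -0.37 + 0.37j])  # one per outcome
probs = np.abs(amplitudes)**2        # square of the amplitude = probability weight
probs /= probs.sum()                 # normalize so the probabilities sum to one
for outcome, pr in enumerate(probs):
    print(f"outcome {outcome}: probability {pr:.3f}")
```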
Before we can make any type of measurement, we must first decide what attribute we want to measure. In general, quantum entities have two kinds of attributes: static and dynamic. The static attributes of an elementary quantum entity always have the same value. The major static attributes are mass (M), charge (Q), and spin magnitude (S). The values for the dynamic attributes of a quantum entity change over time. The major dynamic attributes are position (X), momentum (P), energy (E), and spin orientation (S_z). Before we can understand how quantum theory represents the dynamic attributes of a quantum entity, we must first discuss some more basic properties of waves in general.
Early in the 1800’s, a man named Joseph Fourier developed a new language which could be used to express any type of wave. Fourier showed that any wave could be decomposed into a unique recipe of sine waves. Each sine wave has a particular value of frequency k, amplitude a, and phase p. The process of breaking any wave up into a bunch of sine waves is known as Fourier analysis. Conversely, any wave can be constructed by putting together a bunch of sine waves, a process known as Fourier synthesis.
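Fourier analysis and synthesis are both one-liners with the fast Fourier transform. A small sketch (the wave below is made up for illustration):

```python
# Sketch of Fourier analysis and synthesis: decompose a wave into its
# sine-wave recipe (frequency, amplitude, phase), then rebuild it.
import numpy as np

n = 256
t = np.arange(n) / n
wave = 1.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t + 0.8)

recipe = np.fft.rfft(wave)                 # analysis: wave -> sine components
freqs = np.fft.rfftfreq(n, d=1.0 / n)
for f, c in zip(freqs, recipe):
    if abs(c) > 1e-6:                      # only the components actually present
        print(f"f = {f:.0f}: amplitude {2 * abs(c) / n:.2f}, phase {np.angle(c):+.2f}")

rebuilt = np.fft.irfft(recipe, n)          # synthesis: recipe -> original wave
print("max reconstruction error:", np.abs(rebuilt - wave).max())
```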
Sine waves represent one type of waveform family. Another type of waveform family is the impulse family. An impulse wave is an infinitely narrow spike located at a specific location. Just as Fourier showed that any wave could be broken up into sine waves, the same wave could be broken up into impulse waves. The basis of digital electronic music is that any wave can be constructed by putting together a bunch of these impulse waves.
The sine waveform family and the impulse waveform family are just two examples of waveform families; in fact, there are an infinite number of waveform families. Any imaginable wave can be decomposed into a unique recipe of particular members of any type of waveform family. This idea is sometimes called the synthesizer theorem. Any wave can be expressed as a unique sum of members from any particular waveform family. This means that any wave can be taken apart in an infinite number of ways, depending on which waveform family we choose to use. Conversely, if we choose a particular waveform family, we can create any wave imaginable.
Quantum theory makes use of this so-called synthesizer theorem in a peculiar way. Quantum theory represents each dynamic attribute with a particular waveform family. In other words, every possible waveform family corresponds to some dynamic attribute of the quantum entity. The individual members of each waveform family represent different physical values of the dynamic attribute. To illustrate, let’s give a few well-known examples.
First of all, the position attribute is associated with the impulse waveform family. Each individual impulse wave is a narrow spike, characterized by a value x which describes the position of that particular impulse wave. Each possible position attribute value, X, of a quon is associated with the location of a specific impulse wave at position x. The momentum attribute is associated with the spatial sine waveform family. Each member of this waveform family is characterized by a specific value for spatial frequency, k. A specific momentum value, P, corresponds to each member of the spatial sine waveform family according to the following rule: P = hk, where h is Planck's constant. The energy attribute is associated with the temporal sine waveform family. Each individual member of this family is characterized by a specific value, f, which represents temporal frequency. The energy value associated with each individual wave in this family is given by the following rule: E = hf, where h is again Planck's constant. The relationships for momentum and energy are just de Broglie's law for matter-wave wavelengths and Einstein's relation for the energy of a quantum of light. The waveform family associated with the spin orientation attribute is known as the spherical harmonic family. Each member of this family is distinguished by two values, m and n, both positive integers. The spin orientation value, S_z, in the polar direction is given by the following rule: S_z = m² / (n² + n).
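The momentum and energy rules, P = hk and E = hf, are simple enough to evaluate directly. In this hedged sketch, the particular values of k and f are invented purely for illustration.

```python
# Evaluating the momentum and energy rules quoted above.
# h is Planck's constant; k is spatial frequency (cycles per metre);
# f is temporal frequency (cycles per second).
h = 6.62607015e-34  # J*s (exact by definition since the 2019 SI revision)

k = 1.0e10   # spatial frequency of a matter wave, 1/m (illustrative value)
f = 5.0e14   # temporal frequency of visible light, Hz (illustrative value)

P = h * k    # de Broglie's rule: momentum from spatial frequency
E = h * f    # Einstein's rule: energy of one quantum of light
print(f"P = {P:.3e} kg*m/s, E = {E:.3e} J")
```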
Quantum theory works by associating each dynamic attribute with a particular waveform family. The relationship between the values of an attribute and the individual members of a particular waveform family is given by a rule, which can be quite simple in some cases, or very complicated in others. For the most part, physicists are concerned with the major dynamic attributes, which have been described above; however, there are an infinite number of different dynamic attributes since there are infinitely many waveform families.
A specific waveform family has special types of relationships with other waveform families. To understand these relationships, we must first introduce some terminology. By the synthesizer theorem, we know that any arbitrary wave can be broken up into different sets of component waves, depending on which waveform family we choose. Breaking up an arbitrary wave into component waves is analogous to putting the original wave through a prism. For example, Newton showed that white light could be passed through a prism to yield a rainbow of colors, known as the spectrum of visible light.
If we analyze an arbitrary wave with different waveform prisms, we will discover that some prisms break the wave into a small number of components while some prisms break the wave into a large number of components. The number of waveform components into which a prism splits a wave is known as the wave's spectral width, or bandwidth. If a particular waveform prism breaks an arbitrary wave into a small bandwidth of components, we could say that the waveform family is similar to the original wave. If a waveform prism produces a large bandwidth of components, we could say that the waveform family is not similar to the original wave. If we take an arbitrary wave and put it through its own family prism, the resulting bandwidth will consist of only one wave component, which is the minimum spectral width. For example, if we put any sine wave through a sine waveform prism, the result will yield only one wave, which is exactly the original sine wave. We will refer to this prism, which does not split the original wave at all, as the kin prism. For any arbitrary wave, there exists such a kin prism, which does not decompose the wave into any components except itself. Conversely, for any arbitrary wave, there exists a particular waveform prism which breaks the original wave into the largest possible bandwidth. This is to say that for any wave, there exists a waveform family which resembles the original wave the least. We will refer to this prism, which yields the maximum spectral width, as the conjugate prism. Thus, every wave belongs to a unique waveform family, and every waveform family bears a special relationship to a unique conjugate waveform family. An example of such a conjugate relationship is found between the sine waveform family and the impulse waveform family. Because of their mutual relationship to an arbitrary wave, we could say that these two waveform families are conjugate to each other.
To illustrate this relationship between a prism and its conjugate prism, let's consider the following experiment. Imagine we have identified two conjugate waveform families, called A and Z. First, take any arbitrary wave X, and analyze this wave by using the A prism. The result will be a particular bandwidth ΔA of output waveforms. If we analyze X by using the Z prism, we will get a bandwidth ΔZ of output waveforms. Because A and Z are conjugate waveform families, if X is very similar to A, then X will not be very similar to Z. Conversely, if X is very similar to Z, then X will not be very similar to A. Consequently, there exists a limit on how small both bandwidths of A and Z can get for the same input wave. This limit is usually expressed by the following relation: ΔA · ΔZ ≥ C, where A and Z are conjugate waveform families, and C is some positive constant. We will refer to this relationship as the spectral area code. The spectral area code is a fundamental feature of all waves, including quantum waves.
In the above example, A and Z are as dissimilar as two waveform families can be. Now suppose we have chosen another waveform family K. Let's assume that K is not very similar to A, but K is more similar to A than Z is. If we analyze the original wave X using the A and K prisms, there will still be a limit on how small both bandwidths of A and K can get for the same input. This can be expressed in the same way as above: ΔA · ΔK ≥ C′, where C′ is another constant. However, since A and K are more similar than A and Z, the constant C′ will be less than C. If we were to use two waveform prisms that are very similar, such as A and some close relative B, the spectral area code may yield a resolving limit that is close to zero. In other words, if two waveform families are very similar, there is essentially no limit on how small both bandwidths can be for the same input wave. On the other hand, if two waveform families are strikingly different in character, the spectral area code limits the product of the two spectral widths. In this case, a small resulting bandwidth from one prism means that the resulting bandwidth of the other prism is huge.
In quantum theory, every dynamic attribute is represented by a particular waveform family and a specific rule, which specifies how individual members of the family correspond to particular values of the physical attribute. As a direct consequence of the spectral area code, every conceivable dynamic attribute bears special relationships to other particular types of dynamic attributes. Each dynamic attribute has a conjugate attribute in the same sense that each waveform family has a conjugate family. In general, if two dynamic attributes are related in this way, such that the spectral area code applies, we could say that each attribute is conjugate to the other.
We noted earlier that the sine family and the impulse family are conjugates. We also know that the sine family can be associated with the momentum attribute of a quon, and the impulse family can be associated with the position attribute of a quon. The spectral area code can be translated into an expression for the physical dynamic attributes of position and momentum in the following way: ΔX · ΔP ≥ h. Here, ΔX represents the uncertainty in our measurement of the position attribute, ΔP represents the uncertainty in our measurement of the momentum attribute, and h is Planck's constant. This relationship is commonly known as the Heisenberg uncertainty principle. The result of this relation is that we can know either position or momentum with perfect accuracy; however, since position and momentum are conjugate attributes, we cannot define both attributes at the same time with perfect accuracy. In other words, if we know the exact value of one of these attributes, the value of the other attribute becomes maximally uncertain. It is possible that two dynamic attributes are independent of each other, in which case we can know the values of both simultaneously with perfect accuracy. The uncertainty principle applies to dynamic attributes which are not independent. The word independent is not really used here in any rigorously defined manner; however, we shall soon see that the condition which determines whether the uncertainty principle applies actually boils down to the commutative properties of specific matrices.
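A numerical version of this spectral trade-off can be seen by analyzing Gaussian wave packets of different widths with the impulse prism (sampling in x) and the sine prism (the Fourier transform). In the sketch below, with grid sizes chosen arbitrarily, the product of the two bandwidths stays pinned near a constant no matter how the packet is squeezed; multiplying the spatial-frequency bandwidth by h, via the earlier rule P = hk, turns this into a position-momentum bound of the same form.

```python
import numpy as np

def spread(values, weights):
    """Standard deviation of `values` under the distribution `weights`."""
    w = weights / weights.sum()
    mean = (w * values).sum()
    return np.sqrt((w * (values - mean) ** 2).sum())

n, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n)            # spatial frequency, cycles per unit

for sigma in [0.5, 1.0, 2.0]:
    psi = np.exp(-x**2 / (4 * sigma**2))  # Gaussian wave packet
    phi = np.fft.fft(psi)                 # its sine-wave recipe
    dx = spread(x, np.abs(psi) ** 2)      # position bandwidth
    dk = spread(k, np.abs(phi) ** 2)      # spatial-frequency bandwidth
    print(f"sigma={sigma}: dx*dk = {dx * dk:.4f} (Gaussian minimum 1/4pi = {1 / (4 * np.pi):.4f})")
```

Squeezing the packet in x (small sigma) fattens its spectrum in k, and vice versa; only the product is constrained.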
Heisenberg's uncertainty principle directly implies that the assumptions of classical physics were incredibly naïve. Before quantum theory, physics was based on the formulation of deterministic physical laws, which could be used to predict the exact outcome of any system. In general, classical systems were represented by relationships in phase space. Every particle, or object, in phase space is characterized by a definite position and momentum. Assuming that one knows all the laws which govern a system, as well as the position and momentum values of a particle in such a system, one should be able to predict exactly how the system will change with time. This ideal formed the basis of classical physics and inspired the conception of a universe that operates like a giant deterministic machine. However, according to the scientific discoveries of quantum theory in the early twentieth century, it is impossible to know the exact value of an object's position and momentum at the same time. Thus, Heisenberg's uncertainty principle delivered a fatal blow to the antiquated conception of long-term predictive determinism in physical systems. In general, quantum theory does not predict the result of a measurement on a physical system at all; instead, quantum theory predicts the probability of each possibility in the quantum system.
Classical physics also assumed that all objects have inherent definite attributes which exist independently of the observation of those attributes. As we will see, the structure of quantum theory implies that the attributes of any aspect of reality are inseparable from the observation of those attributes. In fact, it is impossible to say for sure that something possesses any type of attribute whatsoever outside the context of some measurement situation.
Quantum Theoretical Foundations of Morphing Psy-waves in the N + 1 Dimension
Before we delve any deeper into this mysterious theory, let’s review the basics so far. Quantum theory represents all quantum systems with a wave function, which we call Ψ. This wave function is not only determined by the quantum entity in question, but by the type of attribute we wish to observe as well as the measurement situation we have designed to detect such attribute values. For simplicity, we could say that Ψ is determined by the entire measurement situation. Granted, this description is vague, but it sufficiently expresses the fact that there can be no separation between the observer and the observed. The Ψ-wave represents all possibilities of the quantum system. Choosing a specific attribute to measure is analogous to choosing a waveform family prism which analyzes the Ψ-wave into component waves. Each component wave represents a possible value of the attribute we are measuring. Moreover, each component wave has a particular amplitude and phase. In other words, each possibility is assigned a specific coordinate value that represents the amplitude and phase of that possibility. The square of the amplitude at each possibility gives the probability that a particular attribute value will be observed if we were to actually make a measurement.
The first mathematical version of quantum theory was developed by Werner Heisenberg in 1925. In Heisenberg’s model, a quantum system is represented by a set of matrices. Each matrix represents a specific dynamic attribute such as position, momentum, or energy. The probability that a system has a particular attribute value is determined by the diagonal entries of the matrix. An important property of matrices is that many types of matrices do not commute when they are multiplied together. If two attribute matrices don’t commute, then the measurement of these attributes is limited by the uncertainty principle. The progression of the quantum state in time is represented mathematically by certain laws of motion expressed using matrices. This first version of quantum theory is usually known as Heisenberg’s matrix mechanics.
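As a minimal stand-in for Heisenberg's attribute matrices, the sketch below uses two 2x2 spin matrices (not the actual infinite position and momentum matrices) to show what a failure to commute looks like.

```python
import numpy as np

# Two attribute matrices for a two-state system (Pauli spin matrices).
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = Sx @ Sz - Sz @ Sx
print(commutator)                  # a nonzero matrix
print(np.allclose(commutator, 0))  # False: Sx and Sz do not commute, so
                                   # these attributes obey an uncertainty relation
```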
A few months after Heisenberg's theory was created, another physicist, named Erwin Schrödinger, introduced a different version of quantum theory. Schrödinger created a wave equation, which represents the evolution of a quantum system over time. The quantum state of a system at any instant is represented by a certain field of possibilities, Ψ, such that each possibility has a certain probability of occurring. As the quantum system evolves, the amplitudes of the Ψ-wave change continuously according to Schrödinger's wave equation. The time-dependent Schrödinger equation is usually written in the following way: −(h / 2πi) ∂/∂t Ψ(x, t) = ĤΨ(x, t). In this expression, x is a vector whose component values represent all possible values of any attribute X, and Ĥ is the Hamiltonian. The Hamiltonian is a linear operator that represents the total energy of the system. An operator is a mathematical device that transforms a given function into some other function according to a certain rule. In the Schrödinger equation, the Hamiltonian acts on Ψ exactly like the time-derivative operator −(h / 2πi) ∂/∂t. Without being too technically specific, the important thing is that Schrödinger's wave equation defines a rule, Ĥ, which describes how the Ψ-wave changes over time.
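For a concrete, if drastically simplified, picture of how Ĥ drives the Ψ-wave, the sketch below evolves a two-state system via the formal solution Ψ(t) = exp(−iĤt/ħ) Ψ(0), with units chosen so that ħ = 1 and a toy Hamiltonian invented for the example.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])      # toy Hamiltonian coupling two states
psi0 = np.array([1.0, 0.0], dtype=complex)  # start entirely in state 1

for t in [0.0, np.pi / 4, np.pi / 2]:
    psi_t = expm(-1j * H * t / hbar) @ psi0
    probs = np.abs(psi_t) ** 2              # squared amplitudes -> probabilities
    print(f"t={t:.2f}: P(state 1)={probs[0]:.2f}, P(state 2)={probs[1]:.2f}")
# The amplitudes morph continuously while the total probability stays 1.
```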
At about the same time as Schrödinger proposed his theory of wave mechanics, a third quantum theory was developed by Paul Dirac. This theory was rigorously formalized a few years later by the world-famous mathematician John von Neumann. Dirac showed that the fundamental ideas of quantum theory can be represented in abstract mathematical terms by placing the theory in what is called Hilbert space. Dirac also showed that both Heisenberg's and Schrödinger's theories are special cases of his own Hilbert space version of quantum theory. Dirac's theory is a mathematical formulation that resembles our previous description of quantum theory, which we described solely in terms of waveform families and spectra.
Hilbert space is not geometrical in the everyday sense, but is an abstract way of organizing functions. Although it is of little relevance to the goals of this project, we will state the conditions which define Hilbert space. Hilbert space is a vector space on which an inner product is defined, and which is complete, i.e., any Cauchy sequence of vectors in the space converges to a vector in the space. This abstract function space provides a natural reference frame for analyzing the wave function Ψ.
To illustrate the idea of Hilbert space and how it applies to quantum theory, let's take a general example. Imagine we have a quantum system, which is composed of a quon, and a measuring device that is designed to observe a particular dynamic attribute A of the quon. The attribute, or observable, we choose to measure is represented mathematically by a linear operator, which we can label Â. This linear operator is analogous to the waveform family prism we used earlier. Each possible value of our attribute A is represented by a dimension in Hilbert space, which we will call a basic ray. Mathematically, we could also say that each dimension represents an eigenfunction of the operator Â. Generally speaking, there are as many basic rays as there are possible values for an attribute. In three-dimensional Euclidean space, each dimension is at right angles to the others. Similarly, each dimension, or eigenfunction, in Hilbert space is perpendicular, or orthogonal, to the other dimensions. If our attribute has two possible values, the corresponding Hilbert space will consist of two dimensions. If our attribute is real valued along a continuum, the corresponding Hilbert space will consist of an uncountably infinite number of dimensions. The reference frame in Hilbert space is determined by the possible values of the attribute we have chosen to measure.
The wave function, Ψ, is represented by a vector in Hilbert space. This vector, which we will call the quantum ray, is simply a direction which passes through the origin of our given coordinate frame of reference in Hilbert space. The quantum ray, Ψ, represents one quantum state of the system which is being analyzed. Given our particular reference frame, the wave function assigns a specific coordinate value, or point, to each basic ray. This coordinate point of each dimension is just the projection of the quantum ray onto each single basic ray. However, the coordinate value is not a point on a real line, but is a point on the complex plane. Each coordinate value is represented by a two-dimensional complex vector which, written in exponential form, looks like this: z = r·e^(if), such that r is the length, or magnitude, of the vector, and f is the angle of the vector. The magnitude of a given coordinate value represents the amplitude of the wave function at a specific possibility. The angle of a given coordinate value represents the relative phase of the wave function at a specific possibility. Thus, for each possibility, also called a basic ray, the wave function assigns a coordinate value, which is a complex vector that represents a specific amplitude and phase.
Each dimension, or basic ray, of Hilbert space is associated with its own complex plane. The projection of the Ψ-wave onto each basic ray is given by a specific complex vector. If we let c_i stand for the coordinate value along a particular basic ray, and we let Φ_i stand for a particular eigenfunction, or basic ray, then we can express Ψ in the following way: Ψ(x) = Σ_i c_i Φ_i(x), summed over all i. The Ψ-wave, or quantum ray, is just the sum of all these coordinate values, or complex vectors. Thus, Ψ can either be represented as a single entity, such as a vector in Hilbert space, or by a collection of vectors in the complex plane such that each vector represents a specific possibility. The quantum wave is a field of possibilities.
Each possibility is characterized by an amplitude and a phase. To find the probability that a particular possibility will occur, we simply take the square of the amplitude. If c_i represents a particular complex vector, then the square of the amplitude can be expressed as |c_i|². In order for our probability measure to make any sense, we must normalize the quantum ray, Ψ, such that Σ_i |c_i|² = ∫ |Ψ(x)|² dx = 1. This means that the sum of all the probabilities is equal to 1.
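In code, normalizing a quantum ray and reading off its probabilities takes only a few lines. This hedged sketch uses three made-up complex coordinate values c_i.

```python
import numpy as np

c = np.array([1 + 1j, 2.0, 1j])  # coordinate values along three basic rays
c = c / np.linalg.norm(c)        # normalize the quantum ray

amplitudes = np.abs(c)           # magnitude of each complex coordinate
phases = np.angle(c)             # relative phase of each possibility
probabilities = amplitudes ** 2

print(probabilities, probabilities.sum())  # the probabilities sum to 1.0
```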
As noted above, the quantum wave function is a vector, in Hilbert space, which represents one quantum state. Without actually observing the measurement situation, we can ask how the Ψ-wave might change over time. For simplicity, let's assume there is only one dimension of time and that it always travels in the same direction. If the original Ψ-wave is calculated at time t_0, then the Ψ-wave at time t_1 will be represented by a vector in Hilbert space which is different from the original vector. If we assume that time is a continuum, we can show that the quantum vector changes its orientation continuously. Thus, our spinning quantum vector, in Hilbert space, represents the continuously morphing Ψ-wave. Therefore, Schrödinger's equation, which describes our spinning vector, is actually a mathematical representation of a morphing field of possibilities. As the quantum wave moves and changes direction, the magnitudes and the relative phases of all the coordinate values also change. Note, if the quantum vector travels continuously in Hilbert space, then each projection, which determines the possibility amplitude, also changes continuously. The probability distribution for each quantum vector also changes continuously because the probabilities are the squares of the continuously changing amplitudes.
It is important to remember that although this theory can be used effectively to determine the probability distribution for the attribute values we are concerned with, these potential tendencies to exist are not inherent in the quantum entity itself. Unless we first assume a frame of reference, such as a measurement situation designed to observe the value of a specific dynamic attribute, the quantum entity is simply a wave of infinite possibilities. The frame of reference in Hilbert space is created based on which attribute we choose to measure. Only after we have chosen an attribute can we analyze, or decompose, the quantum wave into complex projections along the orthonormal basic rays.
In order to calculate the probability distribution for the possible values of a quantum entity, we must first create the concept of the attribute we want to measure. The word create is used here because, in some sense, any type of conceptual attribute such as position, momentum, energy, and spin is a construct of the imagination. In other words, if we want to measure position we must create a unit measure of distance. Likewise, if we want to measure momentum, we must create a unit measure of time as well as direction. In general, these units have been created out of thin air, and bear no real connection to nature.
Without a reference frame of observation, it is meaningless to say that the quantum entity possesses any attribute whatsoever, let alone values for that attribute. Perhaps this claim is too far out to accept right off; however, it at least appears safe to say that the internal structure of quantum theory implies that the attributes of any aspect of reality are inseparable from the observation of those attributes. For example, a quon does not inherently possess what we call momentum; however, given a certain measurement situation, the quon will demonstrate the appearance of momentum. In other words, momentum does not belong to the quantum entity itself, but to our interaction, or relationship, with the quantum entity. In a similar sense, a quantum entity is neither a wave nor a particle, but if we interact with the quantum entity, it will express itself either as a wave or a particle. This point of view is not without opposition. For instance, many physicists still believe that the quantum entity is an ordinary object, which exists whether it is being observed or not. Although there is a generally accepted recipe, or method, for using quantum theory, there is not much agreement amongst physicists concerning how quantum theory is actually connected to what we call reality. Before we turn to the various interpretations of quantum theory, let's consider a fourth version of quantum theory.
In 1948, Richard Feynman developed a method for calculating a quon's wave function which is called the sum-over-histories approach. We've already seen that the Ψ-wave represents all the possibilities open to a given quantum system. To get more perspective, let's consider the quon gun and phosphor screen experiment that was described earlier. Between the quon gun and the screen, the unmeasured quon behaves like a wave of possibilities. Feynman's idea was that what actually happens on the screen is influenced by everything that could have happened. Feynman's approach to calculating Ψ is to sum over the amplitudes of all possible ways a quon can get from the quon gun to the screen. Feynman describes the unmeasured world by making two postulates: a quon takes all possible paths, and no path is better than another. He also proposes that every path open to the quon has the same amplitude, and that each path differs from other paths only in its phase. In quantum theory, possibilities have a wavelike nature. Therefore, certain possibilities can cancel if they have different phases. Feynman showed that summing up all possible paths, or histories, produces the same wave function as solving Schrödinger's equation.
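Here is a toy version of the sum-over-histories rule: every path gets the same amplitude and differs only in phase, with phase proportional to path length. The geometry and wavelength below are invented for the example; summing the two paths through a double slit recovers an interference pattern on the screen.

```python
import numpy as np

wavelength = 1.0
source = np.array([-20.0, 0.0])
slit_a, slit_b = np.array([0.0, 1.0]), np.array([0.0, -1.0])

for y in np.linspace(-3, 3, 7):
    screen = np.array([20.0, y])
    total = 0j
    for slit in (slit_a, slit_b):   # sum over both possible histories
        length = (np.linalg.norm(slit - source)
                  + np.linalg.norm(screen - slit))
        total += np.exp(2j * np.pi * length / wavelength)  # equal amplitude, phase only
    print(f"y={y:+.1f}: intensity {abs(total)**2:.2f}")    # between 0 and 4
```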
In the context of our phosphor screen experiment, quantum theory implies that just before a flash is made on the screen we should not imagine that a tiny quon is actually heading for one particular phosphor molecule. Before the measurement occurs, the quon is heading in all possible directions at the same time. According to this view, an unmeasured quon exists only as a bunch of unrealized quantum potentialities. However, every time we make a measurement, only one of these possibilities becomes an actuality.
The Battlefield of Quantum Speculation and The Meaning of It All
The early interpretations of quantum theory reconciled this peculiar phenomenon by assuming that the world is divided into two separate parts. The unmeasured world, it was assumed, consists only of quantum potentials. On the other hand, the measured world consists only of classical type actualities. This interpretation of quantum theory was primarily advanced by Niels Bohr and Werner Heisenberg and is known generally as the Copenhagen interpretation. In addition to the view of distinct measured and unmeasured aspects of reality, the Copenhagen interpretation asserts two other fundamental assumptions. Firstly, Copenhagenists assume that there is no reality in the absence of observation. Secondly, this interpretation asserts that observation creates reality.
These two assertions are based on the idea that the dynamic attributes of a quon are contextual in the sense that the attribute values are determined by which attribute we choose to measure. For example, the position attribute of a quon is jointly determined by the quon and the measuring device. If we take away the measuring device, we also take away the position attribute of the quon. If we change the measurement context, then we also change the attributes of the quon. A Copenhagenist would argue that when a quon is not being measured, it has no definite dynamic attributes. This idea that observation creates reality is based on the so-called quantum meter option, i.e. the observer's ability to freely select which attribute he wants to look at. In terms of our earlier discussion of waveform languages, this quantum meter option is analogous to our freedom of choice concerning which waveform prism we will utilize to analyze an arbitrary wave. Another assumption of the Copenhagen interpretation is that all quons in the same quantum state, i.e. represented by the same wave function, are physically identical. Furthermore, the Copenhagen interpretation asserts that the wave function tells us everything there is to know about the quantum entity. However, Copenhagenists do not believe that the Ψ-wave is a real wave. They view the Ψ-wave simply as a mathematical tool that can be used to determine the statistical likelihood of an event, given a specific measurement context. It is also their position that there is absolutely no way to know which possibility will become an actuality.
In classical mechanics, the unpredictability of an event was attributed to the ignorance of the observer. The observer’s ignorance, in the classical sense, arose because the observer did not have a complete knowledge of all the variables in a system, or the measuring device used in the observation was technologically unable to yield perfectly accurate readings. It was assumed that this ignorance could be overcome by making further technological improvements to the measuring devices. However, in the Copenhagen interpretation, it is impossible to predict which possibility will become an actuality simply because the deepest form of knowledge we can have of a quantum system is purely statistical. This type of ignorance is known as quantum ignorance, as opposed to classical ignorance. Classically, the missing information exists, but has yet to be uncovered by the experimenter. The idea of quantum ignorance asserts that the missing information simply does not exist.
Quantum ignorance is closely tied to the idea of quantum randomness. In order to understand this idea better, let’s consider the quon gun and detector screen experiment again. Assume that the gun fires only one quon. The wave function, Ψ, gives us a complete description of the probabilities of each possibility. Before the measurement, the quon assumes all possible paths. The result of the measurement yields only one actual flash on the phosphor screen. Now, suppose we fire a second quon at the screen. This second quon is represented by the exact same Ψ-wave as the first quon. The result of the second measurement again yields only one actual flash; however, this flash is most likely located in a different place on the phosphor screen relative to the first flash.
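Operationally, quantum randomness means each identically prepared quon is an independent draw from the distribution given by the squared amplitudes. The sketch below, with an invented five-region screen and made-up amplitudes, fires ten quons carrying the same Ψ-wave and gets ten generally different flash positions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

positions = np.arange(5)                    # five regions of the phosphor screen
amps = np.array([0.1, 0.4, 0.7, 0.4, 0.1])  # illustrative psi amplitudes
probs = amps**2 / (amps**2).sum()           # square, then normalize

flashes = rng.choice(positions, size=10, p=probs)
print(flashes)  # same psi-wave each shot, yet the flash lands in varying places
```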
The Copenhagenists explain this phenomenon by appealing to what they call quantum randomness. The basic principle of quantum randomness is that identical physical situations give rise to different outcomes. If it is true that the Ψ-wave gives us all the information we can know, then it is impossible to predict exactly where the quon will strike the phosphor screen. According to the Copenhagenists, the occurrence of an actual event is determined by blind chance. We shall soon get a better idea of how these quantum fields, which organize the probability distribution of our system, are extremely complex and multidimensional. In any case, the visualization of these fields is beyond the capacity of most physicists working within the current paradigm. At this point, we will merely note for future reference that the dynamics of a given field of possibilities may be so extraordinarily chaotic that the pattern appears random to our ordinary mode of perception.
It should be noted that the Copenhagen interpretation is based on the primary assumption that measuring devices are ordinary objects which exist and are definable in the classical sense. Quantum theory describes neither the quantum system nor the measuring device. The theory applies to the relationship which exists between the quantum system and the measuring device. However, the Copenhagen interpretation asserts that a very significant and mysterious transition takes place at the boundary between the measuring device and the quantum system. In this transition, the surreal potential existence of the unmeasured quantum entity immediately transforms into a real, classical-type observed actuality. The question as to exactly how, why, and when this transition occurs is the basis of the so-called quantum measurement problem. The Copenhagenists sidestep this interpretive paradox by assuming that measuring devices are real things which actually exist with definite attributes, while quantum entities are represented by a superposition of potential possibilities.
In 1932, John von Neumann published a book called The Mathematical Foundations of Quantum Mechanics (Mathematische Grundlagen der Quantenmechanik), in which the ideas of quantum theory are subjected to rigorous mathematical analysis. Von Neumann's analysis is primarily concerned with Dirac's Hilbert space version of quantum theory, which has been shown to be more general and complete than the Heisenberg and Schrödinger theories. Among other things, von Neumann demonstrates that there is nothing intrinsically special about measuring devices. Therefore, the Copenhagenist assertion that measuring devices are somehow privileged with a classical status of existence seems awkward and contrived. In von Neumann's theory, everything is represented by quantum Ψ-waves, even measuring devices. Von Neumann's interpretation is known as the all-quantum theory because there is no longer any aspect of the theory which relies on classically defined objects.
Von Neumann showed that it is indeed possible to represent everything in the world with Ψ-waves; however, the all-quantum theory only works if we make one crucial assumption. Before dealing with this assumption directly, let’s consider again the structure and dynamics of the wave function in Hilbert space. We know that a particular quantum state is represented by a normalized vector in Hilbert space. The dimensionality of our frame of reference in Hilbert space is determined by the attribute operator we choose. It is often the case that this frame of reference consists of an uncountably infinite number of dimensions. Each dimension represents an orthonormal eigenfunction of the quantum operator that we are using. Each of these orthonormal eigenfunctions represents a specific attribute value of the quon we are measuring.
The amplitudes and phases of each possibility are determined by decomposing the wave function into complex vector components, which are just the projections of the wave function along each dimension. However, a quantum system has a definite value for an observable attribute if and only if the quantum vector, Ψ, is an eigenstate of the attribute operator. This means that the system only has a definite value if the quantum vector is parallel to a particular eigenfunction. Since each eigenfunction of the operator is orthogonal to all other eigenfunctions, any vector which lies along one single eigenfunction has no components along any of the other eigenfunctions. In other words, if the quantum vector lies along one specific eigenfunction, the amplitude at that possibility is one, and the amplitudes at all other possibilities are zero. However, in most cases, the wave function can only be expressed as a linear combination consisting of coordinates from many eigenfunctions.
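The eigenstate condition is easy to check numerically. In this sketch, with a made-up diagonal attribute operator, a state parallel to one basic ray yields a definite value (probability one), while a superposition does not.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 5.0]])  # toy attribute operator
values, basis = np.linalg.eigh(A)       # eigenvalues and orthonormal basic rays

eigenstate = basis[:, 1]                               # parallel to one basic ray
superposed = (basis[:, 0] + basis[:, 1]) / np.sqrt(2)  # genuine superposition

for name, psi in [("eigenstate", eigenstate), ("superposition", superposed)]:
    probs = np.abs(basis.T @ psi) ** 2  # squared projections on each basic ray
    print(name, probs)                  # [0. 1.] versus [0.5 0.5]
```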
According to Feynman's version of quantum theory, the unmeasured quon assumes all possible values at the same time. Standing in contrast to this idea, in which the quon assumes all possibilities at once, is the plain fact that any type of measurement yields only one specific result. Therefore, in order for von Neumann's theory to be consistent, we must assume that at some point between the creation of the quantum entity in the quon gun and the observation of an experimental result, a remarkable transformation must occur. At the exact instant the measurement occurs, the quantum entity must cease to be a superposition of possibilities, and must contract into a single possibility, corresponding to the single observed measurement result. This mysterious and radical transformation is called the collapse of the wave function. Von Neumann's all-quantum theory will not work unless this collapse of the wave function actually occurs in every type of quantum measurement. As alluded to earlier, the fundamental paradox of quantum theory is the so-called quantum measurement problem, which can be stated in the following way: how and when does the wave function collapse?
In von Neumann’s analysis of the quantum measurement problem, he proposed that the measurement act could be broken up into a series of small steps. In this way the entire measurement act is visualized as a chain of events stretching from the quon gun, to the phosphor screen, to the observer’s retinas, and finally to the observer’s conscious perception of the measured result. Von Neumann’s goal was to analyze each link in this chain in order to find the most natural place to put the collapse of the Ψ-wave. What he discovered is that we can cut the chain and insert a collapse anywhere we please, but the theory won’t work if we leave it out. Von Neumann reasoned that the only peculiar link in the chain is the moment when the physical signals in the human brain become an actual experience in the human mind. Based on this form of logic, von Neumann reached the conclusion that human consciousness is the only viable site for the collapse of the wave function. Therefore, according to von Neumann, consciousness creates reality.
This idea of consciousness-created reality is a step beyond the claims made by those who subscribe to the observer-created reality interpretation. Observer-created reality enthusiasts simply claim that the observer is free to choose which attribute will be measured. However, they do not claim that the observer determines what the actual result of the measurement will be. Consciousness-created reality enthusiasts, on the other hand, claim that consciousness selects which one of the many possibilities actually becomes realized. Granted, these claims have not been experimentally proven, yet we might still consider some general consequences of this interpretation of quantum theory. If we assume that the basic principles of quantum theory are correct, we can easily derive two such interesting general conclusions. Firstly, as far as the final results are concerned, there is no natural boundary line between the observer and the observed system. Secondly, it is apparently the case that no such interpretation of quantum theory would be complete unless it successfully incorporates the function of consciousness, which seems to be inseparable from the manifestation of particular outcomes in the quantum measurement.
There is, however, another interpretation of quantum theory, which is similar to von Neumann's ideas, but is not dependent on the idea of a wave function collapse. This theory, called the many-worlds interpretation, was developed by Hugh Everett in 1957. Everett, like von Neumann, assumes that there is nothing special about measuring devices and that everything can be represented by Ψ-waves. However, Everett leaves out the collapse of the wave function. Instead, his theory is based on the idea that every possible attribute value of a quon actually becomes realized when the quon interacts with a measuring device. For example, if the quon can assume six possible attribute values, then all of these possibilities actually occur. Everett claims that the entire measurement device branches into many measurement devices, each of which observes a different possible value of the chosen attribute. Given that nobody has ever seen a measuring device split apart in such a way, Everett claims that each possible value is realized in its own parallel universe.
Everett's quantum model implies that at every instant, the universe is a branching tree in which anything that can happen, no matter how improbable, actually does happen. As far out as this claim might seem to our simple egos, this many-worlds interpretation actually addresses the fundamental inconsistencies of quantum theory in a satisfactory manner. For instance, there is no attempt to sanctify the status of measuring devices. In addition, there is no need for the mysterious notion of the wave function collapse, which in itself has never been detected, nor is there any a priori evidence which supports its existence other than the fact that we humans only perceive the occurrence of one event at a time.
Up until now we have only considered the orthodox Copenhagen interpretation and its primary derivatives, i.e. von Neumann's all-quantum theory and Everett's many-worlds interpretation. These theories accept as their basic premise that the fundamental level of reality, namely the quantum world, is governed solely by the statistical laws of quantum possibilities. In addition, these theories also accept the idea that quantum entities are not ordinary objects in the classical sense. An ordinary object possesses definite attributes independently of the observation of those attributes. Indeed, von Neumann, in his book on the foundations of quantum mechanics, derived a proof which asserts that if quantum theory is correct, then the world cannot be made of ordinary objects. However, despite the strong conviction amongst the majority of physicists that no such ordinary-object model of reality could be consistent with the quantum facts, there is a group of physicists who believe that such a model could indeed be produced.
The most famous of these physicists, who opposed the orthodox quantum interpretation, is Albert Einstein. Einstein strongly believed that quantum theory was incomplete because it only gave a statistical account of elementary phenomena. He believed that it was possible to construct an ordinary-object model of reality in which the quantum entities had definite attributes whether or not anybody was observing them. Einstein and the other physicists who believe that an ordinary-object model of reality is possible are sometimes referred to as neorealists. The neorealist position is basically that there exists a deeper, more fundamental, level of reality, which is not described by the quantum wave function. As we have already seen, if we assume that the Ψ-wave tells us everything there is to know about the quon, then it is impossible to predict what the actual result of a measurement will be. The neorealists believe that the Ψ-wave does not tell us everything there is to know. Indeed, they hold the position that hidden, unseen parameters exist at a deeper level, which, if discovered, could be used to predict exactly what will happen in a quantum experiment. For this reason, neorealist theories are also known as hidden-variable theories.
As noted above, von Neumann's proof asserts that no such theory of ordinary objects can explain the quantum facts. However, David Bohm, a protégé of Albert Einstein, was able to develop a hidden-variable theory which is seemingly consistent with the observed quantum facts. Bohm's hidden-variable model of reality, which was developed in 1952, assumes that quantum entities are ordinary objects, such as real particles, which have at all times a definite position and momentum. Whereas the Copenhagen interpretation assumes that an unmeasured quon assumes all possibilities at once, Bohm's theory assumes that an unmeasured quon takes only one path and that this path is ultimately predictable. However, there is a catch. Bohm's theory introduces a new type of wave called the "pilot wave", which organizes the unfolding history of the quantum entity.
The Copenhagenists assert that the Ψ-wave is not real, but merely a fictitious mathematical device which happens to be effectively useful in calculating quantum probabilities. Bohm, on the other hand, asserts that both the quantum entity and the pilot wave are real things which actually exist. Although the pilot wave is supposedly a real entity, in order for Bohm’s theory to be consistent with the facts, this pilot wave must have certain remarkable characteristics which defy our conventional definitions of what is possible in reality. For instance, this pilot wave must connect with every particle in the universe, it must be entirely invisible, and it must transfer information at superluminal speeds, i.e. faster than light.
Of these three, the first two properties of Bohm's pilot wave are familiar within physics in that they are both aspects of the gravitational and electromagnetic fields. Superluminal connections, on the other hand, seem to be the one thing physicists hate most. This is primarily because the existence of superluminal connections would violate many fundamental assumptions of the orthodox theory of physical reality. For example, real superluminal transfers would contradict the orthodox themata which assert that influences can only be mediated by direct interactions. This assumption, that object A can only affect object B via direct subluminal interactions, is called the locality assumption. Also, faster-than-light connections directly imply that the past can be influenced by the future. Most physicists, however, would like to believe that time travels in only one direction, and that what happens within each moment is solely influenced by what has already happened.
In Bohm’s model, each particle in the universe, it is assumed, is associated with a pilot wave. This pilot wave is sensitive to the entire environment of the quantum entity, and the wave changes its form instantly whenever there is a change anywhere in the environment. Conversely, this instantaneously morphing field informs the quantum entity of such changes in the environment, at which point the quantum entity alters its values of position and momentum accordingly.
However, this theory predicts that all pilot waves of all particles are instantaneously connected across the entire universe. This implies that the relevant environment, or measurement situation, which determines the form of the pilot wave, includes all events in the universe across all dimensions of space-time. Understandably, most physicists abhor the idea of faster-than-light, let alone instantaneous, connections, and consequently, many physicists consider Bohm's theory to be absurd. However, although it seems absurd to the quaint common-sense intuitions of most physicists, it was soon proven that these superluminal connections are no accident, but a necessary condition of any theory of reality. Big news!
EPR, Bell’s Theorem, Non-locality, and Superluminal Spaghetti
The proof we mentioned above was devised by John Stewart Bell in 1964, and is known as Bell's interconnectedness theorem. Bell, while studying Bohm's theory, was able to show how an ordinary-object model of reality had been created contrary to the proof of von Neumann, which asserted that no such theory was possible. Obviously, von Neumann's proof contained a loophole. Bell showed that von Neumann's idea of an ordinary object was too limited. Bohm was able to create such a theory by stretching the conventional idea of an ordinary object. Most physicists would not consider any object ordinary if it can change its attributes instantaneously via resonance with some invisible, all-pervasive, superluminal field.
Bell’s theorem, which Bell developed after his work on Bohm and von Neumann, brings into question the assumption of a locally based version of reality, and ultimately proves that the reality which underlies our experience must be non-local. This proof was based on the factual results of an experiment originally designed by Albert Einstein, Boris Podolsky, and Nathan Rosen. Before taking a deeper look into Bell’s theorem and non-locality, let’s briefly discuss the logistics of Einstein’s experiment, which has since come to be known as the EPR experiment.
As described earlier, Albert Einstein believed that quantum theory was not a complete theory of reality. Thus, Einstein designed a specific thought experiment, which supposedly demonstrates that there are aspects of reality that are not accounted for in the quantum theory. In brief, the EPR source emits a pair of phase entangled photons in opposite directions at the speed of light toward two spatially separated detectors. Let’s label these detectors A and B. In a generic form of the EPR experiment, these detectors are designed to measure the polarization attribute of the photons. A simple form of a polarization detector can be realized by using a calcite crystal whose optic axis is pointing in a certain direction. The crystal divides light into two beams. The up beam consists of photons which are polarized along the optic axis, while the down beam consists of photons which are polarized at right angles to the optic axis. Because the photons are phase entangled, the phase of each photon depends on what the other photon is doing. Also, there is only one wave function, which describes both photons. Before the actual measurement, quantum theory predicts that neither photon has a definite value of polarization.
If we assume that each calcite detector is positioned at any arbitrary angle, then each detector will measure a fifty-fifty mixture of up/down results. On the other hand, if we assume that each detector is oriented at the same angle, then we can measure another type of attribute called the parallel polarization attribute. In this case, both photons are always measured to have the same polarization. If the two detectors hold their crystals at a relative angle of ninety degrees, then the polarization value at one detector will always measure the opposite of the other detector. As an example, let's assume that detector A holds its crystal at zero degrees, and that this A detector is located closer to the source than B is. This way, the polarization at A is detected first. Also, let's assume that at an angle of zero degrees, A measures an up value. Quantum theory predicts that if B holds its crystal at zero degrees, it will measure up as well. On the other hand, if B holds its crystal at ninety degrees, it will measure a down-valued polarization. If B holds its crystal at angles other than zero or ninety degrees, quantum theory gives no definite results. For example, if B holds its crystal at forty-five degrees relative to A, then the odds are fifty-fifty that B will measure an up value.
Quantum theory predicts that except at certain angles, such as zero and ninety degrees, the result of B’s measurement is determined by quantum randomness. In other words, at angles between zero and ninety degrees, the measurement at B is determined by blind chance. However, Einstein argues that since the photons are in what can be called a twin state, if detector A is measured first at any particular angle, then the photon at the other detector must possess a definite polarization attribute value prior to its interaction with the detector, which could be set at any angle. Einstein also argues that quantum theory only gives a statistical interpretation of attribute values which truly have a definite existence before the act of measurement. Therefore, Einstein concludes that quantum theory is not a complete theory of reality. The basic assumption which Einstein makes is that, after the photons have left the source, the situation at detector B is not affected by how detector A chooses to hold its crystal. This premise is known generally as the locality assumption. Einstein’s argument can only be refuted in two ways: either the locality assumption is violated, or there is no such thing as two spatially separated events. This perplexing thought experiment is known as the EPR paradox.
While studying this thought experiment, Bell considered what would happen to the measurements of each detector if the calcite crystals vary their angles between zero and ninety degrees. Bell incorporates a type of polarization attribute which measures how these results are correlated. This attribute can be called the polarization correlation attribute, labeled PC(θ). As before, if the relative angular difference between each detector is zero, then the measurements are perfectly correlated, thus PC(0) = 1. If the crystals are set at a difference of ninety degrees, the measurements are perfectly uncorrelated, thus PC(90) = 0. At angles between zero and ninety degrees the value of PC is some fraction between 0 and 1.
The value of PC(θ), for angles between zero and ninety degrees, can be measured by firing many pairs of phase-entangled photons and then comparing the series of measurement values recorded at each detector. The polarization correlation attribute is a measure of the fraction of matches between two detectors over a long series of photon pair emissions. Imagine that each list of measurements is a type of binary message. If A and B receive exactly the same messages, then the PC(θ) value is one, and the angle between each crystal must be zero degrees. If A and B receive exactly opposite messages, then the PC(θ) value is zero, and the relative angle must be ninety degrees. In between these two extremes, the two messages will contain a fraction of errors. For example, let's assume that if the crystals are oriented at a relative angle α, then the two binary messages differ by one out of every four bits. In other words, the error rate between the two messages is ¼. Thus, at angle α, the polarization correlation attribute value is three correlated results for every four photon pairs, that is, PC(α) = ¾. Everything presented so far concerning this type of experiment is based purely on scientific fact.
To understand Bell's theorem, let's first assume that both crystals are set vertically at zero degrees. Now we rotate the A crystal α degrees in the clockwise direction, and we rotate the B crystal α degrees in the counterclockwise direction. These crystals are now separated by a relative difference of 2α degrees. Bell, like Einstein, makes only one fundamental assumption, which asserts that the situation at detector A does not affect what is happening at detector B; this is known as the locality assumption. This assumption appears reasonable since these photons are flying away from each other at the speed of light. If we assume locality, it follows that if the error rate at angle α is equal to ¼, then the error rate at angle 2α must be less than or equal to ½. The reasoning is simple: under locality, rotating A alone by α introduces errors at a rate of ¼, rotating B alone by α does the same, and when both crystals are rotated, a mismatch can occur only if one rotation or the other flips a bit (two simultaneous flips cancel), so the combined error rate can be at most the sum of the two separate rates. This expression is an example of what is known as Bell's inequality. This inequality is a direct consequence of the locality assumption.
However, the equation for the polarization correlation attribute can be derived mathematically such that PC(θ) = cos²θ. For this equation, PC(30) = ¾, and the error rate equals ¼; that is, one error between each message for every four photon pairs. However, at twice this angle, PC(60) = ¼, and thus the error rate becomes ¾. This result is a direct violation of Bell's inequality, which predicts that the fraction of errors between the messages cannot be greater than ½. Let's recap Bell's argument. First we assume that reality is strictly local. This assumption leads directly to a specific inequality. Surprisingly, whenever this experiment is performed, the results violate this inequality. Therefore, since we reached a contradiction, our original assumption must be false, i.e. reality is non-local.
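The arithmetic of the violation fits in a few lines. This sketch compares the quantum error rate, 1 − cos²θ, against the local bound that the error rate at 2α can be at most twice the error rate at α.

```python
import numpy as np

def error_rate(theta_deg):
    """Quantum error rate between the two messages: 1 - PC(theta)."""
    return 1 - np.cos(np.radians(theta_deg)) ** 2

alpha = 30.0
print(f"E({alpha:.0f}) = {error_rate(alpha):.2f}")                    # 0.25
print(f"local bound on E({2*alpha:.0f}) = {2*error_rate(alpha):.2f}") # 0.50
print(f"quantum E({2*alpha:.0f}) = {error_rate(2*alpha):.2f}")        # 0.75 -> Bell violation
```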
Bell's proof does not demonstrate any observable type of non-local interaction. He merely proved that the correlation between two twin-state photons is so strong that no version of local reality can account for the mathematically predicted violation of Bell's inequality. Indeed, Bell's idea was experimentally put to the test, first by John Clauser in 1972, and later by Alain Aspect in 1982. Both of these technologically sophisticated experiments produced results that directly violate Bell's inequality. Therefore, unless we resort to drastic counter-arguments, such as the claim that there is no reality at all, or that everything in the world is entirely predetermined to the infinite degree, there is no way to save the locality assumption from its descent into the dustbin of history. Through careful consideration of the experimental facts, it is now safe to say that locality is just as outdated and incorrect as the idea that the Earth is flat.
Although John Bell only proved that non-locality is a necessary factor in describing a particular twin-state photon experiment, we can extend this idea to include everything that exists in reality. We can make this type of assertion because quantum theory predicts a phenomenon known as phase entanglement. Whenever two quantum entities interact, their phases get mixed up. As these entities interact and then depart their separate ways, the amplitudes of each Ψ-wave come apart, but the phases of the two quons remain connected. Indeed, the strong correlation between these two EPR photons is a direct result of the fact that they were created from the same source, and are thus phase entangled. This prediction of phase entanglement was recognized by physicists who preceded Bell; however, Bell was the first to show how this phenomenon could actually be demonstrated in the real world.
The basic idea of phase entanglement and non-locality rests on the idea that once two entities have interacted, they are eternally connected by the correlation between their mutual phases. An important consequence of all this is the fact that the so-called entire measurement situation, which determines the attribute values of a quon, must include situations, measurements, and events everywhere throughout the universe. Moreover, non-locality implies that the entire measurement situation, of even a simple quantum experiment here on Earth, must include all measurements and events everywhere in the universe across all scales and dimensions of time. Presented another way, non-locality implies that every thing is everywhere, and no thing is really separate from anything else. Indeed, everything together is only one thing. If we try to restrict ourselves by measuring only part of the one thing, we will inevitably encounter a limit on how accurate our predictions may be. After all, there's nothing special about Planck's constant h. Perhaps the unavoidable quantum uncertainty is a direct consequence of our naïve assumption that we are separate from that which we are observing.
Hyperdimensional Holon Attractors and the B-Sense
Thus far, we have embarked on quite a lengthy discussion of quantum theory and its various interpretations. Although many concepts have been addressed in this project, the majority of the details have been left out. In addition, our exploration through this quantum realm has merely scratched the surface, and much of the most interesting terrain has yet to be explored. Although there are more advanced forms of exploration which lie beyond the scope of this project, it is my hope that this preliminary exploration will form a stable foundation such that further developments and interpretations may be explored at a later time. For now, let’s conclude this journey with an overall survey of some ideas which might form the basis of future, more detailed, endeavors into quantum theory. It should be obvious to the reader that the implicate seeds contained within some of the ideas to follow will certainly contradict, and bring into question, many of our presently held notions concerning the nature of reality and consciousness.
First of all, quantum theory, in the broadest sense, is a theory of whole entities. Any representation of a quantum entity must include a joint description of the entity itself as well as its observational context. It must be remembered at all times that there is no real distinction between the attributes of any aspect of reality and the experience of those attributes relative to a specific observer. Furthermore, if multiple observers are measuring the same quantum system but in different ways, the experience of these observers will also differ. That is, the experience of reality is relative to one’s frame of reference. In addition, regardless of whether we subscribe to an ordinary-object based interpretation, or to a statistical interpretation, the idea that the manifestations of physical reality are self-organized by abstract fields of possibility is unavoidable.
Another unavoidable conclusion is that these fields must be interconnected in such a way that it makes absolutely no sense to speak of them as separate fields. For example, let’s consider two seemingly separate quantum entities, each represented by its own quantum vector in its own frame of reference in Hilbert space. If these two entities become entangled, then the composition of the two Hilbert spaces, H_a and H_b, can be represented by the tensor product H_a ⊗ H_b, and the joint state is a single vector in this new Hilbert space. In other words, entangled entities are not represented by separate quantum fields, but are represented by only one Ψ-wave. However, it is my contention that every aspect of reality is already phase entangled. This would certainly be the case if the cosmological Big Bang theory were correct. If it is true that everything that exists is indeed part of one phase-entangled quantum system, then it might prove useful to consider the likely existence of a universal wave function. Although this field would be incomprehensibly complex, the nature of non-locality assures us that whatever it is, it is within every thing.
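To make the tensor-product point concrete, here is a small numpy sketch (an illustration of the standard formalism; the variable names are mine): a separable two-quon state factors into independent vectors, while an entangled twin state is irreducibly one vector.

```python
import numpy as np

# Single-quon basis states in a 2-dimensional Hilbert space
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# A separable two-quon state lives in the tensor product H_a (x) H_b
product_state = np.kron(up, down)

# A twin-state (Bell) pair: a single Psi-wave for both quons jointly
bell = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Reshaped to a 2x2 matrix, a separable state has (Schmidt) rank 1;
# an entangled state does not factor, so its rank exceeds 1.
print(np.linalg.matrix_rank(product_state.reshape(2, 2)))  # 1
print(np.linalg.matrix_rank(bell.reshape(2, 2)))           # 2
```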
Another interesting aspect of quantum theory is the manner in which quantum waves morph over time. To consider a particular example, let’s measure the position attribute of a quon. First of all, we shall assume for simplicity that we live in 3-dimensional Euclidean space. The realm of possible values for position includes all points in a 3D continuum. Obviously, there are an uncountably infinite number of possible positions. Each point in 3D configuration space is represented by a single dimension in Hilbert space. The wave function of our quon is a vector in this space, which has a unique decomposition into complex projection components along each dimension. To find the probability that the quon will be measured at the point (x₀, y₀, z₀), we simply take the square of the amplitude of the projection onto that point at instant t₀. At a different instant in time, t₁, the quantum state will be represented by a different quantum vector. If we assume time is a continuum, the transition between these two states can be visualized as a spinning vector in Hilbert space.
However, it appears that we could modify our example to give the full picture at one glance, as opposed to watching our quantum vector spin around. Firstly, we will expand our domain of possibilities from all points in 3D configuration space to the domain of all points in 3D configuration space for all time. Thus, each possible value of position is now a point (x, y, z, t). The corresponding Hilbert space is exactly what we would get if we assumed each possibility is a point in a 4D continuum. Thus, we still have an uncountably infinite number of dimensions. Regardless, we can still represent the wave function as a vector in this new Hilbert space which decomposes into projections along each dimension. The square of the possibility amplitude in this example will give the probability of measuring the quon at a point in a 4-dimensional continuum. Both these examples are representations of the same thing, except that the first example required a spinning quantum vector to represent all possible instants, whereas the second example required only one quantum vector, which represents the entire quantum state from a higher dimensional perspective. The upshot of this argument is that any particular morphing field of possibilities can be represented, alternatively, by a single stationary vector at a higher dimensional level of mathematical abstraction.
We have also seen that one quantum vector in Hilbert space looks exactly like every other, namely a unit vector which has an absolute magnitude of one. Indeed, it is merely our choice of which attribute we want to measure that determines the probability distribution of all unrealized possibilities. The quantum vector can only be analyzed by choosing a specific frame of reference. In this way a quon’s tendencies to exist in a certain state are inseparably determined by how we choose to observe the system. Since all quantum vectors in Hilbert space are essentially the same, and since the only perceived difference is a result of different possible choices of a frame of reference, it appears safe to say that there may only be one quantum entity. This is indeed the basic assertion of quantum theory, which utilizes one basic description for all possible quantum entities, namely an abstract wave function, Ψ, and a particular reference frame. Let’s suppose, for fun, that there is only one fundamental quantum entity. Outside the context of a measurement situation, it is meaningless to say anything about this entity. However, once we define a frame of reference, we can then derive the basic characteristics of the Ψ-wave, which represents a specific attribute, or quality, of the one quantum entity. I feel that it might be useful to introduce a new concept to the existing version of quantum theory. As before, we understand Ψ to be an abstract field of possibilities within a given reference frame. Now, let us introduce a new symbol, which we shall define as the field of all possible reference frames. Whereas Ψ is a representation of the quantum entity given one specific attribute reference frame, this new symbol is a broader representation of the quantum entity given all possible attribute reference frames. This concept is somewhat analogous to putting an arbitrary wave through all possible waveform family prisms at the same time. I am not sure if this is actually possible, or what the actual result might be; however, I feel that the basic idea could be handled simply by modifying the existing structure of quantum theory.
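The claim that one and the same unit vector yields different probability distributions depending on the chosen reference frame can be illustrated with a short sketch (the two bases here are arbitrary choices of mine):

```python
import numpy as np

psi = np.array([1, 0], dtype=complex)  # one fixed unit vector

# Two different "attribute reference frames" (orthonormal bases)
basis_a = [np.array([1, 0], dtype=complex),
           np.array([0, 1], dtype=complex)]
s = 1 / np.sqrt(2)
basis_b = [np.array([s, s], dtype=complex),
           np.array([s, -s], dtype=complex)]

def probabilities(state, basis):
    """Squared amplitudes of the projections onto each basis vector."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

print(probabilities(psi, basis_a))  # [1.0, 0.0] -- a definite outcome
print(probabilities(psi, basis_b))  # [0.5, 0.5] -- maximal uncertainty
```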
For example, each possible reference frame could be represented by an independent dimension in some new type of space for which we have no name. Obviously, there are an infinite number of possible reference frames, and thus this symbol truly represents an infinite-dimensional field, which includes all possibilities. Whereas the coordinate values of Ψ are represented by complex vectors, the coordinate values of this new field could be represented by vectors in Hilbert space; that is, each dimension in our new space represents a possible Hilbert space. If each specific frame of reference is a sense, i.e. a context, then the field of all possible reference frames is truly the broadest sense. In general, we could say that by itself this field is completely undefined, and at the same time assumes all possibilities at once. Any particular Ψ-wave is generated simply by slicing this infinite-dimensional field with a lower-dimensional reference frame. This idea of slicing is a metaphor used for the creation of level-sets, which are lower-dimensional projections of a higher-dimensional object. In other words, all morphing quantum fields of possibility are created by reflecting this field at different angles.
Each angle of perception constitutes its own frame of reference. In reality, all possible reference frames, or dimensions, are realized simultaneously. However, as a result of our ordinary mode of human consciousness, we specific ego-centered entities only perceive reality along one dimension at a time. If one is able to broaden his perception to include multiple reference frames, it is possible to experience reality along more than one dimension at a time. This is by no means a rigorous treatment of the concept in question; however, it is proposed simply because it is interesting to consider such claims given our current exploration into the unknown.
In addition, it should also be said that this idea, outlined above, will not work unless we assume that the ultimate source of all creation is right now. This claim seems justified in a number of ways. For instance, who has ever had a real experience of the future or the past anyway? Every experience of reality is always in the now. The past and the future can only be represented by fields of possibility; however, at every now, only one thing is actually happening. The idea that only now exists is consistent with experimental fact, because every type of quantum measurement yields only one actual event. It is my claim that, from the perspective of an infinite-dimensional field of all possible reference frames, everything in space-time already exists right now. From the perspective of a lower-dimensional reference frame, events appear to be separated by space and time.
Another claim, which I feel is justified, is that the existing formulation of quantum theory applies to all entities regardless of their size. Quantum theory was discovered in the realm of atomic and sub-atomic particles because at these scales of reality, the effects of quantum waves become dramatically obvious. It is generally assumed that at a certain limit, the quantum laws converge to the normal everyday laws of ordinary experience. This may be the case for many types of attributes which physicists are preoccupied with, but in no way does it rule out the possibility that there may exist presently undiscovered quantum relationships between macroscopic entities such as humans, plants, star systems, or ant colonies.
Logically, quantum theory applies to all things primarily because everything is made from the same stuff. It is all woven from the same fabric. It should be obvious that there is no natural division between the realm of a super-cluster of galaxies and the realm of a bunch of quarks. However, macroscopic entities such as humans and stars are not merely made of atoms, nor are they merely made of quarks. In general, it seems that everything consists of frequencies of energy-mass; however, on an even deeper level, these frequencies don’t exist unless we first define a reference frame. Therefore, it appears that the ultimate stuff of reality is simply pure infinite possibility.
As noted earlier, quantum entities are necessarily whole beings. Obviously, physical scientists have been able to detect atomic and sub-atomic phenomena; however, I would argue that, as opposed to being made of such building-block-like parts, each quantum whole is a hyperdimensional complex within which reside lower-dimensional wholes. At the same time, each whole is embedded in a broader context of an even higher dimension. In other words, the fields which organize individual quarks are contained within broader sense fields which organize individual atoms. Atom fields, in turn, can be represented within molecular fields, which can be represented within cellular fields. In this way, we can easily conceptualize human fields, collective species fields, planetary fields, star system fields, and galactic fields.
[Figure: a modern travel-egg, used primarily for interdimensional trips to parallel universes, and a possible dimension you may be interested in travelling to.]
It is also extremely likely that similar fields exist which organize other types of dynamic systems as well. The following list represents just a few examples: the weather, the stock market, a flock of birds, the rise and fall of human civilizations, and the development of an embryo. I would even go one step further and propose that quantum Ψ-waves could also be utilized to represent entities such as thoughts, ideas, dreams, and memories. These exotic entities, such as ideas, should qualify as quons because they have both continuous wave-like characteristics, as well as discrete particle-like characteristics. Indeed, it seems obvious that any generic quantum entity is quite similar to a memory in that it is possible to represent both using abstract fields of possibility. One thing is for sure, the realm of quantum waves is more like an ocean of ideas than like a box, filled with the hard ordinary objects we’re used to here in physical reality.
Physicists have been able to derive formulas for elementary quantum processes because they are simple in comparison to the more complex entities such as galaxies and ideas. It is extremely difficult to derive the wave function for a molecule, let alone a human being. In fact, I would say that it is impossible to calculate the dynamic field properties of a simple multi-cellular organism even with the most advanced supercomputers of the next 100 years. You might as well forget about using a pencil and paper. The main reason is that there are simply too many variables to keep track of. The only available mechanism, which is capable of computing such astronomically complex forms of relationships, is the electromagnetic neuro-chemical circuitry of the organismic bio-computer that we call the human body. In other words, we already possess a natural mechanism which can navigate through these fields intuitively, as opposed to analytically.
Moreover, advanced forms of bio-technology, such as human beings and stars, can easily tap into even more powerful systems of organic bio-technology. For example, humans can open a direct connection to the planet, our larger whole, which in itself is an incomprehensibly more evolved expression of the one quantum entity. This idea is analogous to a network of computers which are all connected by a mainframe or a hub. The sun, in turn, can open a direct connection to the center of the galaxy, an even larger whole. Implicit in this view is the necessary assumption that all forms of quantum whole entities are expressions of consciousness. This does not mean that galaxies are conscious in the same way that humans are, but it does imply that all forms of creation, no matter how alien, are truly conscious in their own way.
In fact, it seems that humans today are operating in safe-mode, which mysteriously limits our capabilities to roughly ten percent of our full computing potential. If we were able to turn on our full potential, we might really be surprised by the complexity, and dimensionality, of the patterns which we are capable of perceiving. As a final remark, it is important to note again that quantum theory does not apply solely to the unbelievably small scale of electrons, photons, and quarks. Quantum theory is actually a mathematical description of a fundamentally deeper level of reality, which precedes and organizes the manifestations of physical existence according to the dynamics of hyperdimensional fields of possibility. The ultimate source of all these fields is most likely the cosmic imagination of God/Goddess, which expresses itself through all things. At the most fundamental level of reality, everything in the Universe is a seamless unbroken extension of itself, which, in turn, is constantly observing itself at different angles, and re-creating itself right now in an infinite number of ways. Oh psy . . . it’s time to wave goodbye!
[Figures: a hyperdimensional human in trance; a hyperdimensional human beyond 2012.]
Bohm, David. Wholeness and the Implicate Order. Routledge, London and New York, 1980.
Herbert, Nick. Quantum Reality: Beyond the New Physics. Doubleday, 1985.
Von Neumann, John. The Mathematical Foundations of Quantum Mechanics. Princeton University Press, 1955.
Zukav, Gary. The Dancing Wu Li Masters. Bantam New Age, 1979.
Overcoming inertia
The tremendous accelerations involved in the kind of spaceflight seen on Star Trek would instantly turn the crew to chunky salsa unless there was some kind of heavy-duty protection. Hence, the inertial damping field.
— Star Trek: The Next Generation Technical Manual, page 24.
For a space opera RPG setting I am considering adding inertia manipulation technology. But can one make a self-consistent inertia dampener without breaking conservation laws? What are the physical consequences? How many cool explosions, superweapons, and other tropes can we squeeze out of it? How to avoid the worst problems brought up by the SF community?
What inertia is
As Newton put it, inertia is the resistance of an object to a change in its state of motion. Newton’s force law F=ma is a consequence of the definition of momentum, p=mv (which in a way is more fundamental, since it ties in directly with conservation laws). The mass in the formula is the inertial mass. Mass is a measure of how much matter there is, and we normally multiply it by a hidden constant of 1 to get the inertial mass – this constant is what we will want to mess with.
There are relativistic versions of the laws of motion that handle momentum and inertia for high velocities, where the kinetic energy becomes so large that it starts to add mass to the whole system. This makes the total inertia go up, as seen by an outside observer, and looks like a nice case for inertia-manipulating tech being vaguely possible.
However, Einstein threw a spanner into this: gravity also acts on mass and conveniently does so exactly as much as inertia: gravitational mass (the masses in F=Gm_1m_2/r^2) and inertial mass appear to be equal. At least in my old school physics textbook (early 1980s!) this was presented as a cool unsolved mystery, but it is a consequence of the equivalence principle in general relativity (1907): all test particles accelerate the same way in a gravitational field, and this is only possible if their gravitational mass and inertial mass are proportional to one another.
So, an inertia manipulation technology will have to imply some form of gravity manipulation technology. Which may be fine from my standpoint, since what space opera is complete without antigravity? (In fact, I already had decided to have Alcubierre warp bubble FTL anyway, so gravity manipulation is in.)
Playing with inertia
OK, let’s leave relativity to the side for the time being and just consider the classical mechanics of inertia manipulation. Let us posit that there is a magical field that allows us to dial up or down the proportionality constant for inertial mass: the momentum of a particle will be p=\mu m v, the force law F=\mu m a and the formula for kinetic energy K=(1/2) \mu m v^2. \mu is the effect of the magic field, running from 0<\mu<\infty, with 1 corresponding to it being absent.
I throw a 1 g ping-pong ball at 1 m/s into my inertics device and turn on the field. What happens? Let us assume the field is \mu=1000. Now the momentum and kinetic energy jump by a factor of 1000 if the velocity remains unchanged. Were I to catch the ball I would have gained 999 times its original kinetic energy: this looks like an excellent perpetual motion machine. Since we do not want that to be possible (a space empire powered by throwing ping-pong balls sounds silly) we must demand that energy is conserved.
Velocity shifting to preserve kinetic energy
One way of doing energy conservation is for the velocity of my heavy ping-pong ball to go down. This means that the new velocity will be v/\sqrt{\mu}. Inertia-increasing fields slow down objects, while inertia-decreasing fields speed them up.
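A minimal sketch of this bookkeeping, under the stated model (p = \mu m v inside the field, kinetic energy held fixed; the helper names are mine):

```python
def ke_preserving_shift(m, v, mu):
    """Inside the field: p = mu*m*v', K = 0.5*mu*m*v'**2, with K held fixed."""
    v_new = v / mu ** 0.5
    k_before, k_after = 0.5 * m * v**2, 0.5 * mu * m * v_new**2
    p_before, p_after = m * v, mu * m * v_new
    return v_new, k_before, k_after, p_before, p_after

# 1 g ping-pong ball at 1 m/s entering a mu = 1000 field (SI units)
v_new, k0, k1, p0, p1 = ke_preserving_shift(0.001, 1.0, 1000.0)
print(v_new)   # ~0.0316 m/s: the ball slows down
print(k0, k1)  # both 5e-4 J: energy is conserved
print(p0, p1)  # 0.001 vs ~0.0316 kg m/s: momentum is not (see below)
```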
One could have a force-field made of super-high inertia that would slow down incoming projectiles. At first this seems pointless, since once they get through to the other side they speed up and will do the same damage. But we could of course put a bunch of armour in this field, and have it resist the projectile. The kinetic energy will be the same but it will be a lower velocity collision, which means that the strength of the armour has a better chance of stopping it (in fact, as we will see below, we can use superdense armour here too). Consider the difference between being shot with a rifle bullet and being slowly but strongly stabbed by it: in the latter case the force can be distributed by a good armour over a vast surface. Definitely a good thing for a space opera.
A spacecraft that wants to get somewhere fast could just project a low \mu field around itself and boost its speed by a huge 1/\sqrt{\mu} factor. Sounds very useful. But now an impacting meteorite will both have a high relative speed, and when it enters the field get that boosted by the same factor again: impacts will happen at velocities increased by a factor of 1/\mu as measured by the ship. So boosting your speed by a factor of 1000 will give you dust hitting you at speeds a million times higher. Since typical interplanetary dust already moves at a few km/s, we are talking about hyperrelativistic impactors. The armour above sounds like a good thing to have…
Note that any inertia-reducing technology is going to improve rockets even if there is no reactionless drive or other shenanigans: you just reduce the inertia of the reaction mass. The rocket equation no longer bites: sure, your ship is mostly massive reaction mass in storage, but to accelerate the ship you just take a measure of that mass, restore its inertia, expel it, and enjoy the huge acceleration as the big engine pushes the overall very low-inertia ship. There is just a snag in this particular case: when restoring the inertia you somehow need to give the mass enough kinetic energy to be at rest in relation to the ship…
This kind of inertics does not make for a great cannon. I can certainly make my projectile speed up a lot in the bore by lowering its inertia, but as soon as it leaves it will slow down. If we assume a given amount of force F accelerating it along a bore of length L, it will pick up FL joules of kinetic energy from the work the cannon does – independent of mass or inertia! The difference may be power: if you can only supply a certain energy per second, like in a coilgun, having a slower projectile in the bore is better.
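To see the FL result concretely, here is a sketch assuming a constant force along the bore (the parameter values are arbitrary):

```python
def cannon_exit(F, L, m, mu):
    """Muzzle energy and speeds for a projectile fired with inertia factor mu.

    Work-energy theorem: the projectile picks up K = F*L joules in the bore
    regardless of mu; leaving the field (KE-preserving model) rescales the
    speed back to the mu = 1 value.
    """
    K = F * L
    v_in_bore = (2 * K / (mu * m)) ** 0.5
    v_outside = (2 * K / m) ** 0.5  # the same for every mu
    return K, v_in_bore, v_outside

for mu in (1.0, 0.01):
    print(mu, cannon_exit(F=1e5, L=10.0, m=1.0, mu=mu))
# K = 1e6 J in both cases; only the in-bore speed differs.
```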
Note that entering and leaving an inertics field will induce stresses. A metal rod entering an inertia-increasing field will have the part in the field moving more slowly, pushing back against the not slowed part (yet another plus for the armour!). When leaving the field the lighter part outside will pull away strongly.
Another effect of shifting velocities is that gases behave differently. At first it looks like changing speeds would change temperature (since we tend to think of the temperature of a gas as how fast the molecules are bouncing around), but actually the kinetic temperature of a gas depends on (you guessed it) the average kinetic energy. So that doesn’t change at all. However, the speed of sound should scale as \propto 1/\sqrt{\mu}: it becomes far higher in the inertia-dampening field, producing helium-voice-like effects. Air molecules inside an inertia-decreasing field would tend to leave more quickly than outside air would enter, producing a pressure difference.
Momentum conservation is a headache
Changing the velocity so that energy is conserved unfortunately has a drawback: momentum is not conserved! I throw a heavy object at my inertics machine at velocity v, momentum mv and energy (1/2)mv^2; it reduces its inertia and increases the speed to v/\sqrt{\mu}, keeps the kinetic energy at (1/2)mv^2, and the momentum is now \mu m(v/\sqrt{\mu}) = \sqrt{\mu}\,mv.
What if we assume the momentum change comes from the field or machine? When I hit the mass M machine with an object, balancing the momentum books requires the machine to change its velocity by w=mv(1-\sqrt{\mu})/M. When set to decrease inertia it is pushed forward a bit, potentially moving up to speed (m/M)v as \mu approaches 0. When set to increase inertia it recoils in the opposite direction, and in fact it can get arbitrarily large velocities as \mu grows.
This sounds odd. Demanding momentum and energy conservation requires mv = \sqrt{\mu}\,mv + Mw (giving the above formula) and mv^2 = \mu m(v/\sqrt{\mu})^2 + Mw^2, which insists that w=0. Clearly we cannot have both.
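Numerically the clash looks like this (a sketch: solve for the recoil w that balances momentum, then check the energy books; all values are made up):

```python
m, M, v, mu = 0.1, 100.0, 10.0, 4.0  # object, machine, speed, field strength

v_new = v / mu ** 0.5                # KE-preserving velocity shift
p_object = mu * m * v_new            # = sqrt(mu) * m * v

w = (m * v - p_object) / M           # recoil that balances the momentum books
energy_before = 0.5 * m * v**2
energy_after = 0.5 * mu * m * v_new**2 + 0.5 * M * w**2

print(w)                             # -0.01 m/s: a nonzero recoil...
print(energy_before, energy_after)   # 5.0 vs 5.005 J: ...breaks the energy books
```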
I don’t know about you, but I’d rather keep energy conserved. It is more obvious when somebody cheats energy conservation.
Still, as Einstein pointed out using 4-vectors, momentum and energy conservation are deeply entangled – one reason inertics isn’t terribly likely in the real world is that they cannot be separated. We could of course try to conserve 4-momentum ((E/c,\gamma \mu m v_x, \gamma \mu m v_y, \gamma \mu m v_z)), which would look like changing both energy and normal momentum at the same time.
Energy gain/loss to preserve momentum
What about just retaining the normal momentum rather than the kinetic energy? The new velocity would be v/\mu, and the new kinetic energy would be K_1=(1/2)\mu m (v/\mu)^2 = (1/2)mv^2/\mu = K_0/\mu. Just like in the kinetic energy preserving case the object slows down (or speeds up), but more strongly. And there is an energy difference of K_0(1-1/\mu) that somebody needs to account for.
One way of resolving energy conservation is to demand that the change in energy is supplied by the inertia-manipulation device. My ping-pong ball does not change momentum, but if its inertia is reduced by a factor of 1000 it needs roughly 0.5 J – 999 times its original kinetic energy of 0.5 mJ – to reach its new kinetic energy. The device has to provide that. When the ball leaves the field there will be a surge of energy the device needs to absorb back. Some nice potential here for things blowing up in dramatic ways, a requirement for any self-respecting space opera.
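The same ping-pong ball in this model, as a sketch (assuming a reduction factor \mu = 1/1000; helper names are mine):

```python
def momentum_preserving_shift(m, v, mu):
    """Velocity and energy ledger when momentum is held fixed."""
    v_new = v / mu                  # so that mu*m*v_new = m*v
    k0 = 0.5 * m * v**2
    k1 = 0.5 * mu * m * v_new**2    # = k0 / mu
    return v_new, k0, k1, k1 - k0   # the last term is the device's bill

v_new, k0, k1, bill = momentum_preserving_shift(0.001, 1.0, 1e-3)
print(v_new)   # 1000 m/s
print(k0, k1)  # 5e-4 J vs 0.5 J
print(bill)    # ~0.4995 J that the inertics device must supply
```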
If I want to accelerate my spaceship in this setting, I would point my momentum vector towards the target, reduce my inertia a lot, and then have to provide a lot of kinetic energy from my inertics devices and power supply. At first this sounds like it is just as bad as normal rocketry, but in fact it is awesome: I can convert my electricity directly into velocity without having to lug around a lot of reaction mass! I will even get it back when slowing down, a bit like electric brake regeneration systems. The rocket equation does not apply beyond getting some initial momentum. In fact, the less velocity I have from the start, the better.
At least in this scheme inertia-reduced reaction mass can be restored to full inertia within the conceptual framework of energy addition/subtraction.
One drawback is that now, when I run into interplanetary dust, it will drain my batteries as the inertics system needs to give it a lot of kinetic energy (which will then go on harming me!).
Another big problem (pointed out by Erik Max Francis) is that turning energy into kinetic energy gives an energy requirement dK/dt = mva, which depends on an absolute speed. This requires a privileged reference frame, throwing out relativity theory. Oops (but not unexpected).
Energy addition/depletion makes traditional force-fields somewhat plausible: a projectile hits the field, and we use the inertics to reduce its kinetic energy to something manageable. A rifle bullet has a few thousand Joules of energy, and if you can drain that it will now harmlessly bounce off your normal armour. Presumably shields will be depleted when the ship cannot dissipate or store the incoming kinetic energy fast enough, causing the inertics to overload and then leaving the ship unshielded.
This kind of inertics allows us to accelerate projectiles using the inertics technology, essentially feeding them as much kinetic energy as we want. If you first make your projectile super-heavy, accelerate it strongly, and then normalise the inertia it will now speed away with a huge velocity.
A metal rod entering this kind of field will experience the same type of force as in the kinetic energy respecting model, but here the field generator will also be working on providing energy balance: in a sense it will be acting as a generator/motor. Unfortunately it does not look like it could give a net energy gain by having matter flow through.
Note that this kind of device cannot be simply turned off like the previous one: there has to be an energy accounting as everything returns to \mu=1. The really tricky case is if you are in energy-debt: you have an object of lowered inertia in the field, and cut the power. Now the object needs to get a bunch of kinetic energy from somewhere. Sudden absorption of nearby kinetic energy, freezing stuff nearby? That would break thermodynamics (I could set up a perpetual motion heat engine this way). Leaving the inertia-changed object with the changed inertia? That would mean there could be objects and particles with any effective mass – space might eventually be littered with atoms with altered inertia, becoming part of normal chemistry and physics. No such atoms have ever been found, but maybe that is because alien predecessor civilisations were careful with inertial pollution.
Other approaches
Gravity manipulation
Another approach is to say that we are manipulating spacetime so that inertial forces are cancelled by a suitable gravity force (or, for purists, that the acceleration due to something gets cancelled by a counter-acceleration due to spacetime curvature that makes the object retain the same relative momentum).
The classic is the “gravitic drive” idea, where the spacecraft generates a gravity field somehow and then free-falls towards the destination. The acceleration can be arbitrarily large but the crew will just experience freefall. Same thing for accelerating projectiles or making force-fields: they just accelerate/decelerate projectiles a lot. Since momentum is conserved there will be recoil.
The force-fields will however be wimpy: essentially it needs to be equivalent to an acceleration bringing the projectile to a stop over a short distance. Given that normal interplanetary velocities are in tens of kilometres per second (escape velocity of Earth, more or less) the gravity field needs to be many, many Gs to work. Consider slowing down a 20 km/s railgun bullet to a stop over a distance of 10 meters: it needs to happen over a millisecond and requires a 20 million m/s^2 deceleration (2.03 megaG).
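Checking those numbers (a sketch assuming constant deceleration over the stopping distance):

```python
v = 20_000.0   # m/s, incoming railgun bullet
d = 10.0       # m, stopping distance
g = 9.80665    # m/s^2, standard gravity

a = v**2 / (2 * d)  # constant deceleration over distance d
t = v / a           # stopping time

print(a)      # 2e7 m/s^2
print(a / g)  # ~2e6: about two million gravities
print(t)      # 1e-3 s: it all happens in a millisecond
```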
If we go with energy and momentum conservation we may still need to posit that the inertics/antigravity draws power corresponding to the work it does. Make a wheel turn because of an attracting and repelling field, and the generator has to pay for the work (plus experience a torque). Make a spacecraft go from point A to B, and it needs to pay the potential energy difference, the momentum change, and at least temporarily the gain in kinetic energy. And if you demand momentum conservation for a gravitic drive, then you have the drive pulling back with the same “force” as the spacecraft experiences. Note that energy and momentum in general relativity are only locally conserved; at least this kind of drive can handwave some excuse for breaking local momentum conservation by positing that the momentum now resides in an extended gravity field (and maybe gravitational waves).
Unlike the previous kinds of inertics this doesn’t change the properties of matter, so the effects on objects discussed below do not apply.
One problem is edge tidal effects. Somewhere there is going to be a transition zone where there is a field gradient: an object passing through is going to experience some extreme shear forces and likely spaghettify. Conversely, this makes for a nifty weapon ripping apart targets.
One problem with gravity manipulation is that it normally has to occur through gravity, which is both very weak and only has positive charges. Electromagnetic technology works so well because we can play positive and negative charges against each other, getting strong effects without using (very) enormous numbers of electrons. Gravity (and gravitomagnetic effects) normally only occurs due to large mass-energy densities and momenta. So for this to work there better be antigravitons, negative mass, or some other way of making gravity behave differently from vanilla relativity. Inertics can typically handwave something about the Higgs field at least.
Forcefield manipulation
This leaves out the gravity part and just posits that you can place force vectors wherever you want. A bit like Iain M. Banks’ effector beams. No real constraints because it is entirely made-up physics; it is not clear it respects any particular conservation laws.
Other physical effects
Here are some of the nontrivial effects of changing inertia of matter (I will leave out gravity manipulation, which has more obvious effects).
Electromagnetism: beware the blue carrot
It is worth noting that this kind of field does not affect light and other electromagnetic fields: photons are massless. The overall effect is that electromagnetic fields will tend to push charged objects inside the field around more or less strongly. A low-inertia electron subjected to a given electric field will accelerate more, a high-inertia electron less. This in turn changes the natural frequencies of many systems: a radio antenna will change tuning depending on the inertia change. A receiver inside the inertics field will experience outside signals as being stronger (if the field decreases inertia) or weaker (if it increases it).
Reducing inertia also increases the Bohr magneton, e\hbar/2 \mu m_e. This means that paramagnetic materials become more strongly affected by magnetic fields, and that ferromagnets are boosted. Conversely, higher inertia reduces magnetic effects.
Changing inertia would likely change atomic spectra (see below) and hence optical properties of many compounds. Many pigments gain their colour from absorption due to conjugated systems (think of carotene or heme) that act as antennas: inertia manipulation will change the absorbed frequencies. Carotene with increased inertia will presumably shift its absorption spectra towards lower frequencies, becoming redder, while lowered inertia causes a green or blue shift. An interesting effect is that the rhodopsin in the eye will also be affected and colour vision will experience the same shift (objects will appear to change colour in regions with a different \mu from the place where the observer is, but not inside their field). Strong enough fields will cause shifts so that absorption and transmission outside the visual range will matter, e.g. infrared or UV becomes visible.
However, the above claim that photons should not be affected by inertia manipulation may not have to hold true. Photons carry momentum, p=\hbar k where k is the wave vector. So we could assume a factor of 1/\sqrt{\mu} or 1/\mu gets in there and the field red/blueshifts photons. This would complicate things a lot, so I will leave analysis to the interested reader. But it would likely make inertics fields visible due to refractive effects.
Chemistry: toxic energy levels, plus a shrink-ray
One area inertics would mess up is chemistry. Chemistry is basically all about the behaviour of the valence electrons of atoms. Their behaviour depends on their distribution between the atomic orbitals, which in turn depends on the Schrödinger equation for the atomic potential. And this equation has a dependency on the mass of the electron and nucleus.
If we look at hydrogen-like atoms, the main effect is that the energy levels become
E_n = - \mu (M Z^2 e^4/8 \epsilon_0^2 h^2 n^2),
where M=m_e m_p/(m_e+m_p) is the reduced mass. In short, the inertial manipulation field scales the energy levels up and down proportionally. One effect is that it becomes much easier to ionise low-inertia materials, and that materials that are normally held together by ionic bonds (say NaCl salt) may spontaneously decay when in low-inertia fields.
The Bohr radius scales as a_0 \propto 1/\mu: low-inertia atoms become larger. This really messes with materials. Placed in a low-inertia field atoms expand, making objects such as metals inflate. In a high inertia-field, electrons keep closer to the nuclei and objects shrink.
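A sketch of these hydrogen scalings using CODATA constants; here \mu multiplies both the electron and proton masses, so the reduced mass also scales linearly:

```python
import math

e = 1.602176634e-19      # elementary charge (C)
h = 6.62607015e-34       # Planck constant (J s)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
m_e = 9.1093837015e-31   # electron mass (kg)
m_p = 1.67262192369e-27  # proton mass (kg)

def hydrogen_scalings(mu, n=1, Z=1):
    """Energy level E_n (eV) and Bohr radius (m) with all masses scaled by mu."""
    M = mu * m_e * m_p / (m_e + m_p)  # reduced mass scales linearly with mu
    E_n = -M * Z**2 * e**4 / (8 * eps0**2 * h**2 * n**2)
    hbar = h / (2 * math.pi)
    a0 = 4 * math.pi * eps0 * hbar**2 / (M * e**2)
    return E_n / e, a0

print(hydrogen_scalings(1.0))  # about (-13.6 eV, 5.3e-11 m)
print(hydrogen_scalings(0.5))  # half the binding energy, twice the radius
```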
As distances change, the effects of electromagnetic forces also change: internal molecular electric forces, van der Waals forces and things like that change in strength, which will no doubt have effects on biology. Not to mention melting points: reducing the inertia will make many materials melt at far lower temperatures due to larger inter-atomic and inter-molecular distances, increasing it can make room-temperature liquids freeze because they are now more closely packed.
This size change also affects the electron-electron interactions, which among other things shield the nucleus and reduce the effective nuclear charge. The changed energy levels do not strongly affect the structure of the lightest atoms, so they will likely form the same kind of chemical bonds and have the same chemistry. However, heavier atoms such as copper, chromium and palladium already have orbital filling orders that deviate slightly from the simple rules because of the quirks of the energy levels. As the field deviates from 1 we should expect lighter and lighter atoms to get alternative filling patterns, and this means they will get different chemistry. Given that copper and chromium are essential for some enzymes, this does not bode well – if copper no longer works in cytochrome oxidase, the respiratory chain will lethally crash.
If we allow permanently inertia-altered particles chemistry can get extremely weird. An inertia-changed electron would orbit in a different way than a normal one, giving the atom it resided in entirely different chemical properties. Each changed electron could have its own individual inertia. Presumably such particles would randomise chemistry where they resided, causing all sorts of odd reactions and compounds not normally seen. The overall effect would likely be pretty toxic, since it would on average tend to catalyze metastable high-energy, low-entropy structures in biochemistry to fall down to lower energy, higher entropy states.
Lowering inertia in many ways looks like heating up things: particles move faster, chemicals diffuse more, and things melt. Given that much of biochemistry is tremendously temperature dependent, this suggests that even slight changes of \mu to 0.99 or 1.01 would be enough to create many of the bad effects of high fever or hypothermia, and a bit more would be directly lethal as proteins denaturate.
Fluids: I need a lie down
Inside a lowered inertia field matter responds more strongly to forces, and this means that fluids flow faster for the same pressure difference. Buoyancy causes stronger convection. For a given velocity, the inertial forces are reduced compared to the viscosity, lowering the Reynolds number and making flows more laminar. Conversely, enhanced inertia fluids are hard to get moving, but at a given speed they will be more turbulent.
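The Reynolds-number claim in numbers (a sketch with water-like values; all parameters are made up):

```python
def reynolds(mu, rho, v, L, eta):
    """Reynolds number with the inertial density scaled by the field factor mu."""
    return mu * rho * v * L / eta

rho, v, L, eta = 1000.0, 0.1, 0.01, 1e-3  # water-ish pipe flow, Re = 1000 at mu = 1
for mu in (0.01, 1.0, 100.0):
    print(mu, reynolds(mu, rho, v, L, eta))
# mu = 0.01 -> Re = 10 (laminar); mu = 100 -> Re = 1e5 (turbulent)
```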
This will really mess up the sense of balance and likely blood flow.
Gravity: equivalent exchange
I have ignored the equivalence of inertial and gravitational mass. One way for me to get away with it is to claim that they are still equivalent, since everything occurs within some local region where my inertics field is acting: all objects get their inertial mass multiplied by \mu and this also changes their gravitational mass. The equivalence principle still holds.
What if there is no equivalence principle? I could make a 1 kg object and a 1 gram object fall at different accelerations. If I had a massless spring between them it would be extended, and I would gain energy. Besides the work done by gravity to bring down the objects (which I could collect and use to put them back where they started) I would now have extra energy – aha, another perpetual motion machine! So we had better stick to the equivalence principle.
Given that boosting inertia makes matter both tend to shrink to denser states and have more gravitational force, an important worldbuilding issue is how far I will let this process go. Using it to help fission or fusion seems fine. Allowing it to squeeze matter into degenerate states or neutronium might be more world-changing. And easy making of black holes is likely incompatible with the survival of civilisation.
[ Still, destroying planets with small black holes is harder than it looks. The traditional “everything gets sucked down into the singularity” scenario is surprisingly slow. If you model it using spherical Bondi accretion you need an Earth-mass black hole to make the sun implode within a year or so, and a 3\cdot 10^{19} kg asteroid mass black hole to implode the Earth. And the extreme luminosity slows things a lot more. A better way may be to use an evaporating black hole to irradiate the solar system instead, or blow up something sending big fragments. ]
Another fun use of inertics is of course to mess up stars directly. This does not work with the energy addition/depletion model, but the velocity change model would allow creating a region of increased inertia where density ramps up: plasma enters the volume and may start descending below the spot. Conversely, reducing inertia may open a channel where it is easier for plasma from the interior to ascend (especially since it would be lighter). Even if one cannot turn this into a black hole or trigger surface fusion, it might enable directed flares as the plasma drags electromagnetic field lines with it.
The probe was invisible on the monitor, but its effects were obvious: titanic volumes of solar plasma were sucked together into a strangely geometric sunspot. Suddenly there was a tiny glint in the middle and a shock-wave: the telemetry screens went blank.
“Seems your doomsday weapon has failed, professor. Mad science clearly has no good concept of proper workmanship.”
“Stay your tongue. This is mad engineering: the energy ran out exactly when I had planned. Just watch.”
Without the probe sucking it together the dense plasma was now wildly expanding. As it expanded it cooled. Beyond a certain point it became too cold to remain plasma: there was a bright flash as the protons and electrons recombined and the vortex became transparent. Suddenly neutral, the matter no longer constrained the tortured magnetic field lines and they snapped together at the speed of light. The monitor crashed.
“I really hope there is no civilization in this solar system sensitive to massive electromagnetic pulses” the professor gloated in the dark.
Model: Preserve kinetic energy
Pros: Nice armour. Fast spacecraft with no energy needs (but weird momentum changes).
Cons: Interplanetary dust is a problem. Inertics cannons inefficient. Toxic effects on biochemistry.

Model: Preserve momentum
Pros: Nice classical forcefield. Fast spacecraft with energy demands. Inertics cannons work. Potential for cool explosions due to overloads.
Cons: Interplanetary dust drains batteries. Extremely weird issues of energy-debts: either breaking thermodynamics or getting altered inertia materials. Toxic effects on biochemistry. Breaks relativity.

Model: Gravity manipulation
Pros: No toxic chemistry effects. Fast spacecraft with energy demands. Inertics cannons work.
Cons: Forcefields wimpy. Gravitic drives are iffy due to momentum conservation (and are WMDs). Gravity is more obviously hard to manipulate than inertia. Tidal edge forces.
In both cases where actual inertia is changed, inertics fields appear pretty lethal. A brief brush with a weak field will likely just be incapacitating, but prolonged exposure is definitely going to kill. And extreme fields are going to do very nasty stuff to most normal materials – making them expand or contract, melt, change chemical structure and whatnot. Hence spacecraft, cannons and other devices using inertics need to be designed to handle these effects. One might imagine placing the crew compartment in a counter-inertics field keeping \mu=1 while the bulk of the spacecraft is surrounded by other fields. A failure of this counter-inertics field does not just instantly turn the crew into tuna paste, but into blue toxic tuna paste.
Gravity manipulation is cleaner, but this is not necessarily a plus from the cool fiction perspective: sometimes bad side effects are exactly what world-building needs. I love the idea of inertics with potential as an anti-personnel or assassination weapon through its biochemical effects, or “forcefields” being super-dense metal with amplified inertia protecting against high-velocity or beam impact.
The Atomic Rockets page makes a big deal out of how reactionless propulsion makes for space-opera-destroying weapons of mass destruction (if every tramp freighter can be turned into a relativistic missile, how long is the Imperial Capital going to last?). This is a smaller problem here: being hit by an inertia-reduced freighter hurts less, even when it is very fast (think of being hit by a fast ping-pong ball). Gravity propulsion still enables some nasty relativistic weaponry, and if you spend time adding kinetic energy to your inertia-reduced missile it can become pretty nasty. But even if the reactionless aspect does not trivially produce WMDs, inertia manipulation will produce a fair number of other risky possibilities. However, given that even a normal space freighter is a hypervelocity missile, the problem lies more in how to conceptualise a civilisation that regularly handles high-energy objects in the vicinity of centres of civilisation.
Not discussed here are issues of how big the fields can be made. Could we reduce the inertia of an asteroid or planet, sending it careening around? That has some big effects on the setting. Similarly, how small can we make the inertics: do they require a starship to power them, or could we have them in epaulettes? Can they be counteracted by another field?
Inertia-changing devices are really tricky to get to work consistently; most space opera SF using them just conveniently ignores the mess – just like how FTL gives rise to time travel, or how talking droids ought to totally transform the global economy.
But it is fun to think through the awkward aspects, since some of them make the world-building more exciting. Plus, I would rather discover them before my players, so I can make official handwaves of why they don’t matter if they are brought up. |
Thermodynamics and Energy
The World as Emergent from Pure Entropy
Authors: Alexandre Harvey-Tremblay
We propose a meta-logical framework to understand the world by an ensemble of theorems rather than by a set of axioms. We prove that the theorems of the ensemble must have *feasible* proofs and must recover *universality*. The ensemble is axiomatized when it is constructed as a partition function, in which case its axioms are, up to an error rate, the leading bits of Omega (the halting probability of a prefix-free universal Turing machine). The partition function augments the standard construction of Omega with knowledge of the size of the proof of each theorem. With this knowledge, it is able to decide *feasible mathematics*. As a consequence of the axiomatization, the ensemble additionally adopts the mathematical structure of an ensemble of statistical physics; it is from this context that the laws of physics are derived. The Lagrange multipliers of the partition function are the fundamental Planck units, and the background, a thermal space-time, emerges as a consequence of the limits applicable to the conjugate pairs. The background obeys the relations of special and general relativity, dark energy, the arrow of time, the Schrödinger equation and the Dirac equation, and it embeds the holographic principle. In this context, the limits of feasible mathematics are mathematically the same as the laws of physics. The framework is so fundamental that informational equivalents of length, time and mass (assumed as axioms in most physical theories) are here formally derivable. Furthermore, it can prove that no alternative framework can contain fewer bits of axioms than it contains (thus it is necessarily the simplest theory), and that, for all worlds amenable to this framework, the laws of physics will be the same (hence there can be no alternatives). Thus, the framework is a possible candidate for a final theory.
Comments: 75 Pages.
Submission history
[v1] 2017-05-18 09:56:01
[v2] 2017-05-18 12:57:31
[v3] 2017-05-23 14:09:54
[v4] 2017-05-26 07:05:40
[v5] 2017-07-11 20:36:09
[v6] 2017-07-12 07:20:57
[v7] 2017-07-21 13:53:03
[v8] 2017-07-22 05:29:56
[v9] 2017-07-23 13:00:22
[vA] 2017-07-24 06:58:20
[vB] 2017-07-25 12:50:01
[vC] 2017-07-25 19:07:22
[vD] 2017-07-26 07:05:11
[vE] 2017-07-26 08:04:32
[vF] 2017-08-14 10:34:44
[vG] 2017-08-16 20:21:03
[vH] 2017-09-11 12:00:42
[vI] 2017-09-12 09:43:06
[vJ] 2017-09-17 15:07:54
[vK] 2017-09-25 12:31:29
[vL] 2017-09-26 08:01:33
[vM] 2017-10-27 14:06:00
[vN] 2017-10-30 07:48:29
[vO] 2017-10-31 14:32:32
[vP] 2017-12-02 07:48:32
[vQ] 2017-12-04 17:39:53
[vR] 2017-12-06 09:52:30
[vS] 2017-12-07 09:15:01
[vT] 2017-12-20 14:14:14
[vU] 2017-12-21 13:47:09
[vV] 2017-12-22 13:45:38
[vW] 2017-12-27 11:31:06
[vX] 2017-12-28 08:38:31
Sunday, 30 August 2015
Quantum Information Can Be Lost
Stephen Hawking claimed in a lecture at KTH in Stockholm last week (watch the lecture here and check this announcement) that he had solved the "black hole information problem":
• “The information is not stored in the interior of the black hole as one might expect, but in its boundary — the event horizon,” he said. Working with Cambridge Professor Malcolm Perry (who spoke afterward) and Harvard Professor Andrew Strominger, Hawking formulated the idea that information is stored in the form of what are known as supertranslations.
The problem arises because quantum mechanics is viewed as reversible, since the mathematical equations supposedly describing atomic physics are formally time reversible: a solution proceeding forward in time from an initial to a final state can also be viewed as a solution backward in time from the final state to the initial state. The information encoded in the initial state can thus, according to this formal argument, be recovered and so is never lost. On the other hand, a black hole is supposed to swallow and completely destroy anything it reaches, and thus it appears that a black hole violates the postulated time reversibility of quantum mechanics and the non-destruction of information.
Hawking's solution to this apparent paradox, is to claim that after all a black hole does not destroy information completely but "stores it on the boundary of the event horizon". Hawking thus "solves" the paradox by maintaining non-destruction of information and giving up complete black hole destruction of information.
The question Hawking seeks to answer is the same as the fundamental problem of classical physics which triggered the development of modern physics in the late 19th century, with Boltzmann's "proof" of the 2nd law of thermodynamics: Newton's equations describing the underlying particle dynamics are formally reversible, but the 2nd law of thermodynamics states that real physics is not always reversible: information can be irretrievably lost as a system evolves towards thermodynamic equilibrium, and then cannot be recovered. Time has a forward direction and cannot be reversed.
Boltzmann's "proof" was based an argument that things that do happen do that because they are "more probable" than things which do not happen. This deep insight opened the new physics of statistical mechanics from which quantum borrowed its statistical interpretation.
I have presented a different new resolution of the apparent paradox of irreversible macrophysics based on reversible microphysics, by viewing physics as analog computation with finite precision, on both macro- and microscales. A spin-off of this idea is a new resolution of d'Alembert's paradox and a new theory of flight, to be published shortly.
The basic idea here is thus to replace the formal infinite precision of both classical and quantum mechanics, which leads to paradoxes without satisfactory solution, with realistic finite precision which allows the paradoxes to be resolved in a natural way without resort to unphysical statistics. See the listed categories for lots of information about this novel idea.
The result is that reversible infinite precision quantum mechanics is fiction without physical realization, and that irreversible finite precision quantum mechanics can be real physics and in this world of real physics information is irreversibly lost all the time even in the atomic world. Hawking's resolution is not convincing.
Here is the key observation explaining the occurrence of irreversibility in formally reversible systems modeled by formally non-dissipative partial differential equations such as the Euler equations for inviscid macroscopic fluid flow and the Schrödinger equations for atomic physics:
Smooth solutions are strong solutions in the sense of satisfying the equations pointwise with vanishing residual, and as such are non-dissipative and reversible. But smooth solutions may break down into weak turbulent solutions, which are solutions only in a weak, approximate sense with large pointwise residuals; these solutions are dissipative and thus irreversible.
An atom can thus remain in a stable ground state over time corresponding to a smooth reversible non-dissipative solution, while an atom in an excited state may return to the ground state as a non-smooth solution under dissipation of energy in an irreversible process.
2 comments:
1. Just a short thought 'experiment' regarding the sentence in section 6 above, which reads: "Boltzmann's "proof" was based on an argument that things that do happen do so because they are "more probable" than things which do not happen."
The way I read this, although I know it does not say it out loud, is "For it is statistically improbable to occur, thus it does not occur."
Hence, one never wins on the lottery for it is highly improbable one will win, yet sometimes someone wins.
I would love to see a mathematical notation as to how to interpret the sentence in the sixth section above, just for the fun of it.
2. Boltzmann's key argument is that things are likely to evolve from less probable states to more probable states, thus giving time a direction from improbable to probable. But this is an empty tautology, like something being true by definition. It is self-evident that more probable states will tend to occur more frequently than less probable states.
Feynman integrals
Feynman's Path Integrals Theory has a very special status in Mathematical Physics. On one hand, it is hard to find new ideas of Quantum or Statistical Physics, in the broad sense, which cannot be formulated more compactly and elegantly in these terms. On the other, it still makes little or no mathematical sense.
There are, however, mathematical counterparts of some aspects of Feynman's approach. The first one, due to M. Kac, is the Euclidean version of Feynman's original representation of the solution of the Schrödinger equation by a path integral, where the associated heat equation (with potential) is considered instead. The familiar relation between the free heat equation and Wiener measure is the key to this approach.
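For reference, a standard statement of the Feynman-Kac formula alluded to above, for a suitable potential V and Brownian motion B_s started at x:

```latex
% Heat equation with potential:
%   \partial_t u = \tfrac{1}{2}\Delta u - V u, \qquad u(0,x) = f(x).
% Feynman-Kac representation of the solution as an expectation over
% Brownian paths B_s started at x (i.e. with respect to Wiener measure):
\[
  u(t,x) = \mathbb{E}_x\!\left[ \exp\!\left( -\int_0^t V(B_s)\, ds \right) f(B_t) \right].
\]
```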
The second counterpart interprets Feynman's integral as an infinite dimensional oscillatory integral (Itô, Albeverio, Høegh-Krohn, ...). It allows one, in particular, to preserve the intuition of the stationary phase method when Planck's constant tends to 0. However, it is not a probabilistic approach.
At GFMUL we consider and study consequences of these two mathematical counterparts. In addition, we develop a special Euclidean interpretation of Feynman integrals, distinct from Kac's. |
99da10a737137e27 | Neutron Interferometry: Lessons in Experimental Quantum Mechanics
by Helmut Rauch and Samuel A. Werner
448 pp. Clarendon Press, Oxford, 2000
Reviewed in American Journal of Physics by Mark P. Silverman
It is all too easy, when one reads standard textbooks of quantum mechanics, to focus so intently on abstract state vectors in Hilbert space or on mathematical techniques for solving the Schrödinger equation, that one loses track of (or perhaps never encounters) the fascinating experiments on real physical particles for which the principles of quantum mechanics are required. Neutron Interferometry helps motivate the theoretical side of quantum mechanical instruction by disclosing a world of experimental detail, centered on the neutron, that calls for and tests the principles of quantum theory. The book is not a textbook, but I have recently used it, together with my own book on the quantum interference of electrons (both free and bound in atoms), for instructive, thought-provoking examples - some for mathematical analysis, others for qualitative discussion - in a junior-senior level course of quantum mechanics.
The authors, who have, both independently and in collaboration, made pioneering contributions to neutron interferometry, begin with the analogy between neutron optics and light optics, and from there develop seminal concepts relating to coherence, diffraction, and interference. I find this approach congenial to my own way of teaching quantum mechanics, which, in brief, is to begin with the analogy to classical optics rather than with ties to classical mechanics. In this way, students may find that certain aspects of quantum mechanics are not as unintuitive as physics popularizers, or even quantum mechanics teachers, are wont to claim, if by "intuitive" one means the capacity to predict qualitatively the behavior of a system on the basis of past experience. With classical optics (instead of classical mechanics) as past experience, a variety of single-particle quantum phenomena, e.g. those involving step potentials, barriers, and wells, become reasonably intuitive.
Although quantum mechanics takes its name from the discreteness - quantization - of energy, angular momentum, and other dynamical observables, I believe a good case can be made (and I have made it elsewhere) that what distinguishes quantum mechanics most from classical mechanics is superposition and interference. If interference is to occur, then the superposing waves (or states) must exhibit some degree of coherence. Ironically, for all its fundamentality as the concept underlying quantum interference, I have not found many quantum mechanics textbooks in which the term coherence even appears in the index (apart, perhaps, from the topic of coherent oscillator states), let alone in the discussion of interference phenomena. Students all too frequently may be left with an erroneous impression that it is the de Broglie wavelength that sets the size scale for objects or apertures to give rise to interference effects. By contrast, Neutron Interferometry gives a thorough discussion of the important coherence parameters (longitudinal coherence length, transverse coherence length, coherence volume, coherence time, and so forth) that enter into an analysis of quantum interference, as well as experimental procedures for measuring these coherence parameters in the case of neutron beams.
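For orientation, the coherence parameters mentioned here scale, up to convention-dependent factors of order unity, as follows (these order-of-magnitude relations are standard and are not quoted from the book):

$$\xi_{\parallel} \sim \frac{\lambda^{2}}{\Delta\lambda}, \qquad \xi_{\perp} \sim \frac{\lambda}{\Delta\theta},$$

with $\Delta\lambda$ the wavelength spread of the beam and $\Delta\theta$ the angular size of the source; it is these lengths, not the de Broglie wavelength alone, that set the scale over which interference survives.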
For readers looking for satisfyingly detailed descriptions of quantum interference phenomena, Neutron Interferometry is a gold mine of illustrative examples. As in my own book whose title asserts that, contrary to Feynman's oft-quoted remark, there is more to the "mystery" of quantum mechanics than two-slit interference, Rauch and Werner outline the basic theory and experimental features of various inequivalent categories of quantum interference phenomena involving spin superposition, topological phase, gravitational and noninertial effects, nonlinearity of the Schrödinger equation, particle-antiparticle oscillations, quantum statistics, quantum entanglement, and much more. Some of these examples are experiments that have already been done (in fact, many years ago), and others are speculative experiments waiting for appropriate advances in technology. Because book reviews are expected to be reasonably brief, I will comment on only a few of the numerous experiments that have interested me most and which represent quantum interference phenomena conceptually different from the standard example of two-slit interference that one encounters most often in textbooks.
The Aharonov-Bohm (AB) effect is a quantum interference effect that depends on spatial topology and can be manifested only by particles endowed with electric charge. A split electron beam, for example, made to pass in field-free space around (and not through) a region of space within which is a confined magnetic flux, will, upon recombination, exhibit a flux-dependent pattern of fringes. Thus, by a judicious adjustment of the magnetic flux, one can produce an interference minimum in the forward direction, even though the optical path length difference of the two beam components is null. The electrons do not experience a magnetic field locally, and therefore are not acted upon by a classical Lorentz force. As neutral particles, neutrons do not exhibit what is traditionally regarded as the AB effect. However, neutrons have a magnetic moment and give rise to a companion topological phenomenon known as the Aharonov-Casher (AC) effect. In the latter, a split neutron beam is made to pass around a region of space within which is a confined electric charge and, upon recombination, gives rise to a charge-dependent interference pattern. The experimental confirmation of this effect, which may be interpreted as an example of spin-orbit coupling, was performed at the University of Missouri Research Reactor in 1991. Rauch and Werner summarize the theoretical interpretations and experimental features of the AC effect and its variants very well.
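For comparison, the two topological phases can be written side by side in their usual SI textbook forms (standard expressions, not drawn from the review; sign and unit conventions vary):

$$\Delta\phi_{\mathrm{AB}} = \frac{e}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell} = \frac{e\,\Phi}{\hbar}, \qquad \Delta\phi_{\mathrm{AC}} = \frac{1}{\hbar c^{2}}\oint \big( \mathbf{E} \times \boldsymbol{\mu} \big)\cdot d\boldsymbol{\ell},$$

with $\Phi$ the enclosed magnetic flux in the AB case, and $\boldsymbol{\mu}$ the neutron's magnetic moment carried around the line charge producing $\mathbf{E}$ in the AC case.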
All particles, quantum as well as classical, are subject to the attractive force of gravity. In quantum mechanics, however, potential differences in the absence of classical forces can give rise to quantum interference effects (as just illustrated above in the case of a topological phase). In their book, the authors describe the so-called COW experiments (for Colella-Overhauser-Werner) in which a beam of neutrons, coherently split into two components moving parallel but displaced vertically from one another, is recombined to yield an interference pattern that depends on the gravitational potential difference of the two beams. Here is an example where, ideally, the net work done by gravity on the two beams is the same, as is the optical path length difference of the two beams. There is a gravitationally-induced quantum interference in the absence of a net gravitational force. Quite by chance, I was lecturing on the COW experiments to my quantum mechanics class at about the time (2002) when the first experiments reporting the quantization of neutron energy states in a gravitational field were reported in Nature - an experiment that I hope will be included in the next edition of this book.
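The standard expression for the COW phase shift, quoted from the general literature rather than from the book under review, for an interferometer enclosing area $A$, neutron wavelength $\lambda$, tilted by angle $\alpha$ about the incident beam direction, is

$$\Delta\phi_{\mathrm{COW}} = \frac{2\pi\, m^{2}\, g\, A\, \lambda \sin\alpha}{h^{2}},$$

with $m$ the neutron mass and $g$ the local gravitational acceleration; the phase depends on the mass and the potential, not on any net force difference between the two beams.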
The AB, AC, and COW experiments are examples of single-particle self-interference. Among the entries in the chapter on "forthcoming and speculative experiments" is the neutron analogue of the optical Hanbury Brown-Twiss (HBT) experiments that demonstrated the correlated "wave noise" in chaotic light. From a quantum perspective, such correlations are known as photon bunching and represent a type of quantum interference attributable to the bosonic nature of the photon. Neutrons, however, like electrons, are fermions and are therefore governed by Fermi-Dirac statistics. A neutron HBT experiment would show a negative correlation or antibunching effect. In my own book I analyzed a variety of HBT experiments on free electron beams and had come to the conclusion that the degeneracy parameter of the most coherent field-emission electron sources available was marginally large enough for such experiments to be performed. (The degeneracy parameter is a measure of the mean number of electrons per cell of phase space.) The much lower (by orders of magnitude) degeneracy of known neutron sources led me to conclude that a neutron HBT experiment was virtually hopeless. Rauch and Werner point out, however, the very interesting possibility of obtaining correlated neutrons from the deuteron disintegration reactions D(n,p)2n and D(π⁻,γ)2n, a proposition similar to my proposal of obtaining correlated electrons from the disintegration of the exotic ion μ⁺e⁻e⁻ (the muonic analogue of H⁻).
Throughout their book, the authors describe clearly and objectively the successful applications of quantum mechanics to neutron interferometry, eschewing philosophical digressions over such matters as the completeness or interpretation of the quantum mechanical formalism. In the final chapter, however, they give a comprehensive neutral summary of the principal positions that have emerged in answer to the epistemological questions: (a) What is the meaning of the wavefunction? (b) How is the measurement process described? (c) How can a classical world appear out of quantum mechanics? (d) How can non-locality be explained? That such questions remain after more than 75 years of extensive use and meticulous testing of quantum mechanics testifies to how odd the quantum world can be - a world humorously, and not inaptly, mirrored in the Charles Addams cartoon that decorates the cover of the book: the skier who in some mysterious way has left one ski track around each side of a tall pine tree.
|
ef6a376adea05e42 | The Fundamental Principles of the Universe and the Origin of Physical Laws
The Fundamental Principles of the Universe and the Origin of Physical Laws
Grandpierre Attila
Konkoly Observatory
H-1525 Budapest, P. O. Box 67, Hungary
1. Introduction
Exploring the ontological structure of reality is a primary task of philosophy. Unfortunately, philosophy over the last thousand years has remained largely awkward, suffering from fundamental self-inconsistencies, and so the fundamental ontological structure of reality is evaluated differently by different philosophies. Nevertheless, the discrepancies between different philosophies on the basic ontological categories do not seem to be unbridgeable. Moreover, modern science may offer significant assistance through its more systematic approach, especially since we seem to have one science instead of many, which is a significant difference from the case of philosophy. Now since science is based on ontological presuppositions (Bunge, 1967, 291), a unique task may be specified: evaluating the ontological foundations of science. If this consideration leads to the result that the ontology of science is correct, we will have found the ontological structure of reality that we need. If the consideration leads to another result, namely that the scientific ontology needs some improvement, then in making these corrections we may arrive at a better understanding of the ontological structure of reality.
2. The concept of “ultimate reality”
It is advisable to formulate the basic concepts exactly. I regard the ontological structure of reality as built up from some “ultimate realities”. In this work I use the following definition for the ultimate realities:
Definition 1: an existent is regarded as an ultimate reality if it is autonomous and universal. A reality is regarded as autonomous if it is not reducible to other realities. A reality is regarded as universal if it extends to the whole Universe, i.e. if it is possible to show that its existence is not limited in space and time.
2.1 A historical account of the candidates for ultimate realities
What kinds of factors may be regarded as ultimate realities? This question has accompanied the whole development of human thought. The nature of the ultimate realities is related to the structure of the world, to the question of whether the world has one or many substances, layers, and levels, and to the basic categories of the sciences. The basic realities play a key role in every philosophical system and at the foundation of science. Therefore, it is important to present a short overview of the most important existents regarded by some as ultimate realities.
In the Chaldean Magic (Lenormant, 1999, 114) the first realities are the primal principles: “ILU, the First Principle, the universal and mysterious source of all things, which is manifested in the trinity of ANU, the god of Time and the World; HEA, the intelligence, which animated matter; and BEL, the demiurgus and ruler of the organized universe”. In the ancient Hungarian world-system the basic categories were the first principle of the Universe, E’LET (the life-principle), and ILLAT (the principle of plant life), A’LLAT (the principle of animal life) and E’RTELEM (the principle of human life, reason). Later on, the ancient Greeks preserved the more ancient notion of primal principles in the concept of “arche”. Chrysippus, the Stoic (possibly influenced by Scythian and Chaldean teachers) expressed the fundamental realities as: exis (the principle driving existence), physis (the principle driving plant life), psyche (the principle driving animal life), and nous (the principle driving human reason) (Zeller, 1865, 178; Erdmann, 1896, 174). In Chinese universism (Glasenapp, 1975, 141) the sky-earth-man, moral-spiritual-physical, and natural-historical-national categories are the fundamental ones. In the Rig Veda the spirit-life-matter and sky-living beings-earth divisions are made (Glasenapp, 1975). The Egyptian history of Creation (Eliade, 1976, 81) starts with the appearance of the earth (matter), light (energy), life and consciousness. The Indian Sankhya-system regards the universal principles of Spirit and Matter as fundamental (Kunzmann et al., 1991, 19). In Western culture, Thomas Aquinas applied three fundamental categories: those of God, spirit and matter; material reality again shows a threefold structure of animal, plant and mineral kingdoms. Wolff (1730), after Goclenius (1613) and Micraelius (1652), who were the first to use the term ontology, regarded the three main classes of existents as the psychic, the cosmic and the theos; this division was also held by Kant.
Nicolai Hartmann in his ontology (1949/1955, Section III) describes reality as built up from four levels: the cosmos, the organic realm, the realm of the soul and consciousness, and the spiritual-social world. In this world man is a material, organic, soulful and spiritual being existing in the three basic forms of individual, nation and history. Mario Bunge (1980, 45) found that the totality of concrete entities may be grouped into five genera – “we may depict (on Fig. 2.1) the structure of reality as a pyramid: physical things - (bio)chemical systems - psycho(bio)systems - social systems - technical systems”. Medawar (1974) and, following him, Peacocke (1986, 17) divide the world into four levels as studied by physics, chemistry, biology, and ecology/sociology. “By 1993 Peacock had foliated the hierarchy into two dimensions: vertically it consists in four levels of increasing complexity (the physical world, living organisms, the behavior of living organisms, and human culture) while horizontally it depicts systems ordered by part-to-whole hierarchies of structural and/or functional organization (eg., in biology: macromolecules, organelles, cells, organs, individual organisms, populations, ecosystems). Peacocke’s analysis undoubtedly reflects the broad consensus of the scientific community” (Russell, 2000).
A certain confusion is observable in how the structure of reality is evaluated by the different authors. I think that one of the main reasons for this confusion is that the criteria for the basic building elements of reality, the ultimate realities, are not formulated unambiguously. At the same time, one can observe remarkable agreements in the different categorisations, too. Moreover, the basic categorisation of the sciences seems to follow closely the ultimate realities found above. Divisions like mathematics-astronomy-physics-biology-psychology-sociology or philosophy-natural science-social science show close similarities in structuring the world. Evidently, the main fundamental categories of existence are: material (physical) - biological (alive) - social - technical, physical-spiritual-moral, earthly-human-godlike (heavenly), natural-historical-national-individual. Now what counts as ultimate reality should be judged on the basis of systematic and thorough scientific investigation, by my proposal on the basis of our Definition 1. Therefore, to make the first step, we should consider the old and still unsolved question: is biology reducible to physics?
2.2. The relation of the metaphysical presuppositions of science with the reducibility question
Given that this first topic of our essay touches philosophy as well as science, especially the metaphysical presuppositions of science, we first turn to science to see its position on the ontological structure of reality. Bunge (1967, Sect. 5.9) remarks that “philosophy is a part of the scaffolding employed in the construction of the finished scientific buildings…scientific research does presuppose and control certain important philosophical hypotheses”. Now let us briefly review the most outstanding presuppositions of science according to Bunge (1967, 291ff). We touch here on only the first two of them. Firstly, he mentions “realism”, the “philosophical hypotheses that there is anything that exist independently from the cognitive subject”. Realism is based on the notion of factual truth, the hypothesis of the reality of facts, the “outer” nature of the facts, and the separability of the object of research from the inquiring subject. Moreover, the fifth element of “realism” as given by Bunge (1967, 291ff) is that “natural science, in contrast with prescientific views such as animism and anthropomorphism, does not account for nature in terms of typically human attributes, as it should if nature somehow depended on the subject. Thus, we do not account for the behavior of the object in terms of our own expectations or other subjective variables but, on the contrary, base our rational expectations on the objectively ascertainable properties of the object as known to us.” I have to note here that animism is not necessarily anti-scientific. On the contrary, William McDougall, a professor of psychology at Harvard University, wrote a whole book attempting to prove, with the methods and aims of empirical science, that of all the thought systems of mankind it is precisely animism which is the closest to reality, and that the conception of soul is indispensable to science (McDougall, 1920).
Now I think that this fifth element of the requirements of realism by Bunge needs more detailed ontological elaboration. If we accept the scientific ontology based on the classification of sciences as physics-biology-psychology/sociology (since I regard man as a social being at her/his most basic foundation), how should we understand the term “objective”? In the context of “natural” sciences, in contrast with animism and anthropomorphism, the term “objectivity” indicates that we should ignore the ontological levels belonging to human existence. Now if we should also ignore the existence of any animating “psyche” and “spirit” that make organisms animated, i.e. alive, together with consciousness, or nous, regarded as distinguishing ontological characteristics of human beings, then what remains is mere inanimate matter. In this way Bunge’s realism seems to be a special, materialist one, since it is based on a “realism” requiring the ignoring of all other ontological levels. I find this requirement unnecessary, oversimplifying, and scientifically invalid.
Let us have another look at this point. The second outstanding philosophical hypothesis of scientific research by Bunge is pluralism: the multilevel structure of reality. “A second, related presupposition is that the higher levels are rooted in the lower ones, both historically and contemporaneously: that is, the higher levels are not autonomous but depend for their existence on the subsistence of the lower levels, and they have emerged in the course of time from the lower in a number of evolutionary processes. This rooting of the higher in the lower is the objective basis of the possibility of partially explaining the higher in terms of the lower and conversely…the principle of methodological reductionism is not to be confused with ontological reductionism or the denial of levels” (Bunge, 1967, 294). Unfortunately, Bunge did not specify the exact meaning of the terms he applied, like “autonomous” and “ontological”. Anyhow, his stance expresses a non-reductive physicalism. In his concept, reality as a whole has a material, physical nature. Biology, relevant at one level of the material world, has some kind of autonomy, as chemistry does, but this autonomy is mostly of practical rather than principled significance. If materialist “realism” requires “desanimation” and “desanthropomorphism”, life is not based on its own ultimate principle but on physics. Bunge (1980, 217) expresses his view that “one can maintain that the mind is not a thing composed of lower level things – but a collection of functions or activities of certain neural systems that individual neurons presumably do not possess. And so emergentist (or systemic) materialism – unlike eliminative materialism – is seen to be compatible with overall pluralism.” Now the question is what Bunge means by the term “ontological level” or ‘genera’. In the approach outlined here, Bunge’s ontological pyramid, although it consists of five sub-levels, represents only one ontological level, and is pluralistic only within this one ontological level of emergentist materialism allowing materialistic sub-levels. Therefore, regarding ultimate realities as the basic constituents of the ontological structure of reality as a whole, we should evaluate the methodological reductionism of emergentist materialism as an ontological reductionism. Materialism is considered in this approach, as usual, as being a monism, not a pluralism. Therefore, Bunge’s claim of ontological pluralism within the framework of materialist monism seems to us controversial. Nevertheless, we will here work out a more detailed picture of physics and ontology. Before doing so, we take note of some of the main proponents of present-day science in ontological affairs.
The ontological reduction of biology to physics is one of the oldest and most significant problems of science and philosophy. Today, many eminent scientists have expressed opinions favouring physicalism. For example, Feynman stated that “today we cannot see if Schrödinger equation contains frogs, composers, and morality, or not” (Feynman, 1964, 12). The views prevailing at today’s universities and in handbooks on physics, as well as such influential best-sellers as Stephen Hawking’s A Brief History of Time, express the brute and dangerously antihuman materialistic view that human beings are mere material objects whose behaviour will be exactly calculated by the soon-to-come Grand Unified Theory of physics. “Yet if there really is a complete unified theory, it would also presumably determine our actions” (Hawking, 1996, 13). Penrose (1989, 578) formulates his view in the following way: “…as I am suggesting, the phenomenon of consciousness depends upon this putative CQG (Correct Quantum Gravity theory)”. Moreover, physicalism seems to be dominant not only among physicists. As Bertalanffy (1969, 64) remarked, Williams (1966) articulated the common belief among biologists, expressed both in current teaching and in research, as “the theory of selection is based on the assumption that the laws of physical science plus natural selection can furnish a complete explanation for any biological phenomenon, and that these principles can explain adaptation in general and in abstract and any particular example of an adaptation”. Jacques Monod declares: “Anything can be reduced to simple, obvious, mechanical interactions. The cell is a machine; the animal is a machine; man is a machine” (Monod 1970/1974, ix). As Daniel Stoljar (2001) formulated: “Physicalism is the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical…Of course, physicalists don’t deny that the world might contain many items that at first glance don’t seem physical -- items of a biological, or psychological, or moral, or social nature. But they insist nevertheless that at the end of the day such items are wholly physical.” But if living organisms, psychic phenomena, and moral and social processes have a wholly physical nature, this would mean that the laws of physics govern life, psychic phenomena, moral decisions and social activity. Harvard Genetics Professor Richard Lewontin, a Marxist, expressed his attitude as follows (Johnson, 1997): “We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counterintuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute…” In How the Mind Works, MIT professor Steven Pinker argues that the fundamental premise of ethics has been disproved by science. "Ethical theory," he writes, "requires idealisations like free, sentient, rational, equivalent agents whose behaviour is uncaused." 
Yet, "the world, as seen by science, does not really have uncaused events." In other words, “moral reasoning assumes the existence of things that science tells us are unreal” (Pearcey, 2000). These formulations demonstrate that in practice scientific materialism is a monist view ignoring completely the autonomy of any other ontological levels.
2.3. Anti-reductionist arguments
On the contrary, many people have argued in favour of the autonomy of biology, though most of them I did not find convincing. But there are some (Bauer, 1935; Polanyi, 1967) who recognised that the decisive point is the regulative action of biology on the boundary conditions of physics. Augros and Stanciu (1987, 31) take the stance: “All the properties of the organisms we have discussed so far – its astonishing unity, its capacity to build its own parts, its increasing differentiation through time, its power of self-repair and self-regeneration, its ability to transform other materials into itself, and its incessant activity – all these not only distinguish the living being from the machine but also demonstrate its uniqueness amid the whole of nature…The organism is sui generis, is a class by itself”. “For these features we have no analogue in inorganic systems…mechanistic modes of explanation are in principle unsuitable for dealing with certain features of the organic; and it is just these features which make up the essential peculiarities of the organisms” (Bertalanffy, 1962, 108). Pattee (1961) noted that “We find in none of the present theories of replication and protein synthesis any interpretation of the origin of the genetic text which is being replicated, translated and expressed in functional proteins, nor do they lead to any understanding of the relation between particular linear sequences or distributions of subunits in nucleic acid and proteins, and the specific structural and functional properties which are assumed to result entirely from these linear sequences”. Bertalanffy (1969, 68-69) remarks that “According to Pattee (1961), the order of biological macromolecules is not adequately explained as an accumulation of genetic restrictions via selection, but replication presupposes well-ordered rather than random sequences. Thus there are principles of “self-organisation” at various levels which require no genetic control. Immanent laws run through the gamut of biological organizations.”
For our understanding of the question of the reducibility of biology to physics, I found one of the most informative approaches to be that worked out by Polanyi (1967). He called attention to the fact that “machines seem obviously irreducible…They do not come into being by physical-chemical equilibration, but are shaped by man. They are shaped and designed for a specific purpose…Only the principles underlying the operations of the watch in telling the time could specify your invention of the watch effectively, and these cannot be expressed in terms of physical-chemical variables…Nothing is said about the content of a book by its physical-chemical topography. All objects conveying information are irreducible to the terms of physics and chemistry…The laws of inanimate nature operate in a machine under the control of operational principles that constitute (or determine) its boundaries. Such a system is clearly under a dual control…Any chemical or physical study of living things that is irrelevant to the working of the organism is no part of biology, just as the chemical and physical studies of a machine must bear on the way the machine works, if it is to serve engineering…Biological principles are seen then to control the boundary conditions within which the forces of physics and chemistry carry on the business of life. This dual action of a system is said to work by the principle of boundary control...such shaping of boundaries may be said to go beyond a mere “fixing of boundaries” and establishes a “controlling principle”…it puts the system under the control of a non-physical-chemical principle by a profoundly informative intervention…The question is whether or not the logical range of random mutations includes the formation of novel principles not definable in terms of physics and chemistry. It seems very unlikely that it does include it”. It is clear that the dual nature of machines and organisms means that the biological principle governs the behaviour of the organism and thereby the principles of physics and chemistry. Therefore, biological principles necessarily represent a higher ontological level than physics does, and they are not reducible to physics.
2.4. The ultimate principle of physics
For a more complete comprehension of the reducibility question, a further step towards clarifying the concept of ultimate reality becomes necessary. I think it is easiest to recognise the ultimately pluralistic nature of reality at the ultimate level. It seems that discussions of the emergentist view and its ontological character may last indefinitely if one does not undertake an analysis at the ultimate level. Actually, it is possible to grasp the most essential, and indeed ultimate, universal and autonomous element of materialism with the help of physics. The most general statement on which physics is based is the recognition that in the physical approach “all processes tend towards physical equilibrium”. All the equations of motion express the fact that in reality physical systems are driven towards physical equilibrium. Closed physical systems move towards physical equilibrium in the most efficient way, and reach it as soon as possible. The stone in free fall moves towards its physical equilibrium without any deviation. When the fall of the stone is not completely free, since aerodynamic drag and winds are in action, the stone will follow a path by which it will reach physical equilibrium as soon as possible within its actual conditions. This recognition is expressed in the ultimate principle of physics, the action principle or the principle of Least Action (Landau, Lifschitz 1959, 12). “A minimal requirement for respectability of a physical theory seems to be that it admit a variational principle” (Edelen, 1971, 17). The ultimate variational principle of physics, the action principle, stands at the apex of physics and summarises the laws of motion in an elegant form. Therefore, the action principle may be regarded as the ultimate basis of physics. Although not every part of physics is covered by the action principle, its most significant parts are. Moreover, the remaining cases do not challenge the general tendency that physical processes always tend towards physical equilibrium. It seems proper to refer to the general tendency of physical systems to be driven towards physical equilibrium through the context of the action principle. We may regard the action principle as the ultimate principle of matter (and physics). We may use this recognition to formulate an exact notion of matter:
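For concreteness, the principle of Least Action invoked here can be written in its standard Landau-Lifschitz textbook form (a reminder of the cited formulation, not an addition to the argument):

$$S[q] = \int_{t_1}^{t_2} L(q,\dot{q},t)\,dt, \qquad \delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$

so that the physically realised trajectory is the one that makes the action $S$ stationary, and the Euler-Lagrange equations on the right are exactly the equations of motion mentioned above.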
Definition 2: Material behaviour is shown only when a process follows the laws of physics and only the laws of physics. Material behaviour is ultimately determined by the action principle of physics.
3. The solution of the question: is biology reducible to physics?
3.1. The life principle of Ervin Bauer and the question of reducibility
By my evaluation, the most thorough, systematic, and insightful foundational work in theoretical biology, which is at the same time explicitly articulated in mathematical formulations, is that of Ervin Bauer (1920, 1935/1967). It is hard to evaluate the real significance of his work, and its marginal influence on present-day science seems to be rooted largely in historical circumstances and in the ignorance of dominant materialism. Ervin Bauer was born (1890) and educated in Hungary. He worked during the most productive period of his life (1925-1937) in the Soviet Union, in Moscow and Leningrad. He was arrested and imprisoned in 1937 and died a victim of Stalin’s massacres in 1942 (Tokin, 1963/1965, 11-26).
In his main work “Theoretical Biology” (1935/1967) he formulated the key requirements of living systems. The first requirement is that “the living system is able to change in a constant environment; it has potential energies available for work”. His second requirement states that a living system acts against the physical and chemical laws and modifies its inner conditions. His third, all-inclusive requirement of living systems states that “The work done by the living system, within any environmental conditions, acts against the realisation of that equilibrium which would set in on the basis of the initial conditions of the system in the given environment by the physical and chemical laws” (Bauer, 1967, 44). This third requirement does not contradict the laws of physics, since the living system has some internal equipment, the use of which may modify the final state reached from the same initial state in the same environment. “The fundamental and general law of living systems is the work done against equilibrium, a work done on the constituents of the system itself” (ibid., 48).
Definition 3. Bauer formulates the universal law of biology in the following form: “The living and only the living systems are never in equilibrium and, supported by their free energy reservoir, they continuously invest work against the realisation of the equilibrium which should occur within the given outer conditions on the basis of the physical and chemical laws” (ibid., 51).
“One of the most spectacular and substantial differences between machines and living systems is that in the case of machines the source of the work is not related to any significant structural changes. The systemic forces of machines do work only if the constituents of the machine are set into motion by energy sources which are outer to these constituents. The inner states of the constituents of a machine remain practically constant. The task of the constituents of a machine is to convert some kind of energy into work. In contrast, in living systems the energy of the internal build-up, of the structure of the living matter, is transformed into work. The energy of the food is not transformed into work, but into the maintenance and renewal of their internal structure and inner states. Therefore, living systems are not power machines” (ibid., 64). The fundamental principle of biology acts against the changes which would set in in the system on the basis of the Le Chatelier-Braun principle (ibid., 59). The Bauer-principle recognises the problem of the forces acting at the internal boundary surfaces as the central problem of biology. “Modern physiology attributes all the potential differences to the characteristics of phase boundaries or the membranes, i.e. to the conditions prevailing at the internal boundary surfaces” (ibid., 85). The potential differences and the biological modification of the internal boundary conditions are in close relation with the molecular structure of the living matter. In the living state the living molecules show a characteristic elongation, a deformation which is related to electric polarisation and magnetism. The primary significance of bioelectromagnetism in the biological organisation is recognised as well: “if, due to the higher potential of the living matter, assimilation overcomes dissimilation, as it does in the embryonic tissues and growing tips, the lattice structure is more deformed, shifted from the equilibrium, and therefore such a locus obtains a positive charge. If some stimulus disturbs the processes of assimilation, and therefore also the maintenance of the non-equilibrium structure, so that the structural energy decreases, the structure comes closer to the equilibrium and in such a locus a negative wave will develop. Now if the tissue dies away, an equilibrium lattice structure will develop, and this place will have a negative charge in comparison to the living parts of the tissue” (ibid., 87).
Now Definitions 2 and 3 are very useful when evaluating whether the level of biology represents an autonomous ontological level irreducible to the physical principle. If new traits emerge in the development or complexification of a system, these emergent characteristics may still belong to the realm of physics. Emergent materialism is a monist ontology based on the belief that physical principles may trigger processes that determine the development of emergent processes, including the living processes, too. With the use of Definitions 1, 2 and 3, I show here that the concept of emergent materialism in the biological context is based on a false belief. Material behaviour (Definition 2) tends towards physical equilibrium. Biological behaviour is governed by the life-principle (Definition 3), which acts directly against material behaviour. It can do this only by a proper modification of the boundary conditions of the physical laws. The biological modification of the (internal) boundary conditions of (living) organisms is beyond the realm of physics. Biological activity acts on degrees of freedom that are not active in material behaviour. Therefore, we have found a gap between the realms of physics and biology. If the biological principle is active, because the conditions of its activity (a certain amount of complexity, suitable material structures, energies etc.) are present, it realises a thorough and systematic modification of the internal boundary conditions of living organisms. In comparison, in an abstracted organism in which the biological principle is not active, the same internal boundary conditions would not be modified, and so the organism would fall towards physical equilibrium. In principle, it would be possible to fill the gap with processes in which the biological modification is not realised at a rate sufficient to govern the physical processes. In practice, such intermediate processes are strongly localised in space and time, and the ontological gap is maintained by the continuous and separate actions of the physical and biological principles. This formulation offers us an unprecedented insight into the ultimate constituents of reality. Using the newly found formulation of the ultimate principle of matter, our Definition 1 may be formulated in a more exact manner:
Definition 1’: any existent is regarded as an “ultimate reality”, if it is based on a universal and ontologically irreducible ultimate principle.
Now if biology is based on an ultimate principle different from and independent of the physical principle, this means that biology is not reducible to physics. If the principle of life did not exist as a separate and independent principle from physics, then accidentally starting biological processes would, after a short period, quickly decline towards the state of equilibrium, towards physical "equilibrium death" (here we generalise the concept of "heat death" to include not only thermodynamic equilibrium). But as long as biological laws are irreducible to physical ones, the tendency towards physical equilibrium due to the balancing tendencies of the different physico-chemical gradients cannot prevail, for they are overruled by the impulses arising from the principle of life. The main point is that the biological impulses have a nature which elicits, maintains, organises and coheres processes which would otherwise arise only stochastically, transiently, unorganisedly and incoherently when physical principles are exclusive.
The essential novelty of the biological phenomenon therefore consists in following a different principle, which is able to govern the biological phenomena even while the physical principles keep their universal validity. As long as a process leads to a result that is merely highly improbable by the laws of physics, it may still be a physical process. But when many such extremely improbable random processes are elicited, and these extremely improbable events are co-ordinated in such a way that together they follow a different ultimate principle which makes them a stable, long-lived, lawful process, then we meet with a substantial novelty which cannot be reduced to a lower level principle.
An analogy may serve to shed light on the way biology acts when compared to physics. It is like Aikido: while preserving the will of the attacker and modifying it using only the least possible energy, we get a result that is directly the opposite of the will of the attacking opponent. It is clear that the ever-conspicuous difference between living beings and seemingly inanimate entities lies in the ability of the former to be spontaneously active, to alter their inner physical conditions according to a higher organising principle in such a way that the physical laws will launch processes in them with a direction opposite to the "death direction" of the equilibrium which is valid for physical systems. This is the Aikido principle of life. A fighter practising the art of Aikido does not strive to defend himself by raw physical force; instead he uses his skill and intelligence to add a small power impulse, from the right position, to the impetus of his opponent’s attack, thus making the impetus of the attacker miss its mark. Instead of using his strength to try to stop a hand coming at him, he makes its motion faster by applying some little technique: he pulls on it. Thus, applying little force, he is able to suddenly upset the balance of the attack, to change it, and with this to create a situation advantageous for him.
The Aikido principle of life is similar to the art of yachting. There, too, great changes can be achieved by investing small forces. As the yachtsman, standing on board the little ship, makes a minute move to shift his weight from one foot to the other, the ship sensitively changes its course. Shifting one’s weight requires little energy, yet its effect is amplified by the shift occurring in the balance of the hull. Control is not exerted on the direct surface physical level but on the level of balance; by altering balance in a favourable direction, the effect of very small forces prevails against much larger forces. However, being able to alter balance in a favourable direction presupposes a profound (explicit or implicit) knowledge of the contributing factors, the attitude and ability to rise above direct physical relations, as well as the ability to independently bring about the desired change. If life is capable of maintaining another "equilibrium of life", by a process whose direction is contrary to the one pointing towards physical equilibrium, then the precondition of life is the ability to survey, to analyse, and to spontaneously, independently and appropriately control all the relevant physical and biological states. Thus, indeed, life cannot be traced back to the general effect of the "death magnet" of physical equilibrium and mere blind chance, which are the organisational factors available to physics. The principle of life has to be acknowledged as an ultimate principle which is at least as important as the basic physical principle, and which involves just the same extent of “objectivity” as the physical principle. If it is a basic feature of life that it is capable of displaying Aikido-effects, then life has to be essentially different from the inanimateness of physics, just as the principle of the behaviour of the self-defending Aikido disciple is different from the attacker’s. Thus in the relationship of the laws of life and those of physics, two different parties are engaged in combat, and the domains of phenomena of two essentially different basic principles are connected. Practising the art of Aikido is possible only when someone recognises and learns the principle and practice of Aikido. Now regarding the origin of the principle of Aikido, it results from the study of the art of fighting. Regarding the origin of the principle of biology, it cannot result from the physical laws by a physical principle, since the ultimate principle of physics acts in just the contrary direction to the life principle. Therefore, the life principle shows up as an independent ultimate principle above the realm of physics.
3.2. The governance of the organism and the reducibility of biology to physics
It is well known that the human body consists of ~10^15 cells. Now in each cell regulative processes occur at a rate of around 10^5 per second. Physical laws are active in our body and influence the activity of our cells, acting to produce energy dissipation and an increase of disorder. Most of the chemical regulative reactions have a vital significance. Therefore, they should necessarily and inevitably occur in a highly coherent way in order to fulfil the vital needs of the organism (and to realise conscious decisions). The regulation of the physico-chemical processes cannot occur on the basis of physics and chemistry, since it is just the physico-chemical processes that have to be subordinated to a higher regulative principle in order to reach macroscopic coherency. Moreover, it would be impossible to realise such a detailed regulation of reactions of practically cosmic multitude on a physical basis. For a physical regulative factor, all the 10^20 reactions per second of the cells would have to be observable simultaneously. Although the body may be transparent to electromagnetic (EM) fields, the behaviour of the global organismic EM fields seems to be fixed as belonging to the exclusive realm of physics. How is it possible that living organisms can act reasonably, and a thirsty horse can find the river without any confusion of the inner chemical reactions of its organism?
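A quick arithmetic check of the multitude quoted above, using only the figures in the text:

$$10^{15}\ \text{cells} \times 10^{5}\ \frac{\text{regulative reactions}}{\text{cell}\cdot\text{s}} = 10^{20}\ \frac{\text{reactions}}{\text{s}}.$$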
Let us introduce a simplifying picture and illustrate our stance with it. The organism is represented by a house, the cells by its rooms. Now the action of the life principle is on the doors and windows. The life principle is free to act on the doors and windows since an energy reservoir in the cellar is connected to all the windows and doors. Now the molecular events of the air happening in the house are substantially determined by the movements of the doors and windows. We have become accustomed to the material terms and perceive mostly the presence of walls. Therefore, it may seem mysterious and incredible that the flow of air is determined not by the walls but by some additional, subtle factor regulating the workings of doors and windows. Nevertheless, we cannot, by any kind of movements of any kind of outer walls, reach the state in which there is always fresh air within the house. This means that life (fresh air) and biological action (regulation of the position of doors and windows) are not reducible to the positions of walls (physics). Actually, biological regulation is enormously more economical than any physical one.
Now how can we understand the nature of these “doors” and “windows” and the factor regulating their positions? I think their nature is related to the high complexity of living matter and to the ultimate principle of biology. The high complexity is needed for the appearance of unused degrees of freedom. Actually, every elementary particle or atom has (at least) 3 degrees of freedom, related to the 3 possible directions of its spatial translational motion. When two-atom molecules are formed, additional degrees of freedom are introduced, related to their rotational asymmetries. Two rotational degrees of freedom appear, related to the possible orientations of the rotational axis. As more and more complex compounds appear, new, additional degrees of freedom show up. Already the increase of the number of constituents in itself increases the degrees of freedom. For example, the Sun has a practically infinite number of degrees of freedom, since it consists of more than 10^56 particles. Now the bullet in a pistol does not have any degree of freedom once the shot has occurred. This situation is due to the constraints of the wall of the pistol. Mechanical constraints in the living cell do not extend to all degrees of freedom, and therefore with the increase of the number and/or complexity of the compounds, the number of unused, free degrees of freedom grows.
Unfortunately, the fact that with growing complexity the unused degrees of freedom grow enormously seems not to be widely recognised. For example, in a critique of Polanyi’s non-physico-chemical organising principle, Hull (1974, 139) objects: “The only candidate for Polanyi’s ordering principle that originated life is the bounding properties of the chemical elements”. This objection is completely invalid for molecules of significant complexity. As a matter of fact, with a growing degree of complexity the bounding properties of the chemical elements leave more and more degrees of freedom unconstrained, especially when the molecules may be bound in all three dimensions and many spatial variations become possible. Cyrus Levinthal pointed out at the end of the nineteen-sixties that even for a small protein molecule, consisting of only 100 amino acids, each having 4 different possible positions in the protein molecule, the number of possible configurations is around 10^60. Assuming that this protein molecule wants to reach a different state, e.g. one of the states with minimum energy, it would need 10^30 times the lifetime of the Universe if a physical mechanism played the role of “active information” oscillating with a frequency of 10^13 Hz. But it has been observed that protein molecules normally find other configurations within hours, sometimes within a millisecond. This contradiction is known as the Levinthal paradox. There is no known solution for this paradox (Callender et al., 1994). The proposal this work makes is that such unconstrained degrees of freedom do not need a physical mechanism to organise them, since they may be coupled directly to the universal principle of life. Therefore, the life principle supplies the regulating agency in the form of “biological constraints”, and the free opening-closing of doors and windows is related to the “biological degrees of freedom”. In this way, the material hardware of the house is represented by physics, and the infrastructure or software of the house is represented by the life principle. Now since the infrastructure or software is not determined by the hardware, the widely held view that biology should be reducible to physics should be revised.
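A back-of-the-envelope reconstruction of the Levinthal estimate quoted above: the 4 states per residue and the 10^13 Hz sampling rate are taken from the text, while the age of the Universe ($\approx 4\times10^{17}$ s) is a standard value inserted here for the check:

$$4^{100} \approx 1.6\times10^{60}\ \text{configurations}, \qquad \frac{1.6\times10^{60}}{10^{13}\ \mathrm{s}^{-1}} \approx 1.6\times10^{47}\ \mathrm{s},$$

$$\frac{1.6\times10^{47}\ \mathrm{s}}{4\times10^{17}\ \mathrm{s}} \approx 4\times10^{29} \sim 10^{30}\ \text{lifetimes of the Universe},$$

in agreement with the figure cited in the text.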
3.3. A Possibly Useful Cosmic Criterion of Life and the Sun as a Living Being
Bunge (1985, 4) remarked that: “Students of life become interested in a definition of the concept of life during their freshman year and at the end of their career. In between they are discouraged from trying to elucidate that concept and, in general, from getting involved in philosophical questions. They are encouraged instead to “get on with their business”, which supposedly is anything but trying to understand life…Here we need to recall only some of the properties deemed jointly necessary and sufficient for a thing to be alive. They are metabolism, multiplication, heredity, and variability.” In searching for a general criterion of life that is not fixed to terrestrial life forms, I find the above life-criteria overspecialised. All the above-indicated properties of life are symptoms of life rather than expressions of its basic and necessary condition. I regard sensitivity as the real fundamental specification of life. Without sensitivity, life would not be worth living. Sensitivity is based on our connections with our internal and outer sources of information. Without access to any source of information, life is not possible. Sensitivity also includes a specific relation to the incoming outer and inner information. This specific relation has to be manifested in an unequal attitude to the information, depending on its content. Much of the incoming information may be found irrelevant, and only some information may be important. Life in its essence is an activity related to a selection between “important” and “irrelevant” information, and more than that: it also includes an activity expressing the result of the processed information. It is possible to imagine an existent living in a world of information, whose “life” is generated by its “psychic” activity in which it selects and processes the selected information. Nevertheless, such an existence would be at most a psychic life, without actual, material life. If we want a real, manifested life, we need actions based on processed information. And, as usual, it is this last step of the life-chain that is the easiest to observe, measure and determine for an outer observer. This last step of manifestation needs a material organisational activity embodied in the bodily world, and a certain amplification is a necessary requisite of the life-process. A material organism has a kind of energetics, and its energetics has to be coupled to the processes occurring at the information level. Therefore, the material processes inevitably have to be governed towards the action corresponding to the processed information (“decisions”).
|
93cba2009084238f | From Wikiquote
An Experiment on a Bird in an Air Pump, by Joseph Wright of Derby
Vacuum pump and bell jar chamber
• If Dirac’s idea restores the stability of the spectrum by introducing a stable vacuum where all negative energy states are occupied, the so-called Dirac sea, it also leads directly to the conclusion that a single-particle interpretation of the Dirac equation is not possible.
• Luis Álvarez-Gaumé, Miguel Á. Vázquez-Mozo, An Invitation to Quantum Field Theory (2012) Ch. 1 : Why Do We Need Quantum Field Theory After All?
• John D. Barrow, The Book of Universes: Exploring the Limits of the Cosmos (2011)
• These rays, as generated in the vacuum tube, are not homogeneous, but consist of bundles of different wave-lengths, analogous to what would be differences of colour could we see them as light. Some pass easily through flesh, but are partially arrested by bone, while others pass with almost equal facility through bone and flesh.
• Andrew Hamilton, "Brains that Click", Popular Mechanics 91 (3), March 1949, (pp. 162 et seq.) p. 258.
• With respect to the ultimate constitution of... masses, the same two antagonistic opinions which had existed since the time of Democritus and of Aristotle were still face to face. According to the one, matter was discontinuous and consisted of minute indivisible particles or atoms, separated by a universal vacuum; according to the other, it was continuous, and the finest distinguishable, or imaginable, particles were scattered through the attenuated general substance of the plenum. A rough analogy to the latter case would be afforded by granules of ice diffused through water; to the former, such granules diffused through absolutely empty space.
• The real value of the new atomic hypothesis... did not lie in the two points which Democritus and his followers would have considered essential—namely, the indivisibility of the 'atoms' and the presence of an interatomic vacuum—but in the assumption that, to the extent to which our means of analysis take us, material bodies consist of definite minute masses, each of which, so far as physical and chemical processes of division go, may be regarded as a unit—having a practically permanent individuality. ...that smallest material particle which under any given circumstances acts as a whole.
• In general, the rate of evaporation (m) of a substance in a high vacuum is related to the pressure (p) of the saturated vapor by the equation [equation not reproduced]. Red phosphorus and some other substances probably form exceptions to this rule.
• Irving Langmuir, "The Constitution and Fundamental Properties of Solids and Liquids. Part I. Solids" (September 5, 1916) Journal of the American Chemical Society
• All those who maintain a vacuum are more influenced by imagination than by reason. When I was a young man, I also gave in to the notion of a vacuum and atoms; but reason brought me into the right way. ...The least corpuscle is actually subdivided in infinitum, and contains a world of other creatures, which would be wanting in the universe, if that corpuscle was an atom, that is, a body of one entire piece without subdivision. In like manner, to admit a vacuum in nature, is ascribing to God a very imperfect work... space is only an order of things as time also is, and not at all an absolute being. ...Now, let us fancy a space wholly empty. God could have placed some matter in it, without derogating in any respect from all other things: therefore he hath actually placed some matter in that space: therefore, there is no space wholly empty: therefore all is full. The same argument proves that there is no corpuscle, but what is subdivided. ...there must be no vacuum at all; for the perfection of matter is to that of a vacuum, as something to nothing. And the case is the same with atoms: What reason can any one assign for confining nature in the progression of subdivision? These are fictions merely arbitrary, and unworthy of true philosophy. The reasons alleged for a vacuum, are mere sophisms.
• Dennis Overbye, Lonely Hearts of the Cosmos (1992) Ref: Edward P. Tryon, "Is the Universe a Vacuum Fluctuation?" Nature (Dec 14, 1973); Robert H. Dicke, Jim Peebles, "The Big Bang Cosmology—Enigmas and Conundrums," Nature (1979) Also see False vacuum.
• When the Higgs field froze and symmetry broke, Tye and Guth knew, energy had to be released... Under normal circumstances this energy went into beefing up the masses of particles like the weak force bosons that had been massless before. If the universe supercooled, however, all this energy would remain unreleased... according to Einstein, it was the density of matter and energy in the universe that determined the dynamics of space-time. ...The issue of vacuum energy had been a tricky problem for physics ever since Einstein. According to quantum theory, even the ordinary "true" vacuum should be boiling with energy—infinite energy... due to the so-called vacuum fluctuations that produced the transient dense dance of virtual particles. This energy... could exert a repulsive force on the cosmos just like the infamous cosmological constant... quantum theories had reinvented it in the form of vacuum fluctuations. The orderly measured pace of the expansion of the universe suggested strongly that the cosmological constant was zero, yet quantum theory suggested it was infinite. Not even Hawking claimed to understand the cosmological constant problem... a trapdoor deep at the heart of physics.
• I have endeavoured to attain this end (viz. the production of a vacuum in the cylinder) in another way. As water has the property of elasticity, when converted into steam by heat, and afterwards of being so completely recondensed by cold, that there does not remain the least appearance of this elasticity, I have thought that it would not be difficult to work machines in which, by means of a moderate heat and at a small cost, water might produce that perfect vacuum which has vainly been sought by means of gunpowder.
• Denis Papin, Recueil de diverses Pièces touchant quelques nouvelles Machines (1695) p. 53 as quoted by Dionysius Lardner, The Steam Engine Explained and Illustrated (1840) pp. 45-46.
• His reluctance to pay for elaborate or expensive equipment, perhaps the result of an impoverished childhood, had established the legendary "sealing wax-and-string" tradition of the Cavendish, where everyday materials were ingeniously used to make and patch up experimental equipment, with sealing wax proving particularly useful for vacuum seals.
• Diana Preston, Before the Fallout: From Marie Curie to Hiroshima (2005).
• The first machine of Papin was very similar to the gunpowder-engine... of Huyghens. In place of gunpowder, a small quantity of water is placed at the bottom of the cylinder, A; a fire is built beneath it, "the bottom being made of very thin metal," and the steam formed soon raises the piston, B, to the top where a latch, E, engaging a notch in the piston rod, H, holds it up until it is desired that it shall drop. The fire being removed, the steam condenses, and a vacuum is formed below the piston, and the latch, E, being disengaged, the piston is driven down by the superincumbent atmosphere and raises the weight which has been, meantime, attached to a rope... passing from the piston rod over pulleys... The machine had a cylinder two and a half inches in diameter, and raised 60 pounds once a minute; and Papin calculated that a machine of a little more than two feet diameter of cylinder and of four feet stroke would raise 8,000 pounds four feet per minute—i.e., that it would yield about one horse-power.
[Image: Thomas Savery's 'Miner's Friend' steam-driven water pump; Fig. 2 from Thomas Tredgold, The Steam Engine]
• In June, 1699, Captain Savery exhibited a model of his engine before the Royal Society, and the experiments he made with it succeeded to their satisfaction. ...One of the steam vessels being filled with steam, condensation was produced by projecting cold water, from a small cistern E, against the vessel; and into the partial vacuum made by that means, the water, by the pressure of the atmosphere, was forced up the descending main D, from a depth of about twenty feet...
• [E]xperiments with a simple little machine, designed to mimic certain elementary features of animal behavior... Consisting only of two vacuum tubes, two motors, a photoelectric cell and a touch contact, all enclosed in a tortoise-shaped shell, the model was a species of artificial creature which could explore its surroundings and seek out favorable conditions. It was named Machina speculatrix.
• There is one topic I was not sorry to skip: the relativistic wave equation of Dirac. It seems to me that the way this is usually presented in books on quantum mechanics is profoundly misleading. Dirac thought that his equation was a relativistic generalization of the non-relativistic time-dependent Schrödinger equation that governs the probability amplitude for a point particle in an external electromagnetic field. For some time after, it was considered to be a good thing that Dirac’s approach works only for particles of spin one half, in agreement with the known spin of the electron, and that it entails negative energy states, states that when empty can be identified with the electron’s antiparticle. Today we know that there are particles like the [W bosons] that are every bit as elementary as the electron, and that have distinct antiparticles, and yet have spin one, not spin one half. The right way to combine relativity and quantum mechanics is through the quantum theory of fields, in which the Dirac wave function appears as the matrix element of a quantum field between a one-particle state and the vacuum, and not as a probability amplitude.
Pneumatica (c. 50)
Hero of Alexandria, as quoted in The Pneumatics of Hero of Alexandria (1851) Tr. Bennet Woodcroft, unless otherwise noted.
• Some assert that there is absolutely no vacuum; others that, while no continuous vacuum is exhibited in nature, it is to be found distributed in minute portions through air, water, fire and all other substances: and this latter opinion, which we will presently demonstrate to be true from sensible phenomena, we adopt.
• Vessels which seem to most men empty are not empty, as they suppose, but full of air. Now the air, as those who have treated of physics are agreed, is composed of particles minute and light, and for the most part invisible. If, then, we pour water into an apparently empty vessel, air will leave the vessel proportioned in quantity to the water which enters it. This may be seen from the following experiment. Let the vessel which seems to be empty be inverted, and, being carefully kept upright, pressed down into water; the water will not enter it even though it be entirely immersed: so that it is manifest that the air, being matter, and having itself filled all the space in the vessel, does not allow the water to enter. Now, if we bore the bottom of the vessel, the water will enter through the mouth, but the air will escape through the hole. Again, if, before perforating the bottom, we raise the vessel vertically, and turn it up, we shall find the inner surface of the vessel entirely free from moisture, exactly as it was before immersion.
• The particles of the air are in contact with each other, yet they do not fit closely in every part, but void spaces are left between them, as in the sand on the sea shore: the grains of sand must be imagined to correspond to the particles of air, and the air between the grains of sand to the void spaces between the particles of air. Hence, when any force is applied to it, the air is compressed, and, contrary to its nature, falls into the vacant spaces from the pressure exerted on its particles: but when the force is withdrawn, the air returns again to its former position from the elasticity of its particles, as is the case with horn shavings and sponge, which, when compressed and set free again, return to the same position and exhibit the same bulk.
[Image: Cupping vessel, Ancient Egypt]
• Thus, if a light vessel with a narrow mouth be taken and applied to the lips, and the air be sucked out and discharged, the vessel will be suspended from the lips, the vacuum drawing the flesh towards it that the exhausted space may be filled. It is manifest from this that there was a continuous vacuum in the vessel. The same may be shown by means of the egg-shaped cups used by physicians, which are of glass, and have narrow mouths. When they wish to fill these with liquid, after sucking out the contained air, they place the finger on the vessel's mouth and invert them into the liquid; then, the finger being withdrawn, the water is drawn up into the exhausted space, though the upward motion is against its nature. Very similar is the operation of cupping-glasses, which, when applied to the body, not only do not fall though of considerable weight, but even draw the contiguous matter toward them through the apertures of the body.
• They... who assert that there is absolutely no vacuum may invent many arguments on this subject, and perhaps seem to discourse most plausibly though they offer no tangible proof. If, however, it be shewn by an appeal to sensible phenomena that there is such a thing as a continuous vacuum, but artificially produced; that a vacuum exists also naturally, but scattered in minute portions; and that by compression bodies fill up these scattered vacua, those who bring forward such plausible arguments in this matter will no longer be able to make good their ground.
Robert Grosseteste (title translates as Commentary on Aristotle's 8 Books of Physics)
Pascal's Life, Writings, and Discoveries (1844)
The North British Review Vol. 1 (August, 1844) p. 285, Art. I.—Lettres écrites à un Provincial par Blaise Pascal, précédées d'un Eloge de Pascal, par M. Bordas Demoulin, Discours qui a remporté le Prix décerné par l'Académie Française, le 30 Juin 1842, et suivies d'un Essai sur les Provinciales et le style de Pascal. Par Francois de Neufchateau. Paris, 1843. See also "Life, Genius, and Scientific Discoveries of Pascal", The Provincial Letters of Blaise Pascal (1866) ed. O. W. Wight, pp. 15-63.
Hand suction pump
• When the engineers of Cosmo de Medicis wished to raise water higher than thirty-two feet by means of a sucking-pump, they found it impossible to take it higher than thirty-one feet. Galileo, the Italian sage, was applied to in vain for a solution of the difficulty. It had been the belief of all ages that the water followed the piston, from the horror which nature had of a vacuum, and Galileo improved the dogma by telling the engineers that this horror was not felt, or at least not shown, beyond heights of thirty one feet! At his desire, however, his disciple Toricelli investigated the subject. He found, that when the fluid raised was mercury, the horror of a vacuum did not extend beyond 30 inches, because the mercury would not rise to a greater height; and hence he concluded that a column of water 31 feet high, and one of mercury 30 inches, exerted the same pressure upon the same base, and that the antagonist force which counterbalanced them must in both cases be the same; and having learned from Galileo that the air was a heavy fluid, he concluded, and he published the conclusion in 1645, that the weight of the air was the cause of the rise of water to 31 feet and of mercury to 30 inches. Pascal repeated these experiments in 1646, at Rouen before more than 500 persons, among whom were five or six Jesuits of the College, and he obtained precisely the same results as Toricelli. The explanation of them, however, given by the Italian philosopher, and with which he was unacquainted, did not occur to him; and though he made many new experiments on a large scale with tubes of glass 50 feet long, they did not conduct him to any very satisfactory results. He concluded that the vacuum above the water and the mercury contained no portion of either of these fluids, or any other matter appreciable by the senses; that all bodies have a repugnance to separate from a state of continuity, and admit a vacuum between them; that this repugnance is not greater for a large vacuum than a small one; that its measure is a column of water 31 feet high, and that beyond this limit, a great or a small vacuum is formed above the water with the same facility, provided no foreign obstacle prevents it. These experiments and results were published by our author in 1647, under the title of Nouvelles Experiences touchant le Vuide; but no sooner had they appeared, than they experienced, from the Jesuits, and the followers of Aristotle, the most violent opposition.
Toricellian tube
• To these objections Pascal replied in two letters, addressed to [Stephen] Noel; but though he had no difficulty in overturning the contemptible reasoning of his antagonist, he found it necessary to appeal to new and more direct experiments. The explanation of Toricelli had been communicated to him a short time after the publication of his work; and assuming that the mercury in the Toricellian tube was suspended by the weight or pressure of the air, he drew the conclusion that the mercury would stand at different heights in the tube, if the column of air was more or less high. These differences, however, were too small to be observed under ordinary circumstances; and he therefore conceived the idea of observing the mercury at Clermont, a town in Auvergne... and on the top of the Puy de Dome, a mountain 500 toises above Clermont. The state of his own health did not permit him to undertake a journey... but in a letter dated the 15th November 1647, he requested his brother-in-law, M. Perier, to go... M. Perier began the experiment by pouring into a vessel sixteen pounds of quicksilver which he had rectified... He then took two [straight] glass tubes, four feet long, of the same bore, and hermetically sealed at one end, and open at the other; and making the ordinary experiment of a vacuum with both, he found that the mercury stood in each of them at the same level... This experiment was repeated twice with the same result. One of these... was left under the care of M. Chastin... who undertook to observe and mark any changes... and the party... set out, with the other tube, for the summit of the Puy de Dome... Upon arriving there, they found that the mercury stood at the height of 23 inches, and 2 lines—no less than 3 inches and 1½ lines lower... The party was "struck with admiration and astonishment at this result;" and "so great was their surprise, that they resolved to repeat the experiment under various forms." During their descent of the mountain, they repeated the experiment at Lafond de l'Arbre, an intermediate station... and they found the mercury to stand at the height of 25 inches, a result with which the party was greatly pleased, as indicating the relation between the height of the mercury and the height of the station. Upon reaching the Minimes, they found that the mercury had not changed its height...
• Pascal's Treatise [De la Pesanteur de la Masse de l'Air] on the weight of the whole mass of air forms the basis of the modern science of Pneumatics. In order to prove that the mass of air presses by its weight on all the bodies which it surrounds, and also that it is elastic and compressible, he carried a balloon half filled with air to the top of the Puy de Dome. It gradually inflated itself as it ascended, and when it reached the summit it was quite full, and swollen, as if fresh air had been blown into it; or what is the same thing, it swelled in proportion as the weight of the column of air which pressed upon it was diminished. When again brought down, it became more and more flaccid, and when it reached the bottom, it resumed its original condition. ...[H]e shews that all the phenomena and effects hitherto ascribed to the horror of a vacuum arise from the weight of the mass of air; and after explaining the variable pressure of the atmosphere in different localities, and in its different states, and the rise of water in pumps, he calculates that the whole mass of air round our globe weighs 8,983,889,440,000,000,000 French pounds.
A Short Story of Thomas Newcomen (1904)
by Dwight Goddard
[Image: Newcomen's atmospheric steam engine]
• Newcomen's invention was radically different from that of Savery or any other single person. Papin invented the cylinder and piston as a means for transforming energy into motion. At first he used the explosive force of gunpowder, and later the use of the expansive force of steam, to raise the piston, and then by removing the fire to cause it to fall again. He made no further use of this principle. Savery discovered that the sudden condensation of steam made a vacuum that he utilized to draw up water. His pumps were actually used to drain mines, but were never satisfactory. They had to be placed within the mine to be drained, not over forty feet from the bottom, and then could be used to force up water an additional height of perhaps 100 feet. Beyond this the process must be repeated. ...
Newcomen used Papin's cylinder and piston, and Savery's principle of the condensation of steam to produce a vacuum. But unlike Papin he used the expansive force of steam to do his work, and unlike Savery he used a cylinder and piston actuated by alternate expansion and condensation of steam to transform heat into mechanical motion.
• At first [Newcomen] made a double cylinder, using the space between for condensing water. This was not very satisfactory. The vacuum was secured very slowly and imperfectly. ...One day the engine made two or three motions quickly and powerfully. Newcomen immediately examined the cylinder and found a small hole, through which a small jet from the water that was on top of the piston to make it steam tight, was spurting into the cylinder. He... dispensed with the outer water jacket and injected the water for condensation, through a small pipe in the bottom of the cylinder. It... increased the speed of the engine from eight to fifteen strokes a minute, besides getting the advantage of a good vacuum.
The Book of Nothing (2009)
by John D. Barrow
26acc899e567b174 | Infinite Dimensional Rough Dynamics
Massimiliano Gubinelli
Conference paper, part of the Abel Symposia book series (ABEL, volume 13)
We review recent results about the analysis of controlled or stochastic differential systems via local expansions in the time variable. This point of view has its origin in Lyons’ theory of rough paths and has been vastly generalised in Hairer’s theory of regularity structures. Here our concern is to understand these local expansions when they feature genuinely infinite-dimensional objects, like distributions in the space variable. Our analysis starts by reviewing the simple situation of linear controlled rough equations in finite dimensions; then we introduce unbounded operators into such linear equations by looking at linear rough transport equations. Loss of derivatives in the estimates requires the introduction of new ideas specific to this infinite-dimensional setting. Subsequently we discuss how the analysis can be extended to systems which are not intrinsically rough but for which a local expansion allows one to highlight other phenomena: in our case, regularisation by noise in linear transport. Finally we comment on other applications of these ideas to fully nonlinear conservation laws and other PDEs.
References
1. Bailleul, I., Gubinelli, M.: Unbounded rough drivers. Annales de la faculté des sciences mathématiques de Toulouse 26(4), 795–830 (2017)
2. Catellier, R., Gubinelli, M.: Averaging along irregular curves and regularisation of ODEs. Stoch. Process. Appl. 126(8), 2323–2366 (2016)
3. Chen, K.T.: Iterated path integrals. Bull. Am. Math. Soc. 83(5), 831–879 (1977)
4. Chouk, K., Gubinelli, M.: Nonlinear PDEs with modulated dispersion II: Korteweg–de Vries equation (2014). arXiv:1406.7675
5. Chouk, K., Gubinelli, M.: Rough sheets (2014). arXiv:1406.7748
6. Chouk, K., Gubinelli, M.: Nonlinear PDEs with modulated dispersion I: nonlinear Schrödinger equations. Commun. Partial Differ. Equ. 40(11), 2047–2081 (2015)
7. Davie, A.M.: Differential equations driven by rough paths: an approach via discrete approximation. Appl. Math. Res. Express. AMRX (2), Art. ID abm009 (2007)
8. Davie, A.M.: Uniqueness of solutions of stochastic differential equations. Int. Math. Res. Not. IMRN (24), Art. ID rnm124 (2007)
9. Deya, A., Gubinelli, M., Hofmanová, M., Tindel, S.: A priori estimates for rough PDEs with application to rough conservation laws (2016). arXiv:1604.00437
10. DiPerna, R.J., Lions, P.L.: Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math. 98(3), 511–547 (1989)
11. Flandoli, F., Gubinelli, M., Priola, E.: Well-posedness of the transport equation by stochastic perturbation. Invent. Math. 180(1), 1–53 (2010)
12. Friz, P.K., Hairer, M.: A Course on Rough Paths: With an Introduction to Regularity Structures. Universitext. Springer, Cham (2014)
13. Gubinelli, M.: Controlling rough paths. J. Funct. Anal. 216(1), 86–140 (2004)
14. Gubinelli, M., Tindel, S., Torrecilla, I.: Controlled viscosity solutions of fully nonlinear rough PDEs (2014). arXiv:1403.2832
15. Hairer, M.: A theory of regularity structures. Invent. Math. 198(2), 269–504 (2014)
16. Lyons, T.J.: Differential equations driven by rough signals. Rev. Mat. Iberoamericana 14(2), 215–310 (1998)
Copyright information
© Springer Nature Switzerland AG 2018
Authors and Affiliations
IAM and Hausdorff Center for Mathematics, Bonn, Germany
9ca2c6fa71032fef | Center Manifold
From Scholarpedia
Jack Carr (2006), Scholarpedia, 1(12):1826. doi:10.4249/scholarpedia.1826 revision #126955 [link to/cite this article]
Curator: Jack Carr
Figure 1: The centre manifold \(y=h(x)\) and stable manifold \(W^s\ .\)
One of the main methods of simplifying dynamical systems is to reduce the dimension of the system. Centre manifold theory is a rigorous mathematical technique that makes this reduction possible, at least near equilibria.
An Example
We first look at a simple example. Consider \[\tag{1} x' =ax^3 \,, \qquad y' =-y + y^2 \]
where \(a\) is a constant. Since the equations are uncoupled, we see that the stationary solution \( x=y =0\) of (1) is asymptotically stable if and only if \(a < 0\ .\) Suppose now that \[\tag{2} x' =ax^3 + xy - xy^2\,, \qquad y' =-y + bx^2 +x^2 y \]
Since the equations are coupled we cannot immediately decide if the stationary solution \( x=y =0\) of (2) is asymptotically stable. The key is an abstraction of the idea of uncoupled equations.
A curve \(y =h(x)\ ,\) defined for \(|x|\) small, is said to be an invariant manifold for the system \[\tag{3} x' =f(x,y)\,, \qquad y' = g(x,y) \]
if the solution of (3) with \(x(0) =x_0\ ,\) \(y(0) = h(x_0)\) lies on the curve \(y =h(x)\) as long as \(x(t)\) remains small. For the system (1), \(y=0\) is an invariant manifold. Note that in deciding upon the stability of the stationary solution of (1), the only important equation is \(x' = ax^3\ ,\) that is, we need only study a first order equation on a particular invariant manifold.
Center manifold theory tells us that (2) has an invariant manifold \(y =h(x) = \mbox{O}(x^2)\) for small \(x\ .\) Furthermore, the local behaviour of solutions of the two dimensional system (2) can be determined by studying the scalar equation \[\tag{4} u' = au^3 + uh(u) -uh^2(u) \]
The theory also tells us how to compute approximations to the invariant manifold \(y = h(x)\ .\) For (2) we have that \(h(x) = bx^2 + \mbox{O}(x^4)\) and using this information in (4) gives \[\tag{5} u' =(a+b)u^3 + \mbox{O}(u^5) \]
Hence the stationary solution of (2) is asymptotically stable if \(a+b < 0\) and unstable if \(a+b>0\ .\) If \(a+b = 0\) we need a better approximation to the invariant manifold in order to decide on the stability.
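The reduction can also be checked numerically. The sketch below is not from the article: it integrates the full planar system (2) alongside the leading-order reduced equation (5), for the illustrative (assumed) choices a = -1, b = 0.5, so that a+b < 0, and arbitrary small initial data; both solutions decay to zero and agree to leading order at large times.

```python
# A minimal sanity check of the centre manifold reduction for example (2),
# assuming the illustrative values a = -1, b = 0.5 and small initial data.
import numpy as np
from scipy.integrate import solve_ivp

a, b = -1.0, 0.5

def full_system(t, z):
    # Equation (2): x' = a x^3 + x y - x y^2,  y' = -y + b x^2 + x^2 y
    x, y = z
    return [a * x**3 + x * y - x * y**2, -y + b * x**2 + x**2 * y]

def reduced(t, u):
    # Leading-order reduced equation (5): u' = (a + b) u^3
    return (a + b) * u**3

sol_full = solve_ivp(full_system, (0, 50), [0.1, 0.05], dense_output=True)
sol_red = solve_ivp(reduced, (0, 50), [0.1], dense_output=True)

# y(t) is attracted to the centre manifold y = h(x), so x(t) and the
# reduced solution u(t) should agree to leading order at large times.
for t in (10.0, 30.0, 50.0):
    print(t, sol_full.sol(t)[0], sol_red.sol(t)[0])
```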
Centre Manifolds
Consider the system \[\tag{6} x' =Ax + f(x,y)\,, \qquad y' = By+g(x,y)\,, \qquad (x,y ) \in \R^n \times \R^m \]
where all the eigenvalues of the matrix \(A\) have zero real parts and all the eigenvalues of the matrix \(B\) have negative real parts. The functions \(f\) and \(g\) are sufficiently smooth and \[ f(0,0) =0\,, \qquad Df(0,0) =0\,, \qquad g(0,0) =0\,, \qquad Dg(0,0) = 0 \] where \(Df\) is the Jacobian matrix of \(f\ .\)
If \(f\) and \(g\) are identically zero then (6) has the two obvious invariant manifolds \(x=0\) and \(y=0\ .\) The invariant manifold \(x=0\) is called the stable manifold, and on the stable manifold all solutions decay to zero exponentially fast. The invariant manifold \(y=0\) is called the centre manifold. In general, an invariant manifold \(y = h(x)\) for (6) defined for small \(|x|\) with \(h(0)=0\) and \(Dh(0)=0\) is called a centre manifold. In more physical terms, the dynamics of y follows the dynamics of x and one may say that x enslaves the variable y. This interpretation has been called the slaving principle.
Main Results
The general theory states that there exists a centre manifold \(y =h(x)\) for (6) and that the equation on the centre manifold \[\tag{7} u' =Au + f(u,h(u))\,, \qquad u \in \R^n \]
determines the dynamics of (6) near \((x, y) =(0,0)\ .\) In particular, if the stationary solution \(u=0\) of (7) is stable, we can represent small solutions of (6) as \(t \rightarrow \infty\) by \[ x(t) =u(t) + \mbox{O}(e^{-\gamma t} )\,, \qquad y(t) =h(u(t)) + \mbox{O}(e^{-\gamma t}) \] where \(\gamma > 0\) is a constant.
To use the above theory, we need to have enough information about the centre manifold \(y = h(x)\) in order to determine the local dynamics of (7). If we substitute \(y(t) = h(x(t))\) into the second equation in (6) we obtain \[\tag{8} N(h(x)) =h'(x)\left[ Ax +f(x,h(x)) \right] - Bh(x) -g(x,h(x)) = 0 \]
The general theory tells us that the solution \(h\) of (8) can be approximated by a polynomial in \(x\ ,\) that is, if \(N(\phi(x)) = \mbox{O}(|x|^q)\) as \(x \rightarrow 0\) then \(h(x) =\phi (x) + \mbox{O}(|x|^q)\ .\)
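In practice, equation (8) can be solved order by order with a computer algebra system. The following sketch, an illustration rather than part of the article, does this for example (2) above, assuming the even polynomial ansatz h(x) = c2·x² + c4·x⁴; it recovers h(x) = b·x² + O(x⁴), as stated earlier.

```python
# Solving N(h(x)) = 0 of equation (8) order by order for example (2),
# assuming the polynomial ansatz h(x) = c2*x**2 + c4*x**4.
import sympy as sp

x, a, b, c2, c4 = sp.symbols('x a b c2 c4')
h = c2 * x**2 + c4 * x**4

# For (2): A = 0, B = -1, f(x,y) = a x^3 + x y - x y^2, g(x,y) = b x^2 + x^2 y,
# so N(h) = h'(x) * (a x^3 + x h - x h^2) + h - b x^2 - x^2 h.
N = sp.diff(h, x) * (a * x**3 + x * h - x * h**2) + h - b * x**2 - x**2 * h

# Require the coefficients of x^2 and x^4 to vanish and solve for c2, c4.
poly = sp.Poly(sp.expand(N), x)
eqs = [poly.coeff_monomial(x**2), poly.coeff_monomial(x**4)]
print(sp.solve(eqs, [c2, c4], dict=True))   # expect c2 = b
```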
There is also an \(m\) dimensional invariant manifold \(W^s\) tangential to the y-axis called the stable manifold. On the stable manifold all solutions decay to zero exponentially fast. Figure 1 illustrates the local dynamics for equation (6). The details of the flow on the centre manifold \(y = h(x)\) depend on the higher order terms in equation (7) and we cannot assign directions to the flow without further information.
We have assumed that all of the eigenvalues of the matrix B in (6) have negative real parts. The theory can be extended to the case in which the matrix B has in addition some eigenvalues with positive real parts. In this case the stationary solution \(x=0, y=0\) of (6) is unstable due to the unstable eigenvalues. There exists a centre manifold for (6) which captures the behaviour of small bounded solutions. In particular, this gives a method of studying all sufficiently small equilibria, periodic orbits and heteroclinic orbits.
Local Bifurcations
Centre manifold reduction is central to the development of bifurcation theory. We illustrate this by means of a simple example. Consider \[\tag{9} x' =\epsilon x -x^3 +xy\,, \qquad y' =-y + y^2 -x^2 \]
where \(\epsilon\) is a small scalar parameter. The goal is to study small solutions of (9). The linearised problem about the zero equilibrium has eigenvalues \(-1\) and \(\epsilon\) so the theory does not directly apply. We can write the equations in the equivalent form \[\tag{10} x' =\epsilon x -x^3 +xy\,, \qquad y' = -y + y^2 -x^2 \,, \qquad \epsilon' = 0 \ .\]
When considered as an equation on \(\R^3\) the \(\epsilon x\) term in (10) is nonlinear and the system has an equilibrium at \((x,y,\epsilon) = (0,0,0)\ .\) The linearisation about this equilibrium has eigenvalues \(-1, 0, 0\ ,\) that is, it has two zero eigenvalues and one negative eigenvalue. The theory now applies, so that the extended system (10) has a two dimensional centre manifold \(y =h(x,\epsilon)\) that can be approximated by a polynomial in \(x\) and \(\epsilon\ .\) The equation on the centre manifold is two dimensional and may be written in terms of the scalar variables \(u\) and \(\epsilon\) as \[ u' =\epsilon u - 2u^3 + \mbox{higher order terms} \,, \qquad \epsilon' = 0 \] and the local dynamics of (10) can be deduced from this equation.
Notes and Further Reading
The ideas for centre manifolds in finite dimensions have been around for a long time and have been developed by Carr (1981), Guckenheimer and Holmes (1983), Kelly (1967), Vanderbauwhede (1989) and others. For recent developments in the approximation of centre manifolds see Jolly and Rosa (2005). Pages 1-5 of the book by Li and Wiggins (1997) give an extensive list of the applications of centre manifold theory to infinite dimensional problems. Mielke (1996) has developed centre manifold theory for elliptic partial differential equations and has applied the theory to elasticity and hydrodynamical problems. Applications to phase transitions in biological, chemical and physical systems have been investigated by Haken (2004).
In addition, there is a stochastic extension of the center manifold theorem, introduced by Boxler (1989). In this case, for instance, the center and stable manifolds may fluctuate randomly.
J. Carr (1981), Applications of Centre Manifold Theory, Springer-Verlag.
J. Guckenheimer and P. Holmes (1983), Nonlinear Oscillations, Dynamical systems and Bifurcations of Vector Fields. Springer-Verlag.
M. S. Jolly and R. Rosa (2005), Computation of non-smooth local centre manifolds, IMA Journal of Numerical Analysis , 25, no. 4, 698-725.
A. Kelly (1967), The stable, center-stable, center, center-unstable and unstable manifolds. J. Diff. Eqns, 3, 546-570.
C. Li and S. Wiggins (1997), Invariant manifolds and fibrations for perturbed nonlinear Schrödinger equations. Springer-Verlag.
A. Mielke (1996), Dynamics of nonlinear waves in dissipative systems: reduction, bifurcation and stability. In Pitman Research Notes in Mathematics Series, 352. Longman.
A. Vanderbauwhede (1989). Center Manifolds, Normal Forms and Elementary Bifurcations, In Dynamics Reported, Vol. 2. Wiley.
H. Haken (2004), Synergetics: Introduction and Advanced topics, Springer Berlin
P. Boxler (1989), A stochastic version of center manifold theory, Probability Theory and Related Fields, 83(4), 509-545
See Also
Attractor, Bifurcations, Normal Hyperbolicity, Stability, Synergetics
cafea4114d82ffee | RNA Microchips
A new chapter in ribonucleic acid synthesis
Semiconductor technology and synthesis
First, the chemists adapted the photolithographic fabrication technology of the semiconductor chip industry, commonly used for integrated-circuit manufacture, to the chemical synthesis of RNA. Biological photolithography makes it possible to produce RNA chips with a density of up to one million sequences per square centimeter. Instead of using far-ultraviolet light, which is used in the production of computer chips for silicon etching and doping, the researchers use UV-A light. “Shortwave ultraviolet light has a very destructive effect on RNA, so we are limited to UV-A light in the synthesis,” explains Mark Somoza of the Institute of Inorganic Chemistry.
35f045d5f6f1043e | Q&A: Anatole von Lilienfeld discusses novel approach that taps supercomputers to develop new materials
ALCF staff
An international research team, led by ALCF computational chemist Anatole von Lilienfeld, is developing an algorithm that combines quantum chemistry with machine learning (artificial intelligence) to enable atomistic simulations that predict the properties of new materials with unprecedented speed.
From innovations in medicine to novel materials for next-generation batteries, this approach could greatly accelerate the pace of materials discovery, with high-performance computing tools offering important assistance, or even full-blown alternatives, to time-consuming laboratory experiments.
This cutting-edge work is an example of how the ALCF aims to extend the frontiers of science by solving key problems that require innovative approaches and the largest-scale computing systems. Von Lilienfeld and his colleagues have made significant advances by running simulations on Intrepid, the ALCF's Blue Gene/P supercomputer, through a Director's Discretionary allocation (an in-house discretionary program that awards start-up time to researchers with a demonstrated need for leadership-class computing resources).
Here, von Lilienfeld sheds some light on where the research stands now and where it’s headed in the future.
This tool could result in a paradigm shift for materials design and discovery. Can you explain how this research could change the field?
We apply statistical learning techniques to the problem of inferring solutions of the electronic Schrödinger equation, one of the most important equations in atomistic simulation using quantum chemistry. Conventionally, this differential equation is solved with what we call the self-consistent field cycle, an algorithmic method that iteratively approaches the electronic wavefunction in which the potential energy of the system is minimal. The computational effort for this task is significant, and has historically resulted in a strong presence of quantum chemists at high-performance computing centers around the world. While this deductive approach is perfectly valid, it is frustrating that one has to restart the iterations again every time the chemical composition changes. Our approach attempts to infer the solution for a newly composed material instead, provided that a sufficiently large number of examples have been used for training.
What kinds of applications stand to benefit from this approach?
This approach is promising for any attempts to virtually design novel materials. In particular, when quantum mechanics and a substantial amount of screening is required.
Why are high-performance computers, like the supercomputers you’re using at the ALCF, required for this work?
Machine-learning techniques require very large amounts of data in order to infer solutions for interpolating scenarios with satisfying accuracy. Many thousands, if not more, data points are required. At such scale, it rapidly becomes unfeasible to obtain the data from experiment, and instead we have to rely on supercomputers to simulate such large numbers of materials.
Compared to existing research methods, how much faster could materials be developed using this algorithm?
For small organic molecules, we have observed speedups of several orders of magnitude. As one example, using density functional theory (DFT) to calculate a molecule’s atomization energy can take many minutes with one CPU. Estimating that same property with the machine-learning model would result in a prediction within milliseconds. For a fair comparison, however, the CPU time invested in generating the database for training has to be taken into account.
The electronic Schrödinger equation is one of the biggest obstacles to this computational approach. How were you able to work around the equation?
The Schrödinger equation’s solutions are complex, and it is not trivial to gain qualitative insights that go beyond its solution for the energy of a given system, and that enable us to transfer and reapply the insights to novel compounds. As such, machine learning offers a systematic and rigorous way to properly integrate the results from all the previous solutions into a single, closed mathematical expression.
Mathematically speaking, our machine learning efforts rely on the paradigm of supervised learning (i.e., given an equation and a large set of corresponding dependent and independent variables one can interpolate the dependent variables for new sets of independent variables). One of the challenges consisted of how to represent the independent variables that make up chemical compounds in Schrödinger's equation. We came up with a representation, dubbed the “Coulomb” matrix, to express the nuclear electrostatic repulsion between all the atoms in the system. It turns out that this representation is particularly well suited for the accurate energy interpolation towards novel chemical compounds that were not part of the training data set, so-called “out-of-sample” compounds.
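To make the representation concrete, here is a minimal sketch of a Coulomb-matrix constructor. The off-diagonal entries are the nuclear repulsion terms Z_i Z_j / |R_i - R_j| described above; the 0.5 Z_i^2.4 diagonal is the convention used in the published model, and the rough water geometry (in atomic units) is an assumed example, not data from the study.

```python
# A minimal sketch of the "Coulomb" matrix representation, assuming atomic
# numbers Z and 3D positions R in atomic units; the diagonal 0.5 * Z**2.4
# encodes the isolated atom (convention of Rupp et al.).
import numpy as np

def coulomb_matrix(Z, R):
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    n = len(Z)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4
            else:
                # Nuclear electrostatic repulsion between atoms i and j.
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return M

# Example: a very rough water geometry, O-H bond length ~1.8 bohr (assumed).
Z = [8, 1, 1]
R = [[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [-0.45, 1.75, 0.0]]
print(coulomb_matrix(Z, R))
```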
In 2011, you organized and chaired a program at the Institute for Pure and Applied Mathematics (IPAM) at UCLA to advance research in the chemical compound space by overcoming bottlenecks. What motivated this effort?
From the atomistic simulation perspective, chemical compound space (CCS) is the set of all possible combinations of atom types and positions, each making up a “system” within Schrödinger's equation. A pharmaceutical chemist has a very different idea of CCS, basically consisting of all the possible graphs, vertices, and edges representing atoms and bonds, respectively. The materials scientist, by contrast, will typically define CCS in terms of crystal symmetries of periodically repeating unit cells that contain varying stoichiometries. Yet, one of the more important goals these scientific disciplines and many others pursue consists of finding novel compounds by navigating CCS. The applied mathematics program that takes place every year at the National Science Foundation-funded IPAM has had a remarkable record for bringing together very diverse communities in order to speak to each other as well as to applied mathematicians. IPAM's scientific board recognized the merits in realizing the proposed long-term program on “Navigating Chemical Compound Space for Bio and Materials Design,” and they provided invaluable assistance to make it happen. The program consisted of six one-week workshops with topics including a tutorial, mathematical optimization, physical frameworks, materials design, biodesign, and a culminating workshop at the UCLA conference center. The board was also crucial for establishing the links to many of the mathematicians who were interested in the topic, including my new collaborators from the Technical University in Berlin with whom I developed the machine-learning algorithms.
Can you explain how you demonstrated proof of principle with the algorithm?
First, we had to generate the data. Using supercomputers such as Intrepid, the ALCF’s Blue Gene/P IBM machine, we solved well-known, and sufficiently accurate, approximations to Schrödinger's equation to obtain energies and other properties (dependent variables) for roughly 7,000 organic molecules (independent variables), which were proposed in an enumeration study by the laboratory of Prof. Jean-Louis Reymond at the University of Berne, Switzerland. Subsequently, we cast all the molecules in their Coulomb-matrix form, and we randomly divided the data into two sets: the training set and the testing set. We fitted the training set data with what is called a kernel-ridge-regression method using the Coulomb-matrices as variables. We used the resulting model to predict the properties of all the molecules in the testing set. The comparison of the predicted properties to the properties obtained from solving Schrödinger's equation was very good. We could furthermore show that as one increases the training set, the agreement of the test set results with the numerical reference results improve. Hence, we concluded that this is a practical approach to infer solutions of Schrödinger's equation for novel chemicals.
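The training/testing protocol described here can be sketched in a few lines of kernel ridge regression. The data below are synthetic stand-ins rather than the actual 7,000-molecule set, and the Gaussian kernel and hyperparameter values are illustrative assumptions; in practice the descriptors would be derived from Coulomb matrices and the targets from the quantum-chemistry calculations.

```python
# A toy kernel-ridge-regression version of the train/test protocol above.
# Features and targets are synthetic stand-ins for descriptors and energies.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # descriptors (train + test)
y = np.sin(X).sum(axis=1)                 # surrogate "quantum" property

X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def gaussian_kernel(A, B, sigma):
    # Pairwise squared distances via broadcasting, then Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

sigma, lam = 3.0, 1e-8                    # hyperparameters (assumed values)
K = gaussian_kernel(X_train, X_train, sigma)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

# Predict held-out properties and report the mean absolute error.
y_pred = gaussian_kernel(X_test, X_train, sigma) @ alpha
print("MAE:", np.abs(y_pred - y_test).mean())
```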
Some of your peers have also demonstrated proof of principle with other applications. Can you tell us what else is currently being done in this research space?
The idea of applying supervised learning to quantum mechanics has also been applied to learn energies based on electronic densities, rather than based on the Coulomb-matrix. This has great potential to arrive at much more accurate approximations to Schrödinger's equation than the approximation we used in our study, for example. A more conventional application, with the first papers appearing in the 1990s, consists also of using machine learning to infer energies for new geometries. This can be very helpful to speed up molecular dynamics calculations. These are used when all the atomic degrees of freedom of a given system have to be sampled for many steps so that one can apply the laws of statistical mechanics (i.e., allowing one to compute macroscopic properties of the material, such as thermal stability or phase diagrams). Other recent applications of machine learning to atomistic simulation rely on unsupervised learning of critical geometrical features, such as transition states of chemical reactions, or to detect relevant atomistic motion in biomolecules with many thousands of atoms.
Such interdisciplinary work as ours would not have been possible without my new collaborators from the machine learning group of Prof. Klaus-Robert Müller at the TU Berlin. When we first met, I challenged them on how to learn across different molecules to predict properties. Fundamental properties, such as the energy, can be calculated for any molecule using Schrödinger's equation. Consequently, it boiled down to learning this equation. At IPAM, we have spent many exhausting and late night hours trying to clarify our mutual ideas stemming from different backgrounds, and it was only after learning from many failed attempts that we managed to implement a successful model. Later this year, the first author of our study, Matthias Rupp who was a postdoc with Prof. Müller when we met at IPAM, will join my new group in the chemistry department of the University of Basel where we plan to build on these studies using the ALCF's resources remotely.
What are the next steps? How far away are we from seeing this become a widely used tool?
We are still at a very early stage where we need to better understand all the limitations and potential of this method. Mira, the ALCF's new Blue Gene/Q supercomputer, will certainly play a crucial role in generating the data sets with which we hope to perform the necessary studies. In two to three years, we will be able to give a more qualified verdict if this approach truly fulfills all the expectations. |
3934c1244d3731ab | QUANTUM THEORY (QT). Basic elements
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
This is a continuation from the post WHY QT FOR AAI?, explaining the motivation for looking at quantum theory (QT) in the case of the AAI paradigm. After approaching QT from a philosophy of science perspective (see the post QUANTUM THEORY (QT). BASIC PROPERTIES), giving a ‘bird’s-eye view’ of the relationship between a QT and the presupposed ‘real world’, and digging a bit into the first-person view inside an observer, we are here interested in the formal machinery of QT. For this we follow Griffiths in his chapter 1.
1. The starting point of a quantum theory QT is ‘phenomena‘, which “lack any description in classical physics”, a kind of things “which human beings cannot observe directly”. To measure such phenomena one needs highly sophisticated machines, which poses the problem that the interpretation of possible ‘measurement data’ in terms of a quantum theory depends highly on the understanding of the working of the used measurement apparatus. (cf. p.8)
2. This problem is well known in philosophy of science: (i) one wants to build a new theory T. (ii) For this theory one needs appropriate measurement data MD. (iii) The measurement as such needs a well defined procedure including different kinds of pre-defined objects and artifacts. The description of the procedure including the artifacts (which can be machines) is a theory of its own called measurement theory T*. (iv) Thus one needs a theory T* to enable a new theory T.
3. In the case of QT one has the special situation that QT itself has to be part of the measurement theory T*, i.e. QT ⊂ T*. But, as Griffiths points out, the measurement problem in QT is even deeper: it is not only the conceptual dependency of QT on its measurement theory T*; in the case of QT the measurement apparatus directly interacts with the target objects of QT, because the measurement apparatus is itself part of the atomic and sub-atomic world which is the target. (cf. p.8) This has led to including the measurement as ‘stochastic time development’ explicitly in the QT. (cf. p.8) In his book Griffiths follows the strategy of dealing with the ‘collapse of the wave function’ on the theoretical level, because it does not take place “in the experimental physicist’s laboratory”. (cf. p.9)
4. As a consequence of these considerations Griffiths develops the fundamental principles in the chapters 2-16 without making any reference to measurement.
1. Besides the special problem of measurement in quantum mechanics there is the general problem of measurement for every empirical discipline, which requires a perception of the real world guided by a scientific bias called ‘scientific knowledge’! Without theoretical pre-knowledge no scientific observation is possible. A scientific observation needs a pre-theory T* defining the measurement procedure as well as the pre-defined standard object and, eventually, an ‘appropriate’ measurement device. Furthermore, to be able to talk about some measurement data as ‘data related to an object of QT’ one needs in addition sufficient ‘pre-knowledge’ of such an object, enabling the observer to decide whether the measured data are to be classified as related to the object of QT. The most convenient way to enable this is to already have a proposal for a QT as the ‘knowledge guide’ for how one ‘should look’ at the measured data.
1. In QT the phenomena are, according to Griffiths, understood as ‘particles‘ whose ‘state‘ is given by a ‘complex-valued wave function ψ(x)‘, and the collection of all possible wave functions is assumed to be a ‘complex linear vector space‘ with an ‘inner product’, known as a ‘Hilbert space‘. “Two wave functions φ(x) and ψ(x) represent ‘distinct physical states’ … if and only if they are ‘orthogonal’ in the sense that their ‘inner product is zero’. Otherwise φ(x) and ψ(x) represent incompatible states of the quantum system …” (p.2)
2. “A quantum property … corresponds to a subspace of the quantum Hilbert space or the projector onto this subspace.” (p.2)
3. A sample space of mutually exclusive possibilities is a decomposition of the identity as a sum of mutually commuting projectors. One and only one of these projectors can be a correct description of a quantum system at a given time. (cf. p.3) (A small numerical illustration of points 1–3 follows after this list.)
4. Quantum sample spaces can be mutually incompatible. (cf. p.3)
5. “In … quantum mechanics [a physical variable] is represented by a Hermitian operator.… a real-valued function defined on a particular sample space, or decomposition of the identity … a quantum system can be said to have a value … of a physical variable represented by the operator F if and only if the quantum wave function is in an eigenstate of F … . Two physical variables whose operators do not commute correspond to incompatible sample spaces… “.(cf. p.3)
6. “Both classical and quantum mechanics have dynamical laws which enable one to say something about the future (or past) state of a physical system if its state is known at a particular time. … the quantum … dynamical law … is the (time-dependent) Schrödinger equation. Given some wave function ψ_0 at a time t_0 , integration of this equation leads to a unique wave function ψ_t at any other time t. At two times t and t’ these uniquely defined wave functions are related by a … time development operator T(t’ , t) on the Hilbert space. Consequently we say that integrating the Schrödinger equation leads to unitary time development.” (p.3)
7. “Quantum mechanics also allows for a stochastic or probabilistic time development … . In order to describe this in a systematic way, one needs the concept of a quantum history … a sequence of quantum events (wave functions or sub-spaces of the Hilbert space) at successive times. A collection of mutually … exclusive histories forms a sample space or family of histories, where each history is associated with a projector on a history Hilbert space. The successive events of a history are, in general, not related to one another through the Schrödinger equation. However, the Schrödinger equation, or … the time development operators T(t’ , t), can be used to assign probabilities to the different histories belonging to a particular family.” (p.3f)
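The statements in points 1–3 can be made concrete in a small toy Hilbert space. The following numpy sketch is an assumed illustration (the 3-dimensional space and the particular vectors are arbitrary choices, not taken from Griffiths): orthogonal wave functions represent distinct physical states, and a sample space is a decomposition of the identity into mutually commuting projectors.

```python
# Toy 3-dimensional Hilbert space: orthogonality and a sample space of
# projectors. Vectors and dimension are arbitrary illustrative choices.
import numpy as np

phi = np.array([1.0, 0.0, 0.0], dtype=complex)
psi = np.array([0.0, 1.0, 1.0], dtype=complex) / np.sqrt(2)
print("inner product:", np.vdot(phi, psi))   # 0 -> distinct physical states

# Projectors onto span{phi} and its orthogonal complement:
# a decomposition of the identity into mutually exclusive possibilities.
P1 = np.outer(phi, phi.conj())
P2 = np.eye(3) - P1
assert np.allclose(P1 @ P2, P2 @ P1)         # mutually commuting
assert np.allclose(P1 + P2, np.eye(3))       # they sum to the identity
assert np.allclose(P1 @ P1, P1)              # idempotent, as a projector must be
```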
1. “The wave functions for even such a simple system as a quantum particle in one dimension form an infinite-dimensional Hilbert space … [but] one does not have to learn functional analysis in order to understand the basic principles of quantum theory. The majority of the illustrations used in Chs. 2–16 are toy models with a finite-dimensional Hilbert space to which the usual rules of linear algebra apply without any qualification, and for these models there are no mathematical subtleties to add to the conceptual difficulties of quantum theory … Nevertheless, they provide many useful insights into general quantum principles.”. (p.4f)
1. Griffiths (2003) makes considerable use of toy models with a simple discretized time dependence … To obtain … unitary time development, one only needs to solve a simple difference equation, and this can be done in closed form on the back of an envelope. (cf. p.5f)
2. Probability theory plays an important role in discussions of the time development of quantum systems. … when using toy models the simplest version of probability theory, based on a finite discrete sample space, is perfectly adequate.” (p.6)
3. “The basic concepts of probability theory are the same in quantum mechanics as in other branches of physics; one does not need a new “quantum probability”. What distinguishes quantum from classical physics is the issue of choosing a suitable sample space with its associated event algebra. … in any single quantum sample space the ordinary rules for probabilistic reasoning are valid. ” (p.6)
1. The important difference compared to classical mechanics is the fact that “an initial quantum state does not single out a particular framework, or sample space of stochastic histories, much less determine which history in the framework will actually occur.” (p.7) There are multiple incompatible frameworks possible and to use the ordinary rules of propositional logic presupposes to apply these to a single framework. Therefore it is important to understand how to choose an appropriate framework.(cf. p.7)
These are the basic ingredients which Griffiths mentions in chapter 1 of his book (2003). In what follows, these ingredients have to be understood well enough that it becomes clear how to work with the idea of a possible history of states (cf. chapters 8ff), where the future of a successor state in a sequence of temporally separated states is described by some probability.
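As a complement, the unitary time development of point 6 can be illustrated for a finite-dimensional toy model of the kind Griffiths favours. The Hamiltonian below is a randomly generated Hermitian matrix, not any particular physical system, and hbar is set to 1; the checks confirm that T(t', t) preserves the norm and composes as T(t3, t1) = T(t3, t2) T(t2, t1).

```python
# Unitary time development T(t', t) = exp(-i H (t' - t)) for a toy model,
# assuming an arbitrary 4x4 Hermitian "Hamiltonian" and hbar = 1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian by construction

def T(t2, t1):
    return expm(-1j * H * (t2 - t1))

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
psi_t = T(3.0, 0.0) @ psi0

print("norm preserved:", np.vdot(psi_t, psi_t).real)                  # ~1.0
print("composition:", np.allclose(T(3.0, 1.0) @ T(1.0, 0.0), T(3.0, 0.0)))
```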
|
4c65d4e507d05bb7 | Lasers and optoelectronics
(content prepared by Yanne Chembo)
Edge-emitter semiconductor laser dynamics
Semiconductor lasers are intrinsically nonlinear components, and they display a very rich variety of complex behaviors. In particular, chaos may arise when the dynamical dimensionality of the laser is increased. Today, several techniques are commonly used to induce chaos in semiconductor lasers; they can be gathered into two principal groups, namely parameter modulation and external feedback. Following the mainstream trend of research in chaos theory, great attention has been paid to the collective dynamics of coupled semiconductor lasers in their chaotic regime. Along that line, the synchronization of such chaotic lasers became a focus of strong interest, and currently the determination of the necessary and/or sufficient conditions for their synchronization is still a difficult challenge, which has turned out to be crucial now that chaotic semiconductor lasers are potentially eligible for hardware cryptography.
The figure displays the bifurcation diagram of a current-modulated semiconductor laser when the mean pumping current is increased (after ref. [1]); a sketch of how such a diagram can be computed is given below. Such lasers have also been proven to display cluster synchronization [2]. On the other hand, we have also investigated the dynamics of external-cavity semiconductor lasers using the Lang-Kobayashi model, as well as their synchronization properties [3]. This research is led in collaboration with the University of Yaoundé I, Cameroon.
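The sketch below uses a generic dimensionless class-B laser model with sinusoidal pump modulation rather than the exact equations of ref. [1]; the lifetime ratio T_ratio, the modulation depth m and the frequency Omega are illustrative values that may need tuning to walk through the period-doubling cascade.

```python
# Sweep of the mean pumping level p0 for a current-modulated class-B laser,
# recording distinct intensity peaks (a crude bifurcation diagram).
# Model and parameters are illustrative, not those of ref. [1].
import numpy as np
from scipy.integrate import solve_ivp

T_ratio, m, Omega = 1000.0, 0.5, 0.02   # lifetime ratio, modulation depth/freq.

def laser(t, z, p0):
    n, s = z                             # carrier density, photon density
    p = p0 * (1.0 + m * np.sin(Omega * t))
    return [(p - n * (1.0 + s)) / T_ratio, (n - 1.0) * s]

for p0 in np.linspace(1.05, 1.5, 10):
    sol = solve_ivp(laser, (0, 20000), [1.0, 1e-3], args=(p0,),
                    max_step=1.0, rtol=1e-8)
    s = sol.y[1][sol.t > 10000]          # discard the transient
    peaks = s[1:-1][(s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])]
    print(p0, np.unique(np.round(peaks, 3))[:5])   # distinct peak heights
```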
VCSELs dynamics
Vertical-Cavity Surface-Emitting Lasers (VCSELs) offer numerous advantages compared to their edge-emitting counterparts. To name just a few, VCSELs are intrinsically single-longitudinal-mode lasers, and they have a significantly lower threshold current as well as a lower power consumption. They are very cost-effective because they can be fabricated simultaneously in a planar structure and then tested "on wafer"; this planar structure also allows for easy integration in two-dimensional arrays. The circular cross-section of VCSELs produces low-divergence beams (thus limiting the need for corrective optics) and enables highly efficient laser-fiber coupling. VCSELs are nowadays particularly widespread in optical fiber data transmission (mostly in gigabit-ethernet networks), free-space optical communications, absorption spectroscopy, laser printers, sensors, pointers and trackers.
A difficult challenge in most VCSEL applications is the design of high-power, single-mode, single-polarization output beams. This is, for example, a critical issue in optical communication networks with ultra-dense wavelength-division multiplexing (UD-WDM), where the spectral spacing between adjacent channels can be as low as 25 GHz. The control of the emission properties of VCSELs can be achieved using polarization- and frequency-selective feedback. The figure (after ref. [4]) shows how numerical simulations based on a modal expansion model recover satisfactorily the main features of the emitted radiation: polarization, wavelength, and spatial orientation. The experimental set-up was developed at the Darmstadt University of Technology, Germany.
Optoelectronic systems
Optoelectronic oscillators are becoming increasingly important hybrid systems in communication technology. They typically consist of a semiconductor laser feeding a Mach-Zehnder modulator, whose output is delayed in a fiber delay line, detected with a photo-detector, amplified, filtered and finally fed back to the radio-frequency input of the modulator. This same architecture can be implemented in two different technologies, depending on the bandwidth of the filter and the length of the fiber delay line inserted into the path of the feedback loop. When the filter bandwidth is large and the fiber delay line is short, the system can display wideband hyperchaos; on the other hand, when the filter is narrowband and the delay line is long, the system outputs a single-mode signal. Other configurations are also possible, enabling curious dynamical features such as narrowband hyperchaos.
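The delayed feedback loop described above is commonly modeled by an Ikeda-type delay-differential equation; the sketch below integrates the wideband (low-pass) version with illustrative parameters (the narrowband case adds an integral, bandpass term).

```python
import numpy as np

# Ikeda-type delay model:  tau * dx/dt + x = beta * cos^2( x(t - T) + phi ),
# integrated with a simple Euler scheme and a circular delay-line buffer.
# Parameter values are illustrative only.
tau, T, beta, phi = 25e-12, 30e-9, 3.0, np.pi / 4
dt = tau / 10.0
delay_steps = int(T / dt)
buf = np.zeros(delay_steps)              # holds x(t - T) ... x(t - dt)
x, out = 0.01, []

for n in range(20 * delay_steps):
    x_delayed = buf[n % delay_steps]     # value stored exactly T seconds ago
    x += (-x + beta * np.cos(x_delayed + phi) ** 2) * dt / tau
    buf[n % delay_steps] = x             # this slot is read again after one delay
    out.append(x)

print(np.std(out[-delay_steps:]))        # nonzero spread: sustained oscillations
```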
The figure (after ref. [5]) displays the experimental setup of an optoelectronic pulse generator, which can be modeled with a hybrid set of equations, namely a delay-differential equation and a nonlinear Schrödinger equation. Activities in this area include the nonlinear and stochastic dynamics of wideband ([6], [7]) and narrowband optoelectronic oscillators ([8], [9], [10]).
c11bc90ad00d2754 | Section 10.2: The Quantum-mechanical Infinite Square Well
In the infinite square well potential, a particle is confined to a box of length L by two infinitely high potential energy barriers:
V = ∞ for x ≤ 0 , V = 0 for 0 < x < L , V = ∞ for x ≥ L .
We begin with the time-independent Schrödinger equation in one dimension for 0 < x < L:
−(ħ2/2m) d2ψ(x)/dx2 = Eψ(x) .
The solution to this differential equation is a combination of sines and cosines (or complex exponentials; here, because of the boundary conditions, we choose sines and cosines). The general solution is
ψ(x) = Asin(kx) + Bcos(kx) ,
where k2 = 2mE/ħ2.
We still need to satisfy the boundary conditions, and thereby determine A, B, and k (and therefore E). For the wall on the left we need ψ(x) = 0 for x ≤ 0, and for the wall on the right we need ψ(x) = 0 for x ≥ L, which means that
ψ(0) = 0 and ψ(L) = 0 .
The cosine part of the general solution does not vanish at the x = 0 boundary since cos(0) ≠ 0, and therefore we must have B = 0. We are left with ψ(x) = Asin(kx) and must determine both A and k. This situation is shown in the animation. The boundary condition at x = 0 is already satisfied, and you can vary the energy to see the effect on the energy eigenfunction (this is the shooting method). In the animation, ħ = 2m = 1. For what values of the energy (and therefore k) is the boundary condition at x = L satisfied?
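The shooting method in the animation is easy to imitate numerically: integrate the equation outward from x = 0 and scan the energy until ψ(L) returns to zero. A minimal sketch in the animation's units (ħ = 2m = 1, L = 1; step sizes are illustrative):

```python
import numpy as np

# With hbar = 2m = 1 the TISE for 0 < x < L reads psi'' = -E psi.
# Integrate from psi(0) = 0 and bisect on E where psi(L) changes sign;
# eigenvalues are expected at E_n = n^2 pi^2 ~ 9.87, 39.48, 88.83, ...
L, dx = 1.0, 1e-3
steps = int(L / dx)

def psi_at_L(E):
    psi, dpsi = 0.0, 1.0              # psi(0) = 0; the slope only sets overall scale
    for _ in range(steps):
        dpsi += -E * psi * dx         # Euler step of psi'' = -E psi
        psi += dpsi * dx
    return psi

Es = np.linspace(1.0, 100.0, 400)
vals = [psi_at_L(E) for E in Es]
for E1, E2, v1, v2 in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
    if v1 * v2 < 0:                   # bracketed a zero of psi(L): bisect it
        for _ in range(40):
            Em = 0.5 * (E1 + E2)
            if psi_at_L(E1) * psi_at_L(Em) < 0:
                E2 = Em
            else:
                E1 = Em
        print(round(Em, 2))           # compare with n^2 pi^2
```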
At the boundary x = L we must have a k such that ψ(L) = Asin(kL) = 0. Satisfying this boundary condition can be accomplished by requiring that kL = nπ, where n = 0, ±1, ±2, ±3,…. We can eliminate the negative values of n as these energy eigenfunctions are different from the positive n values by just an overall phase factor of −1. The n = 0 value must be considered more carefully. If we allow n = 0, this yields k = 0 and amounts to a zero-curvature solution to the time-independent Schrödinger equation. A zero-curvature solution is of the form Ax + B and cannot satisfy the boundary conditions and be non-zero in the well. Therefore, k = 0 is not a valid possibility,2 and we have that:
ψn(x) = Asin(nπx/L) for 0 < x < L with n = 1, 2, 3,… , (10.2)
and zero otherwise. Since k = (2mE/ħ2)1/2, the energy becomes:
En = n2π2ħ2/2mL2. (10.3)
But what about A? The energy eigenfunction must be normalized to satisfy Born's probabilistic interpretation, so
∫ ψn*(x)ψn(x) dx = 1 . [integral from −∞ to +∞] (10.7)
Since ψn(x) is a real function and most of the spatial integral vanishes because the energy eigenfunction is zero everywhere except between 0 and L, we are simply left with calculating
A2 ∫ sin2(nπx/L) dx [integral from −∞ to +∞] → A2 ∫ sin2(nπx/L) dx [integral from 0 to L] = A2L/2 = 1 , (10.8)
which tells us that A = (2/L)1/2. Therefore the energy eigenfunction
ψn(x) = (2/L)1/2sin(nπx/L) for 0 < x < L (10.4)
satisfies the normalization condition. We also can consider the integral
∫ ψm*(x)ψn(x) dx = L n cos(nπ)sin(mπ)/((m2 − n2)π) + L m cos(mπ)sin(nπ)/((n2 − m2)π) = 0 [integral from 0 to L]
for mn. Therefore we can represent these two equations together as
∫ ψm*(x)ψn(x) dx = δmn [integral from 0 to L]
where δmn is the Kronecker delta, defined such that δmn = 1 for m = n and δmn = 0 for m ≠ n. Hence the solutions to the infinite square well are orthogonal and normalized, or orthonormal.
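A quick numerical check of this orthonormality relation:

```python
import numpy as np

# Verify that psi_n(x) = sqrt(2/L) sin(n pi x / L) are orthonormal on [0, L].
L = 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for m in range(1, 4):
    for n in range(1, 4):
        overlap = np.sum(psi(m) * psi(n)) * dx   # Riemann sum of the overlap integral
        print(m, n, round(overlap, 6))           # ~1 when m == n, ~0 otherwise
```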
In the second animation, the first 10 normalized energy eigenfunctions are shown for a box with L = 1, along with the energy spectrum. In the animation ħ = 2m = 1. You can click-drag in the energy spectrum on the left to change the energy state. As you do so, the displayed energy turns from green to red.
Note that these energy eigenfunctions, ψn(x), are only non-zero in the spatial region 0 < x < L and are zero everywhere else.3 These energy eigenfunctions are not so simple after all. In fact, they have a kink (a discontinuous first derivative) at x = 0 and x = L. Normally this is not acceptable for an energy eigenfunction, but the potential energy function for the infinite square well is so badly behaved that these energy eigenfunctions are actually acceptable.
We can now also calculate expectation values of position and momentum. We find that: <x> = L/2 and <p> = 0, as expected. We also find that: <x2> = L2 (1/3 − 1/(2n2π2)), <p2> = n2π2ħ2/L2, and hence <H> = n2π2ħ2/(2mL2), again as expected.
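These expectation values can be verified symbolically, for example with sympy:

```python
import sympy as sp

# Symbolic verification of the quoted expectation values in the state psi_n.
x, L, hbar = sp.symbols('x L hbar', positive=True)
n = sp.symbols('n', positive=True, integer=True)
psi = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

ex = sp.integrate(psi * x * psi, (x, 0, L))                           # -> L/2
ex2 = sp.integrate(psi * x**2 * psi, (x, 0, L))                       # -> L^2 (1/3 - 1/(2 n^2 pi^2))
ep2 = sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2), (x, 0, L))  # -> n^2 pi^2 hbar^2 / L^2
print(sp.simplify(ex), sp.simplify(ex2), sp.simplify(ep2))
```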
2Such an argument is also made by M. A. Morrison in Understanding Quantum Physics: A User's Manual. The correct derivation of why the k = 0 case cannot exist in the infinite square well was originally stated by M. Bowen and J. Coster, "Infinite Square Well: A Common Mistake," Am. J. Phys. 49, 80-81 (1980) with follow-up discussion in R. C. Sapp, "Ground State of the Particle in a Box," Am. J. Phys. 50, 1152-1153 (1982) and L. Yinji and H. Xianhuai, "A Particle Ground State in the Infinite Square Well," Am. J. Phys. 54, 738 (1986).
3These energy eigenfunctions can also be written as ψn(x) = (2/L)1/2 sin(nπx/L) Θ(x) Θ(L − x). This representation uses two Heaviside step functions, Θ(ξ), to explicitly show the region in which the energy eigenfunction is valid. This can be accomplished because the step function, Θ(ξ), is zero for ξ < 0 and is 1 for ξ > 0.
b181953e41fd89f5 | Consciousness Studies/Measurement In Quantum Physics And The Preferred Basis Problem
< Consciousness Studies
The Measurement ProblemEdit
In quantum physics the probability of an event is deduced by taking the square of the amplitude for the event to happen. The term "amplitude for an event" arises because of the way that the Schrödinger equation is derived using the mathematics of ordinary, classical waves, where the amplitude over a small area is related to the number of photons hitting the area. In the case of light, the probability of a photon hitting that area will be related to the number of photons hitting the area divided by the total number of photons released. The number of photons hitting an area per second is the intensity or amplitude of the light on the area; hence the probability of finding a photon is related to "amplitude".
However, the Schrödinger equation is not a classical wave equation. It does not determine events, it simply tells us the probability of an event. In fact the Schrödinger equation in itself does not tell us that an event occurs at all, it is only when a measurement is made that an event occurs. The measurement is said to cause state vector reduction. This role of measurement in quantum theory is known as the measurement problem. The measurement problem asks how a definite event can arise out of a theory that only predicts a continuous probability for events.
Two broad classes of theory have been advanced to explain the measurement problem. In the first it is proposed that observation produces a sudden change in the quantum system so that a particle becomes localised or has a definite momentum. This type of explanation is known as collapse of the wavefunction. In the second it is proposed that the probabilistic Schrödinger equation is always correct and that, for some reason, the observer only observes one particular outcome for an event. This type of explanation is known as the relative state interpretation. In the past thirty years relative state interpretations, especially Everett's relative state interpretation, have become favoured amongst quantum physicists.
The quantum probability problemEdit
The measurement problem is particularly problematical when a single particle is considered. Quantum theory differs from classical theory because it is found that a single photon seems to be able to interfere with itself. If there are many photons then probabilities can be expressed in terms of the ratio of the number hitting a particular place to the total number released but if there is only one photon then this does not make sense. When only one photon is released from a light source quantum theory still gives us a probability for a photon to hit a particular area but what does this mean at any instant if there is indeed only one photon?
If the Everettian interpretation of quantum mechanics is invoked then it might seem that the probability of the photon hitting an area in your particular universe is related to the occurrences of the photon in all the other universes. But in the Everettian interpretation even the improbable universes occur. This leads to a problem known as the quantum probability problem:
If the universe splits after a measurement, with every possible measurement outcome realised in some branch, then how can it make sense to talk about the probabilities of each outcome? Each outcome occurs.
This means that if our phenomenal consciousness is a set of events then there would be endless copies of these sets of events, almost all of which are almost entirely improbable to an observer outside the brain but all of which exist according to an Everettian interpretation. Which set is you? Why should 'you' conform to what happens in the environment around you?
The preferred basis problemEdit
It could be held that you assess probabilities in terms of the branch of the universe in which you find yourself but then why do you find yourself in a particular branch? Decoherence Theory is one approach to these questions. In decoherence theory the environment is a complex form that can only interact with particles in particular ways. As a result quantum phenomena are rapidly smoothed out in a series of micro-measurements so that the macro-scale universe appears quasi-classical. The form of the environment is known as the preferred basis for quantum decoherence. This then leads to the preferred basis problem in which it is asked how the environment occurs or whether the state of the environment depends on any other system.
According to most forms of decoherence theory 'you' are a part of the environment and hence determined by the preferred basis. From the viewpoint of phenomenal consciousness this does not seem unreasonable because it has always been understood that the conscious observer does not observe things as quantum superpositions. The conscious observation is a classical observation.
However, the arguments that are used to derive this idea of the classical, conscious observer contain dubious assumptions that may be hindering the progress of quantum physics. The assumption that the conscious observer is simply an information system is particularly dubious:
"Here we are using aware in a down - to - earth sense: Quite simply, observers know what they know. Their information processing machinery (that must underlie higher functions of the mind such as "consciousness") can readily consult the content of their memory. (Zurek 2003).
This assumption is the same as assuming that the conscious observer is a set of measurements rather than an observation. It makes the rest of Zurek's argument about decoherence and the observer into a tautology - given that observations are measurements then observations will be like measurements. However, conscious observation is not simply a change of state in a neuron, a "measurement", it is the entire manifold of conscious experience.
In his 2003 review of this topic Zurek makes clear an important feature of information theory when he states that:
There is no information without representation.
So the contents of conscious observation are states that correspond to states of the environment in the brain (i.e.: measurements). But how do these states in the brain arise? The issue that arises here is whether the representation, the contents of consciousness, is entirely due to the environment or due to some degree to the form of conscious observation. Suppose we make the reasonable assumption that conscious observation is due to some physical field in the dendrites of neurons rather than in the action potentials that transmit the state of the neurons from place to place. This field would not necessarily be constrained by decoherence; there are many possibilities for the field, for instance, it could be a radio frequency field due to impulses or some other electromagnetic field (cf: Anglin & Zurek (1996)) or some quantum state of macromolecules etc.. Such a field might contain many superposed possibilities for the state of the underlying neurons and although these would not affect sensations, they could affect the firing patterns of neurons and create actions in the world that are not determined by the environmental "preferred basis".
Zeh (2000) provides a mature review of the problem of conscious observation. For example he realises that memory is not the same as consciousness:
"The genuine carriers of consciousness ... must not in general be expected to represent memory states, as there do not seem to be permanent contents of consciousness."
and notes of memory states that they must enter some other system to become part of observation:
"To most of these states, however, the true physical carrier of consciousness somewhere in the brain may still represent an external observer system, with whom they have to interact in order to be perceived. Regardless of whether the ultimate observer systems are quasi-classical or possess essential quantum aspects, consciousness can only be related to factor states (of systems assumed to be localized in the brain) that appear in branches (robust components) of the global wave function — provided the Schrodinger equation is exact. Environmental decoherence represents entanglement (but not any “distortion” — of the brain, in this case), while ensembles of wave functions, representing various potential (unpredictable) outcomes, would require a dynamical collapse (that has never been observed)."
However, Zeh (2003) points out that events may be irreversibly determined by decoherence before information from them reaches the observer. This might give rise to a multiple worlds and multiple minds mixture for the universe, the multiple minds being superposed states of the part of the world that is the mind. Such an interpretation would be consistent with the apparently epiphenomenal nature of mind. A mind that interacts only weakly with the consensus physical world, perhaps only approving or rejecting passing actions would be an ideal candidate for a QM multiple minds hypothesis.
Further reading and referencesEdit
• Pearle, P. (1997). True collapse and false collapse. In Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Philadelphia, PA, USA, September 8-11, 1994, pp. 51-68. Edited by Da Hsuan Feng and Bei Lok Hu. Cambridge, MA: International Press.
• Zeh, H. D. (1979). Quantum Theory and Time Asymmetry. Foundations of Physics, Vol 9, pp 803-818 (1979).
• Zeh, H.D. (2000). The Problem of Conscious Observation in Quantum Mechanical Description. Epistemological Letters of the Ferdinand-Gonseth Association in Biel (Switzerland), Letter No. 63.0.1981, updated 2000.
• Zeh, H.D. (2003). Decoherence and the Appearance of a Classical World in Quantum Theory, second edition, with E. Joos, C. Kiefer, D. Giulini, J. Kupsch, and I.-O. Stamatescu. Chapter 2: Basic Concepts and their Interpretation.
b9cd9ba2d1c4473c |
Open Access Nano Express
Weak and strong confinements in prismatic and cylindrical nanostructures
Yuri V Vorobiev1*, Bruno Mera2, Vítor R Vieira2, Paul P Horley3 and Jesús González-Hernández3
Author Affiliations
1 CINVESTAV-Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla, Querétaro, QRO, 76230, Mexico
2 Centro de Física das Interacções Fundamentais, Instituto Superior Técnico, Universidade Técnica de Lisboa, Avenida Rovisco Pais, Lisbon, 1049-001, Portugal
3 CIMAV Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, Chihuahua, CHIH, 31109, Mexico
Received: 16 April 2012
Accepted: 19 June 2012
Published: 5 July 2012
© 2012 Vorobiev et al.; licensee Springer.
Cylindrical nanostructures, namely nanowires and pores with rectangular and circular cross sections, are examined using mirror boundary conditions to solve the Schrödinger equation within the effective mass approximation. The boundary conditions are stated as the magnitude equivalence of the electron's Ψ function at an arbitrary point inside a three-dimensional quantum well and at the image point formed by mirror reflection in the walls defining the nanostructure. Thus, two types of boundary conditions, even and odd, can be applied, in which the Ψ functions at a point and its image are equated with the same or the opposite sign, correspondingly. In the former case, the Ψ function is non-zero at the boundary, which is the case of weak confinement. In the latter case, the Ψ function vanishes at the boundary, corresponding to strong quantum confinement. The analytical expressions for the energy spectra of an electron confined within a nanostructure obtained in the paper show reasonable agreement with the experimental data without using any fitting parameters.
Nanostructures (NS) of different kinds have been actively studied during the last two decades, both theoretically and experimentally. Special interest has focused on quasi-one-dimensional NS such as nanowires, nanorods, and elongated pores, which not only modify the main material parameters but are also capable of introducing totally new characteristics such as optical and electrical anisotropy, birefringence, etc. In particular, the existence of nanoscale formations on the surface (or embedded into the semiconductor) results in quantum confinement effects. As the motion of the carriers (or excitons) becomes restrained, their energy spectra change, moving the permitted energy levels towards higher energies as a consequence of confinement. In experimental measurements, such a modification is noticed as a blueshift of energy-related characteristics such as, for example, the absorption edge. This paper is dedicated to the theoretical investigation of the confined particle problem, aiming to explain the available experimental data based on the geometry of the corresponding nanoparticles present in the particular material. Here, we focus on elongated NS that can be approximated as prisms or cylinders with different shapes of cross section.
The theoretical treatment of NS is based on the solution of the Schrödinger equation, usually within the effective mass approximation [1-4], although for small NS such an approach can be questioned, because the symmetry describing a nanoparticle may not inherit its shape symmetry but would rather depend on the atomistic symmetry [5]. In addition, at small scales it becomes necessary to take into account atomic relaxation and piezoelectric phenomena [6] that may strongly influence the energy states of confined particles and split their energy levels. The detailed consideration of these phenomena can be accounted for using the pseudopotential method [7] introduced by Zunger's group, which, after a decade, became a standard energy level model for the detailed description of quantum dots. However, in cases when the dimensions of nano-objects are large enough to validate the effective mass approximation, it is possible to obtain an analytical solution to the problem of a particle confined within a quantum dot.
An important element of the quantum mechanical description is the boundary conditions; the traditional impenetrable wall conditions are (1) not always realistic and (2) in many cases (depending on the shape of the NS) cannot be written in simple analytical form, thus complicating the further analysis. To overcome these problems, we proposed to use mirrorlike boundary conditions [8-10], assuming that the electron confined in an NS is specularly reflected by its walls acting as mirrors. In addition to a significant simplification of the solution, this method favors the effective mass approximation.
Within the same framework, one can study pores as ‘inverted’ nanostructures (i.e., a void surrounded by semiconductor material) considering the ‘reflection’ of the particle's wave function from the surfaces limiting a pore. Thus, one will obtain essentially the same solution of the Schrödinger equation (and the energy spectrum) for both the pore and NS of the same geometry and size. A previous attempt to treat walls of a quantum system as mirrors in quantum billiard problem [11] yielded quite a complicated analytical form of the boundary conditions that made the solution of Schrödinger equation considerably more difficult.
In our treatment of the NS boundary as a mirror, the boundary condition equates the absolute values of the particle's Ψ function at an arbitrary point inside the NS and at the corresponding image point with respect to a mirror-reflective wall. Thus, depending on the sign of the equated Ψ values, one obtains even and odd mirror boundary conditions. For the case of odd mirror boundary conditions (OMBC), the Ψ functions at a real point and its image have opposite signs, which means that the incident and reflected de Broglie waves cancel each other at the boundary. This case is equivalent to impenetrable walls with a vanishing Ψ function at the boundary, representing a ‘strong’ confinement case. However, some experimental data (see, e.g., [4]) show evidence that a particle may penetrate the barrier, later returning into the confined volume. Thus, the wave function will not vanish at the boundary, and the system should be considered a ‘weak’ confinement case as long as the particle flux through the boundary is absent. This case corresponds to even mirror boundary conditions (EMBC), when the Ψ function at a real point and its images are the same. Below, we analyze solutions of the Schrödinger equation for several cylindrical structures, using mirror boundary conditions of both types and comparing the energy spectra obtained with experimental data found in the literature.
We start with the simplest case, one that can easily be treated on the basis of the traditional approach - an NS shaped as a rectangular prism with a square base (with the sides a = b oriented along the axes x and y; the side c > a is set along the z direction). Assuming, as is usually done in the literature, the absence of a potential inside the NS and separating the variables, we look for the solution of the stationary Schrödinger equation ΔΨ + k2Ψ = 0 (where k2 = 2mE/ħ2, m being the particle's effective mass) as the product of plane waves propagating in both directions along the coordinate axes:
For this case, the even mirror boundary conditions are as follows [10]:
That renders the following solution (Equation 1) of the Schrödinger equation:
with wave vector components
kx = πn1/a , ky = πn2/a , kz = πn3/c . (4)
It gives the following energy spectrum:
E = (π2ħ2/2m)(n12/a2 + n22/a2 + n32/c2) . (5)
The odd mirror boundary conditions are obtained from Equation 2 by inverting the sign of the left-hand-side function. The solution will then be as follows:
The wave vector components will be the same as those presented in Equation 4, yielding the same energy spectrum (Equation 5). Using the traditional impenetrable wall boundaries, one will also obtain the solution in the form (Equation 6) that coincides with the OMBC solution, which has a vanishing Ψ function at the boundary. Therefore, the energy spectrum is the same for both types of mirror boundary conditions and for the impenetrable wall boundary, although the solutions themselves are not equal. In [10], we demonstrated that for an NS of spherical shape, the energy spectrum found with EMBC (weak confinement) is different from that corresponding to impenetrable wall conditions.
From Equation 5, it is evident that the energy spectrum of prismatic (cylindrical) NS is a sum of the spectra corresponding to the two-dimensional cross-section NS (a square with side length a) and the one-dimensional wire of length c. In a similar manner, the spectrum for cylinders with other cross-section shapes can be constructed using the solutions for two-dimensional triangular or hexagonal structures analyzed previously [8,9]. Below, we present the analysis of cylindrical NS.
Let us consider a nanostructure with a circular cross section of diameter a and cylinder height c. The solution of the problem using a traditional approach can be found in [12,13]. In our case, we make variable separation in cylindrical coordinates:
Ψ(r, φ, z) = F(r) exp(ipφ) [B exp(ikz) + C exp(−ikz)] . (7)
We note that the value of p defines the angular momentum: L = pħ. In the case of EMBC, one can apply mirror reflection from the base, which gives B = C, resulting in the following wave function:
Ψ(r, φ, z) = 2B F(r) exp(ipφ) cos(kz) . (7A)
Strong confinement (OMBC) gives B = −C, which introduces sinkz instead of coskz in Equation 7A.
The radial function F(r) is the solution of the following radial equation:
F″(r) + F′(r)/r + (k2 − p2/r2) F(r) = 0 .
It is Bessel's differential equation in the variable kr, the solution of which is given by the cylindrical Bessel function of integer order |p|: J|p|(kr), with k = ħ−1(2mEn)1/2. Here, m is the effective mass of the particle, and En is the quantized kinetic energy corresponding to the motion in the two-dimensional circular quantum well. The total energy consists of the energy contributions for the motion within the cross-section plane and along the vertical axis z: E = En + Ez.
The energy En depends on the values of k and is obtained using boundary conditions. In the traditional case of impenetrable walls, the Ψ function vanishes at the boundary so that the energy values are determined by the roots (nodes) of the cylindrical Bessel function (see Figure 1 for different order numbers n, and also Table 1). The same situation will take place for OMBC, yielding zero wave function at the boundary so that the nodes q|p|i of the Bessel function will define the energy values.
Figure 1. Cylindrical Bessel functions Jn(x). Curve numbers correspond to order n.
Table 1. Argument values at nodes and extremes of cylindrical Bessel function
If the EMBC are used, the situation becomes different, since the function values at points approaching the boundary of the nanostructure should match those at the image points, making the boundary correspond to the extremes of the Bessel function (which was strictly proved for spherical quantum dots (QDs) [10]).
Table 1 gives several values of the Bessel function argument kr corresponding to the function nodes (q|p|i) and extremes (t|p|i) calculated for function orders 0, 1, 2, and 3.
At the boundary, r = a/2; therefore, the corresponding value of k is 2q|p|i/a for OMBC and 2t|p|i/a for EMBC. The energy spectrum for a particle confined in a circular-shaped quantum well is as follows:
En = 2ħ2s|p|i2/(ma2) . (9)
Here, the parameter s|p|i takes the values of q|p|i for OMBC (strong confinement) and t|p|i for EMBC (weak confinement).
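The nodes q|p|i can be generated directly with scipy (the extrema t|p|i are as tabulated in Table 1); a short sketch, with an illustrative 5-nm diameter and the free-electron mass standing in for the effective mass:

```python
import numpy as np
from scipy.special import jn_zeros

# Nodes q_|p|i of the cylindrical Bessel functions J_p (compare Table 1);
# these fix the OMBC (strong-confinement) energies E_n = 2 hbar^2 q^2 / (m a^2).
for p in range(4):
    print(p, np.round(jn_zeros(p, 3), 4))         # first three nodes of J_p

hbar, m0, eV = 1.0546e-34, 9.109e-31, 1.602e-19
a = 5e-9                                           # illustrative 5-nm diameter
q01 = jn_zeros(0, 1)[0]                            # ground state root, 2.4048
print(2 * hbar**2 * q01**2 / (m0 * a**2) / eV, "eV")  # free-electron mass assumed
```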
The quantization along the z axis for both boundary condition types will be kz = nzπ/c, yielding the total energy
E = 2ħ2s|p|i2/(ma2) + π2ħ2nz2/(2mc2) . (10)
In the case of EMBC, the ground state (GS) energy will be obtained with t11 = 1.625:
EGS = 2ħ2(1.625)2/(ma2) . (11)
In the OMBC case, the GS will be determined by the smallest q value of 2.4:
EGS = 2ħ2(2.4)2/(ma2) . (11A)
Equations 10, 11, and 11A can be used for the analysis of optical processes in the NS discussed. In particular, blueshift in exciton ground state can be found from Equations 11 and 11A if one substitutes a reduced exciton mass in place of particle mass m. Using Equation 10, it is possible to obtain in a similar way the energies corresponding to the higher excited states.
For long NS with sufficiently large c, the second term in the energy does not affect the GS. Thus, the solution for cylindrical NS based on even mirror boundary conditions (EMBC, weak confinement) gives a GS shift due to quantum confinement that is (2.4/1.625)2 = 2.18 times smaller than the value obtained for the strong confinement case. In the case of a spherical QD [10], the difference was four times. It is reasonable that for strong confinement the blueshift exceeds that obtained for the weak confinement case. To illustrate this, we present in Figure 2 the dependence on NS diameter of the ground state energy obtained with OMBC and EMBC (using Equations 11 and 11A) for a cylindrical quantum well with the parameters of silicon (effective mass 0.26 for an electron and 0.49 for a hole, corresponding to a reduced exciton mass of 0.17; the bandgap is 1.1 eV at 300 K). As one can see from the figure, the difference in the exciton bandgap decreases as the NS diameter increases, with invariably higher values for the strong confinement case described by OMBC.
Figure 2. Dependence of ground state energy on diameter of a cylindrical nanostructure. The plot shows the data obtained with odd and even mirror boundary conditions for an NS with parameters of silicon.
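A sketch of the computation behind Figure 2, using the silicon parameters quoted above (reduced exciton mass 0.17 m0, bandgap 1.1 eV); the diameters are illustrative:

```python
import numpy as np

# Exciton ground-state energy of a silicon cylindrical NS versus diameter a,
# for strong (OMBC, s = 2.4) and weak (EMBC, s = 1.625) confinement:
#   E = E_gap + 2 hbar^2 s^2 / (mu a^2),  mu = 0.17 m0.
hbar, m0, eV = 1.0546e-34, 9.109e-31, 1.602e-19
mu, E_gap = 0.17 * m0, 1.1

for a_nm in (2, 4, 6, 8, 10):                     # illustrative diameters
    a = a_nm * 1e-9
    shift = lambda s: 2 * hbar**2 * s**2 / (mu * a**2) / eV
    print(f"{a_nm} nm: OMBC {E_gap + shift(2.4):.3f} eV,"
          f" EMBC {E_gap + shift(1.625):.3f} eV")
```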
The choice of OMBC or EMBC has to be made taking into account the probability of electron tunneling through the walls forming the nanostructure. One can expect that in the case of isolated NS strong confinement (OMBC), approximation will be more appropriate, whereas for NS surrounded by other solid or liquid media (core-shell QDs [10] and pores in semiconductor media), weak confinement with EMBC should be used.
Results and discussion
Considerable scientific interest has been attracted by semiconductor nanorods (nanowires) and cylindrical pores. Let us mention here publications dealing with arrays of cylindrical pores in sapphire [14], ZnO nanorods grown within these pores [15], as well as CuS and In2O3 nanowires. Usually, the experiments report on relatively large structures measuring 30 nm or more in diameter. As one can see from Equations 11 and 11A, in these cases the expected blueshift will be about 0.01 eV or less for both weak and strong confinement. Nevertheless, there exist literature data referring to nanorods of sufficiently small diameter for a pronounced confinement effect.
A paper [16] reports on CdS nanorods with a diameter of 5 nm and a length of 40 nm embedded into a liquid crystal. The authors study the optical anisotropy caused by the alignment of the nanorods. To determine it, they measure the polarization of photoluminescence due to electron–hole recombination, reporting that the spectral maximum of luminescence is located at 485 nm (2.56 eV), which exceeds the bandgap of bulk CdS by 0.14 eV. Taking the electron effective mass in CdS [17] as 0.16m0 and the hole effective mass as 0.53m0, one can find the reduced mass μ = 0.134m0 and a blueshift of 0.12 eV using Equation 11, which agrees reasonably with the experiment. As the CdS nanostructure is surrounded by the liquid crystal medium, we used the EMBC, or weak confinement, approximation.
Another study [18] is focused on the optical properties of CuS nanorods measuring 6 to 8 nm in diameter and 40 to 60 nm in length; the authors report a definite blueshift of the fundamental absorption edge. Alas, we found no data on the effective masses for CuS, so it was not possible to make a numerical comparison with the theory.
A particular example of cylindrical QDs is presented by quasi-circular organic molecules like coronene C24H12 (see Figure 3). In this case c << a, which makes the second term in Equations 10, 11, and 11A very large even for nz = 1, meaning that it does not contribute to the optical properties of the molecule in visible light, because transitions between states with different nz correspond to radiation in the deep ultraviolet. Therefore, the spectrum is defined by the first term in Equations 10 and 11, which essentially replicates the solution obtained for the case of a long cylinder.
Figure 3. Coronene molecule (a) formula and (b) computer-rendered three-dimensional image.
Another paper [19] presents experimental data concerning the optical properties of coronene molecules in tetrahydrofuran (THF) solution. Since the molecules are immersed in a medium, we expect that weak confinement/EMBC will be most appropriate for the solution of the problem. Strong absorption lines were registered at photon energies of 4.1 to 4.3 eV, with weaker absorption down to 3.5 eV. To use our methodology, one should first determine the diameter a of the circle embracing the molecule with its 12 outer carbon atoms (Figure 3).
The C-C bond length in coronene is d = 1.4 Å, which corresponds to the side of a hexagon. Thus, one would have a = 2√7 d = 0.741 nm. Taking m in Equation 11 as the free electron mass and using only the first term, we obtain the ground state energy EGS = 0.73 eV. The higher energy states (Equation 10) will be defined by the values of s|p|i = t|p|i equal to 2.92, 3.713, 4.30, etc. The corresponding energies are 2.353, 3.805, and 5.1 eV, resulting in transition energies of 1.62, 3.1, and 4.37 eV. The first value is out of the spectral range investigated in [19]; the other two could reasonably fit the absorption observed.
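The coronene estimate can be retraced step by step (constants are rounded, so the last digits differ slightly from those quoted above):

```python
import numpy as np

# Embracing-circle diameter a = 2*sqrt(7)*d for C-C bond d = 0.14 nm, then
# EMBC levels E_i = 2 hbar^2 t^2 / (m0 a^2) for t = 1.625, 2.92, 3.713, 4.30
# (free electron mass, first term of Equation 10 only).
hbar, m0, eV = 1.0546e-34, 9.109e-31, 1.602e-19
d = 0.14e-9
a = 2 * np.sqrt(7) * d
print(round(a * 1e9, 3), "nm")                              # 0.741 nm

E = [2 * hbar**2 * t**2 / (m0 * a**2) / eV for t in (1.625, 2.92, 3.713, 4.30)]
print([round(e, 2) for e in E])                             # ~0.73, 2.35, 3.81, 5.10 eV
print([round(e - E[0], 2) for e in E[1:]])                  # transitions ~1.6, 3.1, 4.4 eV
```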
If we attempt to treat the case on the basis of the strong confinement approximation (OMBC), one should use the q|p|i values in the formulas (Equations 10 and 11A), yielding a ground state of 1.591 eV and excited states at 3.78, 7.21, and 8.35 eV. The transition energies would then be 2.19, 5.62, and 6.76 eV, which have nothing in common with the experimental values, confirming that the previous decision to use EMBC, based on the fact that the coronene molecules are embedded in the THF medium, was the right one.
Yet another paper [20] is devoted to studying coronene-like nitride molecules with the composition N12X12H12, where X can be B, Al, Ga or In. Depending on X, the bond length will vary, giving different values of the well diameter a. The authors of [20] give the transition energies between the ground state and the first excited state, corresponding to the HOMO-LUMO transition EHL. For these isolated molecules, the strong confinement case/OMBC is expected to be appropriate. The bond lengths and EHL values reported in [20] are listed in Table 2, together with the values of a calculated from the bond length and the transition energies ΔE found using the expression (Equation 10) with the corresponding q values. One can see that the ΔE values are reasonably close to the experimental EHL. Solution of the same problem using weak confinement/EMBC results in large discrepancies and fails to explain the experimental data, confirming the correctness of the decision to choose OMBC for isolated molecules.
Table 2. The lowest transition energies in coronene-like molecules
Conclusions
A theoretical description of prismatic and cylindrical nanostructures (including pores in a semiconductor) is given using two types of mirror boundary conditions for the solution of the Schrödinger equation, resulting in a simple analytical procedure to obtain wave functions that offer a reasonably good description of the optical properties of nanostructures of various shapes. The expressions for the energy spectra are defined by the geometry and dimensions of the nanostructures. The even mirror boundary conditions correspond to weak confinement, which is applicable when the nanostructure is embedded in another medium (especially in the case of a pore) that enables tunneling through the boundary of the nanostructure. In contrast, odd mirror boundary conditions are more appropriate for the treatment of isolated nanostructures, where strong confinement exists. Both cases are illustrated with experimental data, proving the good applicability of the corresponding type of boundary conditions.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
YVV and VRV performed calculations and drafted the manuscript. BM helped in drafting the manuscript. PPH and JG-H provided helpful discussions and improvement for the manuscript. All authors read and approved the final manuscript.
Acknowledgements
The authors thank the FCT Projeto Estratégico PEst-OE/FIS/UI0091/2011 (Portugal) and CONACYT Basic Science Project 129269 (Mexico).
1. Efros AL, Efros AL: Interband absorption of light in a semiconductor sphere. Sov. Phys. Semicond. 1982, 16(7):772-775.
2. Gaponenko SV: Optical Properties of Semiconductor Nanocrystals. Cambridge University Press, Cambridge; 1998.
3. Liu JL, Wu WG, Balandin A, Jin GL, Wang KL: Intersubband absorption in boron-doped multiple Ge quantum dots. Appl. Phys. Lett. 1999, 74:185-187.
4. Dabbousi BO, Rodriguez-Viejo J, Mikulec FV, Heine JR, Mattoussi H, Ober R, Jensen KF, Bawendi MG: (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J. Phys. Chem. B 1997, 101:9463-9475.
5. Bester G, Zunger A: Cylindrically shaped zinc-blende semiconductor quantum dots do not have cylindrical symmetry: atomistic symmetry, atomic relaxation, and piezoelectric effects. Phys. Rev. B 2005, 71:045318.
6. Bester G, Wu X, Vanderbilt D, Zunger A: Importance of second-order piezoelectric effects in zinc-blende semiconductors. Phys. Rev. Lett. 2006, 96:187602.
7. Zunger A: Pseudopotential theory of semiconductor quantum dots. Phys. Stat. Sol. B 2001, 224:727-734.
8. Phys. Stat. Sol. C 2008, 5:3802-3805.
9. Science in China Series E: Technological Sciences 2009, 52:15-18.
10. Physica E 2010, 42:2264-2267.
11. Liboff RL, Greenberg J: The hexagon quantum billiard. J. Stat. Phys. 2001, 105:389-402.
12. Robinett RW: Visualizing the solutions for the circular infinite well in quantum and classical mechanics. Am. J. Phys. 1996, 64(4):440-446.
13. Mel’nikov LA, Kurganov AV: Model of a quantum well rolled up into a cylinder and its applications to the calculation of the energy structure of tubelene. Tech. Phys. Lett. 1997, 23(1):65-67.
14. Choi J, Luo Y, Wehrspohn RB, Hilebrand R, Schilling J, Gösele U: Perfect two-dimensional porous alumina photonic crystals with duplex oxide layers. J. Appl. Phys. 2003, 94(4):4757-4762.
15. Zheng MJ, Zhang LD, Li GH, Shen WZ: Fabrication and optical properties of large-scale uniform zinc oxide nanowire arrays by one-step electrochemical deposition technique. Chem. Phys. Lett. 2002, 363:123-128.
16. Wu K-J, Chu K-C, Chao C-Y, Chen YF, Lai C-W, Kang CC, Chen C-Y, Chou P-T: CdS nanorods embedded in liquid crystal cells for smart optoelectronic devices. Nano Lett. 2007, 7(1):1908-1913.
17. Singh J: Physics of Semiconductors and Their Heterostructures. McGraw-Hill, New York; 1993.
18. Freeda MA, Mahadevan CK, Ramalingom S: Optical and electrical properties of CuS nanorods. Archives of Physics Research 2011, 2(3):175-179.
19. Xiao J, Yang H, Yin Z, Guo J, Boey F, Zhang H, Zhang O: Preparation, characterization and photoswitching/light-emitting behaviors of coronene nanowires. J. Mater. Chem. 2011, 21:1423-1427.
20. Chigo Anota E, Salazar Villanueva M, Hernández Cocoletzi H: Electronic properties of group III-A nitride sheets by molecular simulation. Physica Status Solidi C 2010, 7:2252-2254.
7e30cc13fc5dcb71 |
From what I remember in my undergraduate quantum mechanics class, we treated scattering of non-relativistic particles from a static potential like this:
1. Solve the time-independent Schrodinger equation to find the energy eigenstates. There will be a continuous spectrum of energy eigenvalues.
2. In the region to the left of the potential, identify a piece of the wavefunction that looks like $Ae^{i(kx - \omega t)}$ as the incoming wave.
3. Ensure that to the right of the potential, there is no piece of the wavefunction that looks like $Be^{-i(kx + \omega t)}$, because we only want to have a wave coming in from the left.
4. Identify a piece of the wavefunction to the left of the potential that looks like $R e^{-i(kx + \omega t)}$ as a reflected wave.
5. Identify a piece of the wavefunction to the right of the potential that looks like $T e^{i(kx - \omega t)}$ as a transmitted wave.
6. Show that $|R|^2 + |T|^2 = |A|^2$. Interpret $\frac{|R|^2}{|A|^2}$ as the probability for reflection and $\frac{|T|^2}{|A|^2}$ as the probability for transmission.
This entire process doesn't seem to have anything to do with a real scattering event - where a real particle is scattered by a scattering potential - since we do all our analysis on stationary waves. Why should such a naive procedure produce reasonable results for something like Rutherford's foil experiment, in which alpha particles are in motion as they collide with nuclei, and in which the wavefunction of the alpha particle is typically localized in a (moving) volume much smaller than the scattering region?
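For concreteness, here is a short numerical instance of the recipe in the list above, for a rectangular barrier of height $V_0$ and width $a$ (parameter values are illustrative; the closed-form transmission probability used is the standard textbook result):

```python
import numpy as np

# Rectangular barrier V = V0 on 0 < x < a.  For E < V0 the standard result is
#   T(E) = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E))),
# with kappa = sqrt(2 m (V0 - E)) / hbar, and R = 1 - T.  Units: hbar = m = 1.
V0, a = 10.0, 1.0

def T(E):
    kappa = np.sqrt(2.0 * (V0 - E))
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4.0 * E * (V0 - E)))

for E in (1.0, 5.0, 9.0):
    print(E, round(T(E), 8), round(1.0 - T(E), 8))   # R + T = 1 by construction
```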
share|improve this question
Essentially because the dynamical problem only interests you in the limit where $T_i \to \infty$, $T_f \to \infty$ and by Lippmann-Schwinger equation it can be shown that all you need to do is to match the asymptotic states of the time-independent Hamiltonian (which is precisely what you describe, although nobody will tell you this in the undergraduate class). This can be developed more fully into the S-matrix theory, fundamental to all of scattering problems. I'll see if I can get to a more complete answer later. – Marek Jul 22 '11 at 11:50
This really bothered me too when I first took quantum mechanics. – Ted Bunn Jul 22 '11 at 14:11
6 Answers 6
up vote 9 down vote accepted
This is fundamentally no more difficult than understanding how quantum mechanics describes particle motion using plane waves. If you have a delocalized wavefunction $\exp(ipx)$ it describes a particle moving to the right with velocity p/m. But such a particle is already everywhere at once; only superpositions of such states actually move in time:
$$\int \psi_k(p) e^{ipx - iE(p) t} dp$$
where $\psi_k(p)$ is a sharp bump at $p=k$, not a delta-function, but narrow. The superposition using this bump gives a wide spatial waveform centered at x=0 at t=0. At large negative times, the fast phase oscillation kills the bump at x=0, but it creates a new bump at those x's where the phase is stationary, that is, where
$${\partial\over\partial p}( p x - E(p)t ) = 0$$
or, since the superposition is sharp near k, where
$$ x = E'(k)t$$
which means that the bump is moving with a steady speed as determined by Hamilton's laws. The total probability is conserved, so that the integral of psi squared on the bump is conserved.
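This stationary-phase picture is easy to check numerically: superpose plane waves with a narrow Gaussian weight around $k_0$ and watch the peak of $|\psi|^2$ travel at $E'(k_0)$. A sketch with $\hbar = m = 1$ and illustrative packet parameters:

```python
import numpy as np

# Superpose e^{i(kx - E(k)t)} with a narrow Gaussian weight around k0 and
# verify that the |psi|^2 bump moves at the group velocity E'(k0) = k0.
k0, sigma_k = 5.0, 0.3                                 # illustrative packet
k = np.linspace(k0 - 5 * sigma_k, k0 + 5 * sigma_k, 400)
A = np.exp(-(k - k0) ** 2 / (2 * sigma_k ** 2))        # sharp bump at p = k0
x = np.linspace(-20.0, 80.0, 2000)

for t in (0.0, 5.0, 10.0):
    phases = np.exp(1j * (np.outer(x, k) - 0.5 * k**2 * t))  # E(k) = k^2/2
    psi = phases @ A                                   # the superposition integral
    print(t, round(x[np.argmax(np.abs(psi) ** 2)], 1)) # peak sits near x = k0 * t
```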
The actual time-dependent scattering event is a superposition of stationary states in the same way. Each stationary state describes a completely coherent process, where a particle in a perfect sinusoidal wave hits the target, and scatters outward, but because it is an energy eigenstate, the scattering is completely delocalized in time.
If you want a collision which is localized, you need to superpose, and the superposition produces a natural scattering event, where a wave-packet comes in, reflects and transmits, and goes out again. If the incoming wavepacked has an energy which is relatively sharply defined, all the properties of the scattering process can be extracted from the corresponding energy eigenstate.
Given the solutions to the stationary eigenstate problem $\psi_p(x)$ for each incoming momentum $p$, so that at large negative x, $\psi_p(x) = \exp(ipx) + A \exp(-ipx)$ and $\psi_p(x) = B\exp(ipx)$ at large positive x, superpose these waves in the same way as for a free particle
$$\int dp \psi_k(p) \psi_p(x) e^{-iE(p)t}$$
At large negative times, the phase is stationary only for the incoming part, not for the outgoing or reflected part. This is because each of the three parts describes free-particle motion, so if you understand where a free particle with that momentum would classically be at that time, this is where the wavepacket is nonzero. So at negative times, the wavepacket is centered at
$$ x = E'(k)t$$
For large positive t, there are two places where the phase is stationary--- those x where
$$ x = - E'(k) t$$
$$ x = E_2'(k) t$$
Where $E_2'(k)$ is the change in phase of the transmitted k-wave in time (it can be different from the energy if the potential has an asymptotically different value at $+\infty$ than at $-\infty$). These two stationary phase regions are where the reflected and transmitted packets are located. The coefficients of the reflected and transmitted packets are A and B. If A and B were of unit magnitude, the superposition would conserve probability. So the actual transmission and reflection probability for a wavepacket is the square of the magnitude of A and of B, as expected.
share|improve this answer
First suppose that the Hamiltonian $H(t) = H_0 + H_I(t)$ can be decomposed into free and interaction parts. It can be shown (I won't derive this equation here) that the retarded Green function for $H(t)$ obeys the equation $$G^{(+)}(t, t_0) = G_0^{(+)}(t, t_0) - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' G_0^{(+)}(t,t') H_I(t') G^{(+)}(t', t_0)$$ where $G_0^{(+)}$ is the retarded Green function for $H_0$. Letting this equation act on a state $\left| \psi(t_0) \right>$, this becomes $$\left| \psi(t) \right> = \left| \varphi(t) \right> - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' G_0^{(+)}(t,t') H_I(t')\left| \psi(t') \right> $$ where $\left| \varphi(t) \right> = G_0^{(+)}(t,t_0) \left| \psi(t_0) \right>$. Now, we suppose that until $t_0$ there is no interaction and so we can write $\left |\psi(t_0) \right>$ as a superposition of momentum eigenstates $$\left| \psi(t_0) \right> = \int {\rm d}^3 \mathbf p a(\mathbf p) e^{-{i \over \hbar} E t_0} \left| \mathbf p \right>.$$ A similar decomposition will also hold for $\left| \varphi(t) \right>$. This should inspire us to write $\left| \psi(t) \right >$ as $$\left| \psi(t) \right> = \int {\rm d}^3 \mathbf p a(\mathbf p) e^{-{i \over \hbar} E t} \left| \psi^{(+)}_{\mathbf p} \right>$$ where the states $\left| \psi^{(+)}_{\mathbf p} \right>$ are to be determined from the equation for $\left|\psi(t) \right>$. Now, the amazing thing (which I again won't derive due to the lack of space) is that these states are actually eigenstates of $H$: $$H \left| \psi^{(+)}_{\mathbf p} \right> = E \left| \psi^{(+)}_{\mathbf p} \right>$$ for $E = {\mathbf p^2 \over 2m}$ (here we assumed that the free part is simply $H_0 = {{\mathbf p}^2 \over 2m}$ and that $H_I(t)$ is independent of time).
Similarly, one can derive advanced eigenstates from advanced Green function $$H \left| \psi^{(-)}_{\mathbf p} \right> = E \left| \psi^{(-)}_{\mathbf p} \right>.$$
Now, in one dimension and for an interaction Hamiltonian of the form $\left< \mathbf x \right| H_I \left| \mathbf x' \right> = \delta(\mathbf x - \mathbf x') U(\mathbf x)$ it can be further shown that $$\psi^{(+)}_p \sim \begin{cases} e^{{i \over \hbar}px} + A(p) e^{-{i \over \hbar}px} \quad x< -a \cr B(p)e^{{i \over \hbar}px} \quad x> a \end{cases}$$ where $a$ is such that the potential vanishes for $|x| > a$ and $A(p)$ and $B(p)$ are coefficients fully determined by the potential $U(x)$. Similar discussion again applies for wavefunctions $\psi^{(-)}_p$. Thus we have succeeded in reducing the dynamical problem into a stationary problem by writing the non-stationary states $\psi(t, x)$ in the form of stationary $\psi^{(+)}_p(x)$.
share|improve this answer
-1 This answer is no good. You are turning off the scattering potential at $t=-\infty$ for no reason, the Hamiltonian in a scattering problem of the sort the OP is asking about is time independent. The answer is ridiculously formal, and all the interesting things are in the "it can be shown...". – Ron Maimon Aug 17 '11 at 2:32
@Ron: I don't quite understand your objection. Physically, the $t = -\infty$ part of the potential never matters in a scattering problem since particles are infinitely away from the potential (that is usually generated by their being close anyway). So this is only technicallity that I prefer to work with that doesn't change anything (rather, it's very convenient in more general situations). As for the "it can be shown" parts... well, I can show them but the answer would be twice as long. Will you remove the downvote if I include the derivations? And as for being formal... so what? – Marek Aug 17 '11 at 6:26
The answer to this is the same as the answer to why you solve the time-independent Schrödinger equation (TISE) to find the time evolution of a bound particle. First you solve the TISE to find the stationary states $\psi_n$, then you write the particle's wavefunction $\Psi(t=0)$ in terms of a superposition of the $\psi_n$. Since you know how the stationary states evolve in time, you now know (at least in principle) how ANY wavefunction evolves in time.
It's the same thing for scattering. You figure out what happens for the energy eigenstates, and now you know what will happen for any wavepacket (which you would write as a superposition of energy eigenstates, of course). And here it's even easier than the bound states: if all you care about is R and T, and your wavepacket has a narrow range of energies (for which T is nearly constant), then the value of T for your wavepacket is the same as what you just calculated for the energy eigenstate. Huzzah!
If your wavepacket involves a superposition of a wide range of energies, with a wide range of T's, then your life will be more complicated, of course. But in scattering experiments, folks usually try to employ nearly monoenergetic beams.
Because quantum mechanics classes spend so much time mired in the details of solving the TISE (either for scattering or bound states), they often lose sight of one of the motivations for solving the TISE: it's a tool for finding the time behavior of any initial condition.
share|improve this answer
I'm baffled by @Marek's statement that the Hamiltonian is explicitly time-dependent. It certainly doesn't need to be and often isn't. For instance, Rutherford scattering: $H=p^2/(2m)+q_1q_2/(4\pi\epsilon_0r)$. Note the absence of time dependence. In a scattering situation, the wavefunction is time-dependent, not generally the Hamiltonian. In any situation in which the Hamiltonian is explicitly time-dependent, the procedure described in the original question wouldn't work, so in the context of this question we're certainly assuming time-independent Hamiltonians. – Ted Bunn Jul 22 '11 at 18:49
@Ted: also note that the process Mark describes is not what AC describes in his answer. We don't evolve solutions in time at all. To give complete justification one needs to proceed as in the usual scattering theory (which is best dealt with in the Dirac picture and not Schrodinger picture). This is a huge subject and it certainly is not about simple solving of TISE (even though it can be reduced to this sometimes)... – Marek Jul 22 '11 at 19:36
I don't dispute any of this, but I don't think any of it is relevant to the question at hand. Note that it's explicitly about scattering from a static potential. One should be able to understand why the "usual" undergraduate quantum mechanics procedure for treating, e.g., Rutherford scattering, or scattering from a delta-function potential, or a square barrier gives the right answer. (Continued ...) – Ted Bunn Jul 22 '11 at 19:47
There's no need to introduce time-dependence in any of those cases: you could solve the time-dependent equation numerically for a wave packet, or you can solve the time-dependent Schrodinger equation analytically. As I understand it, Mark's question is why those two ways of treating the problem give the same answer. – Ted Bunn Jul 22 '11 at 19:49
@Ted: well, I was just trying to describe why the problem is about something else than simple solving of TISE. As for the real justification, I hinted at it in my comment under the question: it follows from the L-S equation. What AC describes is either another way of solving the scattering problem (and so irrelevant to the question) or a (wrong) justification of why the "usual" way works. Either way, I find this answer unsatisfactory. – Marek Jul 22 '11 at 20:12
There is already a detailed and correct derivation, so in my answer I will try to address the qualitative side of "why". In a scattering problem, there is always a hierarchy of well-separated scales. In your example of an alpha particle in the Rutherford experiment, you refer to localization in space, which means a certain spread in momentum/energy. However, as long as this spread is smaller than the characteristic energy scale on which the scattering amplitude changes, the time-independent treatment at well-defined energy should give correct results.
In terms of lengths, the scale separation required for the time-independent picture to work is that the wave packet of the alpha particle should be larger than the neighbourhood of the nucleus where the scattering happens. Typically this is the case -- if it is not, the alpha particle is likely to have a very uncertain (in the Heisenberg sense) energy/momentum.
share|improve this answer
Here I would like to expand some of the arguments given in Ron Maimon's nice answer.
i) Let us divide the 1D $x$-axis into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ in the middle region $II$ having compact support. (Clearly, there are physically relevant potentials that do not have compact support, e.g. the Coulomb potential, but this assumption simplifies the following discussion.)
ii) Time-independent and monochromatic. The particle is free in the regions $I$ and $III$, so we can solve the time-independent Schrödinger equation
$$\hat{H}\psi(x) ~=~E \psi(x), \qquad\qquad \hat{H}~=~ \frac{\hat{p}^2}{2m}+V(x),\qquad\qquad E> 0, \qquad\qquad (1)$$
exactly there. We know that the 2nd order linear ODE has two linearly independent solutions, which in the free regions $I$ and $III$ are plane waves
$$ \psi_{I}(x) ~=~ a^{+}_{I}(k)e^{ikx} + a^{-}_{I}(k)e^{-ikx}, \qquad\qquad k> 0, \qquad\qquad (2) $$ $$ \psi_{III}(x) ~=~ a^{+}_{III}(k)e^{ikx} + a^{-}_{III}(k)e^{-ikx}, \qquad\qquad (3) $$
Just from linearity of the Schrödinger equation, even without solving the middle region $II$, we know that the four coefficients $a^{\pm}_{I/III}(k)$ are constrained by two linear conditions. This observation leads, by the way, to the time-independent notion of the scattering $S$-matrix and the transfer $M$-matrix
$$ \begin{pmatrix} a^{-}_{I}(k) \\ a^{+}_{III}(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{III}(k) \end{pmatrix}. \qquad\qquad (4) $$
$$ \begin{pmatrix} a^{+}_{III}(k) \\ a^{-}_{III}(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{I}(k) \end{pmatrix}. \qquad\qquad (5) $$
see e.g. Griffiths' book, Introduction to Quantum Mechanics, Section 2.7, and this answer.
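As a minimal numerical sketch of the bookkeeping in eqs. $(4)$-$(5)$, here is the transfer matrix $M(k)$ assembled for an illustrative rectangular barrier; the function names and parameter values are my own assumptions, not part of the answer:

```python
import numpy as np

hbar = m = 1.0  # natural units

def interface(k1, k2, x):
    """Transfer matrix across an interface at x, from a region with
    wavenumber k1 into one with wavenumber k2 (continuity of psi, psi')."""
    r = k1 / k2
    return 0.5 * np.array(
        [[(1 + r) * np.exp(1j * (k1 - k2) * x), (1 - r) * np.exp(-1j * (k1 + k2) * x)],
         [(1 - r) * np.exp(1j * (k1 + k2) * x), (1 + r) * np.exp(-1j * (k1 - k2) * x)]])

def barrier_M(E, V0, a):
    """Total transfer matrix M of eq. (5) for V(x) = V0 on 0 < x < a."""
    k = np.sqrt(2 * m * E) / hbar
    q = np.lib.scimath.sqrt(2 * m * (E - V0)) / hbar  # imaginary for E < V0
    return interface(q, k, a) @ interface(k, q, 0.0)

E, V0, a = 1.0, 2.0, 1.5
M = barrier_M(E, V0, a)
t = 1.0 / M[1, 1]        # det M = 1, so the transmission amplitude is 1/M_22
print("T =", abs(t)**2)  # ~0.056, matching the textbook tunnelling formula
```

The two linear conditions of the text are exactly the two matrix rows: once $M(k)$ is known, setting $a^{-}_{III} = 0$ (nothing incident from the right) fixes the reflected and transmitted amplitudes.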
iii) Time-dependence of monochromatic wave. The dispersion relation reads
$$ \frac{E(k)}{\hbar} ~\equiv~\omega(k)~=~\frac{\hbar k^2}{2m}, \qquad\qquad (6) $$
The specific form $(6)$ of the dispersion relation will not matter in what follows. The full time-dependent monochromatic solution in the free regions I and III becomes $$ \Psi_r(x,t) ~=~ \sum_{\sigma=\pm}a^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t} ~=~\underbrace{e^{-i\omega(k)t}}_{\text{phase factor}} \Psi_r(x,0), \qquad r ~\in~ \{I, III\}. \qquad (7) $$
The solution $(7)$ is a sum of a right mover ($\sigma=+$) and a left mover ($\sigma=-$). For now the words right and left mover may be taken as semantic names without physical content. The solution $(7)$ is fully delocalized in the free regions I and III with the probability density $|\Psi_r(x,t)|^2$ independent of time $t$, so naively, it does not make sense to say that the waves are right or left moving, or even scatter. However, it turns out, we may view the monochromatic wave $(7)$ as a limit of a wave packet, and obtain a physical interpretation in that way, see next section.
iv) Wave packet. We now take a wave packet
$$ A^{\sigma}_r(k)~=~0 \qquad \text{for} \qquad |k-k_0| ~\geq~ K, \qquad\sigma~\in~\{\pm\}, \qquad r ~\in~ \{I, III\},\qquad (8) $$
narrowly peaked about some particular value $k_0$ in $k$-space,
$$|k-k_0| ~\leq~ K, \qquad\qquad (9)$$
where $K$ is some wave number scale, so that we may Taylor expand the dispersion relation
$$\omega(k)~=~ \omega(k_0) + v_g(k_0)(k-k_0) + {\cal O}\left((k-k_0)^2\right), \qquad\qquad (10) $$ and drop higher-order terms ${\cal O}\left((k-k_0)^2\right)$. Here
$$v_g(k)~:=~\frac{d\omega(k)}{dk}\qquad\qquad (11)$$
is the group velocity. The wave packet (in the free regions I and III) is a sum of a right and a left mover,
$$ \Psi_r(x,t)~=~ \Psi^{+}_r(x,t)+\Psi^{-}_r(x,t), \qquad\qquad r ~\in~ \{I, III\},\qquad\qquad (12) $$
$$ \Psi^{\sigma}_r(x,t)~:=~ \int dk~A^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t}, \qquad\qquad\sigma~\in~\{\pm\}, \qquad\qquad r ~\in~ \{I, III\}, $$ $$ ~\approx~ e^{i(k_0 v_g(k_0)-\omega(k_0))t} \int dk~A^{\sigma}_r(k)e^{ ik(\sigma x- v_g(k_0)t)}$$ $$~=~\underbrace{e^{i(k_0 v_g(k_0)-\omega(k_0))t}}_{\text{phase factor}} ~\Psi^{\sigma}_r(x-\sigma v_g(k_0)t,0).\qquad\qquad (13)$$
The right and left movers $\Psi^{\sigma}$ will be very long spread-out wave trains of sizes $\geq \frac{1}{K}$, but we are still able to identify via eq. $(13)$ their time evolution as just
1. a collective motion with group velocity $\sigma v_g(k_0)$, and
2. an overall time-dependent phase factor of modulus $1$, which is the same for the right and the left mover.
In the limit $K \to 0$, with $K >0$, the approximation $(10)$ becomes better and better, and we recover the time-independent monochromatic wave,
$$ A^{\sigma}_r(k) ~\longrightarrow ~a^{\sigma}_r(k_0)~\delta(k-k_0)\qquad \text{for} \qquad K\to 0. \qquad\qquad (14)$$
It thus makes sense to assign a group velocity to each of the $\pm$ parts of the monochromatic wave $(7)$, because it can be understood as an appropriate limit of the wave packet $(13)$.
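A short numerical check of eq. $(13)$ (a sketch; the values $k_0 = 5$, $K = 0.2$ are illustrative assumptions): build a narrow amplitude $A(k)$ and watch the packet's peak translate at the group velocity.

```python
import numpy as np

hbar = m = 1.0
k0, K = 5.0, 0.2                 # carrier wavenumber and narrow spread, K << k0
v_g = hbar * k0 / m              # group velocity from eq. (11), omega = hbar k^2 / (2m)

k = np.linspace(k0 - 5 * K, k0 + 5 * K, 801)
A = np.exp(-((k - k0) / K) ** 2)         # narrowly peaked amplitude, cf. eq. (8)
dk = k[1] - k[0]

def psi(x, t):
    """Right-mover of eq. (13): superpose exp(i k x - i omega(k) t) over k."""
    omega = hbar * k ** 2 / (2 * m)
    return (A * np.exp(1j * (k * x[:, None] - omega * t))).sum(axis=1) * dk

x = np.linspace(-50, 150, 2000)
for t in (0.0, 10.0, 20.0):
    peak = x[np.argmax(np.abs(psi(x, t)))]
    print(f"t = {t:4.1f}: peak at x = {peak:7.2f}, v_g * t = {v_g * t:7.2f}")
```

The peak tracks $v_g t$ and the envelope barely spreads, since the dropped ${\cal O}\left((k-k_0)^2\right)$ terms only matter at much later times.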
I also struggled to understand this myself. I think the reason it confuses many people is that they try to interpret the time-independent scattering wavefunction as describing one single collision of a particle with the target, and it is this interpretation which is not correct and leads to the confusion!
I think that the easiest way of seeing why the time-independent approach works lies in the definition of the scattering process which the wavefunction describes.
The time-independent scattering solution describes the situation in which the target is being continuously bombarded by a flux of non-interacting projectiles approaching with different impact parameters (this is how most scattering experiments work). Therefore the process you are trying to describe is stationary. This is the actual reason why the time-independent formulation works. You can see that from e.g. the classic book on scattering (Taylor: Scattering Theory), where the scattering process is defined (Chapter 3, section d) very clearly in terms of the continuous flux of incoming particles.
You can convince yourself that this interpretation of the time-independent scattering solution is indeed correct by simply noting that the probability flux (either incoming or outgoing) that you can calculate from the scattering wavefunction has the units of probability per unit time per unit area, i.e. it describes a stationary scattering process.
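A tiny sketch of that flux argument (the reflection amplitude value is an assumption for illustration): for $\psi = e^{ikx} + re^{-ikx}$ on one side of the target and $te^{ikx}$ on the other, the stationary current $j = \frac{\hbar}{m}\,\mathrm{Im}(\psi^*\psi')$ is independent of $x$ and $t$ and balances across the target.

```python
import numpy as np

hbar = m = 1.0
k = 1.3
r = 0.6 * np.exp(0.4j)           # illustrative reflection amplitude
t = np.sqrt(1 - abs(r) ** 2)     # |t|^2 fixed by flux conservation (real V)

def flux(psi, dpsi):
    """Stationary probability current j = (hbar/m) Im(psi* dpsi/dx)."""
    return (hbar / m) * np.imag(np.conj(psi) * dpsi)

x = 2.0                          # any point in a free region; j is x- and t-independent
psi_L = np.exp(1j * k * x) + r * np.exp(-1j * k * x)
dpsi_L = 1j * k * np.exp(1j * k * x) - 1j * k * r * np.exp(-1j * k * x)
psi_R = t * np.exp(1j * k * x)
dpsi_R = 1j * k * psi_R

print(flux(psi_L, dpsi_L), flux(psi_R, dpsi_R))  # equal: incoming minus reflected = transmitted
```

The interference cross terms on the left are real and drop out of the imaginary part, which is why the current, and hence the stationary interpretation, is time-independent.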
Sturm-Liouville problem
Sturm-Liouville problem, or eigenvalue problem, in mathematics, a certain class of ordinary differential equations, typically arising from the separation of variables in partial differential equations (PDEs), subject to extra constraints, known as boundary values, on the solutions. Such equations are common in both classical physics (e.g., thermal conduction) and quantum mechanics (e.g., the Schrödinger equation) to describe processes where some external value (boundary value) is held constant while the system of interest transmits some form of energy.
In the mid-1830s the French mathematicians Charles-François Sturm and Joseph Liouville independently worked on the problem of heat conduction through a metal bar, in the process developing techniques for solving a large class of such equations, the simplest of which take the form [p(x)y′]′ + [q(x) − λr(x)]y = 0, where y is some physical quantity (or the quantum mechanical wave function) and λ is a parameter, or eigenvalue, that constrains the equation so that y satisfies the boundary values at the endpoints of the interval over which the variable x ranges. If the functions p, q, and r satisfy suitable conditions, the equation will have a family of solutions, called eigenfunctions, corresponding to the eigenvalues λ.
For the more-complicated nonhomogeneous case in which the right side of the above equation is a function, f(x), rather than zero, the eigenvalues of the corresponding homogeneous equation can be compared with the eigenvalues of the original equation. If these values are different, the problem will have a unique solution. On the other hand, if one of these eigenvalues matches, the problem will have either no solution or a whole family of solutions, depending on the properties of the function f(x).
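The simplest concrete instance (a sketch I am adding for illustration) takes p = r = 1 and q = 0 on [0, π] with y(0) = y(π) = 0, whose exact eigenvalues are λ = 1, 4, 9, …; a finite-difference discretization recovers them numerically:

```python
import numpy as np

# Sturm-Liouville problem with p = r = 1, q = 0: y'' + lambda*y = 0
# on [0, pi], y(0) = y(pi) = 0.  Exact eigenvalues: 1, 4, 9, 16, ...
n = 500
h = np.pi / (n + 1)

# -y'' discretized with the standard second-difference stencil
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigvals = np.sort(np.linalg.eigvalsh(A))
print(eigvals[:4])   # approximately [1, 4, 9, 16]
```

The eigenfunctions here are sin(nx); for more general p, q, and r the same discretize-and-diagonalize recipe applies, only the matrix entries change.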
Advanced Mathematics for Engineers and Scientists/The Laplacian and Laplace's Equation
The Laplacian and Laplace's Equation
By now, you've most likely grown sick of the one dimensional transient diffusion PDE we've been playing with:

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$$
Make no mistake: we're not nearly done with this stupid thing; but for the sake of variety let's introduce a fresh new equation and, even though it's not strictly a separation of variables concept, a really cool quantity called the Laplacian. You'll like this chapter; it has many pretty pictures in it.
Graph of $u = x^3$.
The Laplacian
The Laplacian is a linear operator in Euclidean n-space. There are other spaces with properties different from Euclidean space. Note also that operator here has a very specific meaning: just as a function is sort of an operator on real numbers, our operator is an operator on functions, not on the real numbers.
We'll start with the 3D Cartesian "version". Let $u = u(x, y, z)$. The Laplacian of the function $u$ is defined and notated as:

$$\nabla^2 u = \Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}$$

So the operator is taking the sum of the nonmixed second derivatives of $u$ with respect to the Cartesian space variables $x$, $y$, $z$. The "del squared" notation $\nabla^2$ is preferred since the capital delta can be confused with increments and differences, and the written-out sum of partial derivatives is too long and doesn't involve pretty math symbols. The Laplacian is also known as the Laplace operator or Laplace's operator, not to be confused with the Laplace transform. Also, note that if we had only taken the first partial derivatives of the function $u$, and put them into a vector, that would have been the gradient of the function. The Laplacian takes the second unmixed derivatives and adds them up.
In one dimension, recall that the second derivative measures concavity. Suppose $u = u(x)$; if $u''$ is positive, $u$ is concave up, and if $u''$ is negative, $u$ is concave down, see the graph below with the straight up or down arrows at various points of the curve. The Laplacian may be thought of as a generalization of the concavity concept to multivariate functions.

This idea is demonstrated at the right, in one dimension: $u = x^3$. To the left of $x = 0$, the Laplacian (simply the second derivative here) is negative, and the graph is concave down. At $x = 0$, the curve inflects and the Laplacian is $0$. To the right of $x = 0$, the Laplacian is positive and the graph is concave up.
Concavity may or may not do it for you. Thankfully, there's another very important view of the Laplacian, with deep implications for any equation it shows itself in: the Laplacian compares the value of at some point in space to the average of the values of in the neighborhood of the same point. The three cases are:
• If $u$ is greater at some point than the average of its neighbors, $\nabla^2 u < 0$.
• If $u$ is at some point equal to the average of its neighbors, $\nabla^2 u = 0$.
• If $u$ is smaller at some point than the average of its neighbors, $\nabla^2 u > 0$.
So the Laplacian may be thought of as, at some point $\mathbf{x}_0$:

$$\nabla^2 u \propto (\text{average of } u \text{ near } \mathbf{x}_0) - u(\mathbf{x}_0)$$

The neighborhood of $\mathbf{x}_0$.
The neighborhood of some point $\mathbf{x}_0$ is defined as the open set that lies within some Euclidean distance δ (delta) from the point. Referring to the picture at right (a 3D example), the neighborhood of the point is the shaded region which satisfies:

$$0 < \|\mathbf{x} - \mathbf{x}_0\| < \delta$$
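This "value versus neighborhood average" reading is exact for the discrete 5-point Laplacian on a grid, which the following sketch demonstrates (the grid spacing and the Gaussian bump are assumptions for illustration):

```python
import numpy as np

# On a uniform grid the 5-point Laplacian is exactly
# (4 / h^2) * (average of the four neighbors - center value).
h = 0.05
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
u = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)   # a "hot spot" bump

center = u[1:-1, 1:-1]
nbr_avg = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]) / 4
lap = 4 * (nbr_avg - center) / h**2

i = j = center.shape[0] // 2     # at the peak, u exceeds its neighbors' average
print(lap[i, j])                 # negative, as the first bullet above predicts
```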
Note that our one dimensional transient diffusion equation, our parallel plate flow, involves the Laplacian:

$$\frac{\partial u}{\partial t} = \alpha \nabla^2 u$$
With this mentality, let's examine the behavior of this very important PDE. On the left is the time derivative and on the right is the Laplacian. This equation is saying that:
The rate of change of at some point is proportional to the difference between the average value of around that point and the value of at that point.
For example, if there's at some position a "hot spot" where $u$ is on average greater than its neighbors, the Laplacian will be negative and thus the time derivative will be negative; this will cause $u$ to decrease at that position, "cooling" it down. This is illustrated below. The arrows reflect the magnitude of the Laplacian and, by grace of the time derivative, the direction the curve will move.
Visualization of transient diffusion.
It's worth noting that in 3D, this equation fully describes the flow of heat in a homogeneous solid that's not generating its own heat (like too much electricity through a narrow wire would).
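A minimal numerical sketch of that cooling picture (the time step, grid, and initial bump are my assumptions): each explicit finite-difference step nudges every point toward the average of its neighbors.

```python
import numpy as np

alpha, h, dt = 1.0, 0.02, 1e-4       # dt < h^2 / (2 * alpha) for stability
x = np.arange(0, 1 + h, h)
u = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial "hot spot"

for _ in range(200):
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2   # discrete Laplacian
    u[1:-1] += dt * alpha * lap                    # u_t = alpha * u_xx step
    # endpoints u[0], u[-1] stay fixed (boundary values)

print(u.max())   # the peak has dropped below its initial value of 1
```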
Laplace's Equation
Laplace's equation describes a steady state condition, and this is what it looks like:

$$\nabla^2 u = 0$$
Solutions of this equation are called harmonic functions. Some things to note:
• Time is absent. This equation describes a steady state condition.
• The absence of time implies the absence of an IC, so we'll be dealing with BVPs rather than IBVPs.
• In one dimension, this is the ODE of a straight line passing through the boundaries at their specified values.
• All functions that satisfy this equation in some domain are analytic (informally, an analytic function is equal to its Taylor expansion) in that domain.
• Despite appearances, solutions of Laplace's equation are generally not minimal surfaces.
• Laplace's equation is linear.
Laplace's equation is separable in the Cartesian (and almost any other) coordinate system. So, we shouldn't have too much problem solving it if the BCs involved aren't too convoluted.
Laplace's Equation on a Square: Cartesian Coordinates
Steady state conditions on a square.
Imagine a 1 x 1 square plate that's insulated top and bottom and has constant temperatures applied at its uninsulated edges, visualized to the right. Heat is flowing in and out of this thing steadily through the edges only, and since it's "thin" and "insulated", the temperature may be given as $u = u(x, y)$. This is the first time we venture into two spatial coordinates; note the absence of time.
Let's make up a BVP, referring to the picture:
So we have one nonhomogeneous BC. Assume that $u(x, y) = X(x)Y(y)$:
As before, calling the separation constant $-\lambda^2$ in favor of just $\lambda$ (or something) happens to make the problem easier to solve. Note that the negative sign was kept for the $X$ equation: again, these choices happen to make things simpler. Solving each equation and combining them back into $u$:
At edge D:
Note that the constants can be merged, but we won't do it so that a point can be made in a moment. At edge A:
Taking $X = 0$ would satisfy this particular BC, however this would yield a plane solution of $u = 0$, which can't satisfy the temperature at edge C. This is why the constants weren't merged a few steps ago: to make it obvious that $X$ may not vanish identically. So, we instead take $Y$ to be $0$ on this edge to satisfy the above, and then combine the remaining constants into one, call it $c$:
Now look at edge B:
It should go without saying by now that $c$ can't be zero, since this would yield $u = 0$, which couldn't satisfy the nonzero BC. Instead, we can take $\sin(\lambda) = 0$, i.e. $\lambda = n\pi$ for positive integer $n$:
As of now, this solution will satisfy 3 of the 4 BCs. All that is left is edge C, the nonhomogeneous BC.
Neither $c$ nor $\lambda$ can be contorted to fit this BC.
Since Laplace's equation is linear, a linear combination of solutions to the PDE is also a solution to the PDE. Another thing to note: since the BCs (so far) are homogeneous, we can add the solutions without worrying about nonzero boundaries adding up.
Though $u$ as shown above will not solve this problem, we can try summing (based on $n$) solutions to form a linear combination which might solve the BVP as a whole:
Assuming this form is correct (review Parallel Plate Flow: Realistic IC for motivation), let's again try applying the last BC:
It looks like it needs Fourier series methodology. Finding $c_n$ via orthogonality should solve this problem:
25 term partial sum of the series solution.
$\cos(n\pi)$ was changed to $(-1)^n$ in the last step. Also, for integer $n$, $\sin(n\pi) = 0$. Note that a Fourier sine expansion has been done. The solution to the BVP can finally be assembled:
That solves it!
It's finally time to mention that the BCs are discontinuous at two corners of the square. As a result, the series should converge slowly at those points. This is clear from the plot at right: it's a 25 term partial sum (note that half of the terms are $0$), and it looks perfect except along the nonhomogeneous edge, especially near the discontinuities at its corners.
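For concreteness, here is the partial sum evaluated numerically, assuming (my assumption, for illustration) that the nonhomogeneous edge C carries $u = 1$ at $y = 1$ and the other three edges are held at $0$:

```python
import numpy as np

def u(x, y, terms=25):
    """Partial sum of the separated solution on the unit square,
    assuming u = 1 on y = 1 and u = 0 on the other three edges."""
    total = np.zeros_like(x, dtype=float)
    for n in range(1, terms + 1):
        c = 2 * (1 - (-1) ** n) / (n * np.pi)   # Fourier sine coefficients of 1
        total += c * np.sin(n * np.pi * x) * np.sinh(n * np.pi * y) / np.sinh(n * np.pi)
    return total

xs = np.linspace(0.1, 0.9, 5)
print(u(xs, np.full_like(xs, 0.99)))   # near 1 in the interior; Gibbs ringing near corners
```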
Laplace's Equation on a Circle: Polar Coordinates
Now, we'll specify the value of $u$ on a circular boundary. A circle can be represented in Cartesian coordinates without too much trouble; however, it would result in nonlinear BCs, which would render the approach useless. Instead, polar coordinates should be used, since in such a system the equation of a circle is very simple. In order for this to be realized, a polar representation of the Laplacian is necessary. Without going into the details just yet, the Laplacian is given in (2D) polar coordinates:

$$\nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}$$
This result may be derived using differentials and the chain rule; it's not difficult but it's a little long. In these coordinates Laplace's equation reads:

$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} = 0$$
Note that in going from Cartesian to polar coordinates, a price was paid: though still linear, Laplace's equation now has variable coefficients. This implies that after separation at least one of the ODEs will have variable coefficients as well.
Let's make up the following BVP, letting $u = u(r, \theta)$:
This could represent a physical problem analogous to the previous one: replace the square plate with a disc. Note the apparent absence of sufficient BCs to obtain a unique solution. The funny looking statement that u is bounded inside the domain of interest turns out to be the key to getting a unique solution, and it often shows itself in polar coordinates. It "makes up" for the "lack" of BCs. To separate, we as usual incorrectly assume that $u(r, \theta) = R(r)\Theta(\theta)$:
Once again, the way the negative sign and the separation constant are arranged makes the solution easier later on. These decisions are made mostly by trial and error.
The $R$ equation is probably one you've never seen before; it's a special case of the Euler differential equation (not to be confused with the Euler-Lagrange differential equation). There are a couple of ways to solve it; the most general method would be to change the variables so that an equation with constant coefficients is obtained. An easier way would be to note the pattern in the order of the coefficients and the order of the derivatives, and from there guess a power solution. Either way, the general solution to this simple case of Euler's ODE is given as:

$$R(r) = c_1 r^{\lambda} + c_2 r^{-\lambda}$$

This is a very good example problem since it goes to show that PDE problems very often turn into obscure ODE problems; we got lucky this time since the solution for $R$ was rather simple though its ODE looked pretty bad at first sight. The solution to the $\Theta$ equation is:

$$\Theta(\theta) = c_3 \cos(\lambda\theta) + c_4 \sin(\lambda\theta)$$

Now, this is where the English sentence condition stating that u must be bounded in the domain of interest may be invoked. As $r \to 0$, the term involving $r^{-\lambda}$ is unbounded. The only way to fix this is to take $c_2 = 0$. Note that if this problem were solved between two concentric circles, this term would be nonzero and very important. With that term gone, constants can be merged:
Only one condition remains: on $r = 1$, yet there are 3 constants. Let's say for now that the BC contains only a single frequency:
Then, it's a simple matter of equating coefficients to obtain:
Now, let's make the frequencies differ:
Equating coefficients won't work. However, if the BC were broken up into individual terms, the sum of the solutions to the terms just happens to solve the BVP as a whole:
Verify that the solution above is really equal to the BC at $r = 1$:
And, since Laplace's equation is linear, this must solve the PDE as well. What all of this implies is that, if some generic function $f(\theta)$ may be expressed as a sum of sinusoids with integer angular frequencies, all that is needed is a linear combination of the appropriate solutions. Notated:

$$u(r, \theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty} r^n \left(a_n \cos(n\theta) + b_n \sin(n\theta)\right)$$

To identify the coefficients, substitute the BC:

$$u(1, \theta) = f(\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left(a_n \cos(n\theta) + b_n \sin(n\theta)\right)$$

The coefficients $a_n$ and $b_n$ may be determined by a (full) Fourier expansion of $f(\theta)$. Note that it's implied that $f(\theta)$ must have period $2\pi$, since we are solving this in a domain (a circle specifically) where $\theta$ and $\theta + 2\pi$ describe the same point.
You probably don't like infinite series solutions. Well, it happens that through a variety of manipulations it's possible to express the full solution of this particular problem as:

$$u(r, \theta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{(1 - r^2)\, f(\phi)}{1 - 2r\cos(\theta - \phi) + r^2}\, d\phi$$
This is called Poisson's integral formula.
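As a quick numerical sanity check (the boundary function $f(\theta) = \cos(3\theta)$ is an illustrative assumption), the integral reproduces the corresponding separated solution $r^3\cos(3\theta)$:

```python
import numpy as np

def u_poisson(r, theta, f, nquad=2000):
    """Evaluate Poisson's integral formula on the unit disc by quadrature."""
    phi = np.linspace(0, 2 * np.pi, nquad, endpoint=False)
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - phi) + r**2)
    return np.mean(kernel * f(phi))      # mean over phi = (1/2pi) * integral

f = lambda phi: np.cos(3 * phi)          # boundary condition u(1, theta)
r, theta = 0.5, 1.0
print(u_poisson(r, theta, f))            # matches the series solution:
print(r**3 * np.cos(3 * theta))          # r^3 cos(3 theta)
```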
Derivation of the Laplacian in Polar Coordinates
Though not necessarily a PDEs concept, it is very important for anyone studying this kind of math to be comfortable with going from one coordinate system to the next. What follows is a long derivation of the Laplacian in 2D polar coordinates using the multivariable chain rule and the concept of differentials. Know, however, that there are really many ways to do this.
Three definitions are all we need to begin:

$$x = r\cos\theta, \qquad y = r\sin\theta, \qquad u = u(r, \theta)$$

If it's known that $r = r(x, y)$ and $\theta = \theta(x, y)$, then the chain rule may be used to express derivatives in terms of $r$ and $\theta$ alone. Two applications will be necessary to obtain the second derivatives. Manipulating operators as if they meant something on their own:

$$\frac{\partial}{\partial x} = \frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial \theta}{\partial x}\frac{\partial}{\partial \theta}$$
Applying this to itself, treating the underlined bit as a unit dependent on $r$ and $\theta$:
The above mess may be quickly simplified a little by manipulating the funny looking derivatives:
This may be made slightly easier to work with if a few changes are made to the way some of the derivatives are written. Also, the $y$ variable follows analogously:
Now we need to obtain expressions for some of the derivatives appearing above. The most direct path would use the concept of differentials. If:

$$dx = \cos\theta\, dr - r\sin\theta\, d\theta, \qquad dy = \sin\theta\, dr + r\cos\theta\, d\theta$$

Solving by substitution for $dr$ and $d\theta$ gives:

$$dr = \cos\theta\, dx + \sin\theta\, dy, \qquad d\theta = -\frac{\sin\theta}{r}\, dx + \frac{\cos\theta}{r}\, dy$$

If $f = f(x, y)$, then the total differential is given as:

$$df = \frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy$$

Note that the two previous equations are of this form (recall that $r = r(x, y)$ and $\theta = \theta(x, y)$, just like $f$ above), which means that:

$$dr = \frac{\partial r}{\partial x}\, dx + \frac{\partial r}{\partial y}\, dy, \qquad d\theta = \frac{\partial \theta}{\partial x}\, dx + \frac{\partial \theta}{\partial y}\, dy$$

Equating coefficients quickly yields a bunch of derivatives:

$$\frac{\partial r}{\partial x} = \cos\theta, \qquad \frac{\partial r}{\partial y} = \sin\theta, \qquad \frac{\partial \theta}{\partial x} = -\frac{\sin\theta}{r}, \qquad \frac{\partial \theta}{\partial y} = \frac{\cos\theta}{r}$$
There's an easier but more abstract way to obtain the derivatives above that may be overkill but is worth mentioning anyway. The Jacobian of the functions $x(r, \theta)$ and $y(r, \theta)$ is:

$$J = \begin{pmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}$$

Note that the Jacobian is a compact representation of the coefficients of the total derivative; using $d\mathbf{x} = J\, d\mathbf{r}$ as an example (bold indicating vectors). So, it follows then that the derivatives that we're interested in may be obtained by inverting the Jacobian matrix:

$$J^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\dfrac{\sin\theta}{r} & \dfrac{\cos\theta}{r} \end{pmatrix}$$
Though somewhat obscure, this is very convenient, and it's just one of the many utilities of the Jacobian matrix. An interesting bit of insight is gained: coordinate changes are senseless unless the Jacobian is invertible everywhere except at isolated points; stated another way, the determinant of the Jacobian matrix must be nonzero, otherwise the coordinate change is not one-to-one (note that the determinant will be zero at $r = 0$ in this example. An isolated point such as this is not problematic.).
Either path you take, there should now be enough information to evaluate the Cartesian second derivatives. Working on $\dfrac{\partial^2}{\partial x^2}$:

Proceeding similarly for $\dfrac{\partial^2}{\partial y^2}$:

Now, add these tirelessly hand crafted differential operators and watch the result collapse into just 3 nontrigonometric terms:

$$\nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}$$
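Before moving on, a quick symbolic check of the result (a sketch I am adding; it verifies the polar form by feeding it functions known to be harmonic, such as $r^n\cos(n\theta)$ and $\ln r$):

```python
import sympy as sp

r, t = sp.symbols("r theta", positive=True)

def polar_laplacian(u):
    """The polar Laplacian derived above: u_rr + u_r / r + u_tt / r^2."""
    return sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, t, 2) / r**2

# r^n cos(n*theta) is the real part of (x + i y)^n, hence harmonic:
for n in range(1, 5):
    print(n, sp.simplify(polar_laplacian(r**n * sp.cos(n * t))))   # 0 each time

# ln(r) is the radially symmetric harmonic function in 2D:
print(sp.simplify(polar_laplacian(sp.log(r))))                     # 0
```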
That was a lot of work. To save trouble, here is the Laplacian in two other popular coordinate systems.

Cylindrical coordinates $(r, \theta, z)$:

$$\nabla^2 u = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} + \frac{\partial^2 u}{\partial z^2}$$

Spherical coordinates $(\rho, \phi, \theta)$, with $\phi$ the polar angle:

$$\nabla^2 u = \frac{1}{\rho^2}\frac{\partial}{\partial \rho}\left(\rho^2\frac{\partial u}{\partial \rho}\right) + \frac{1}{\rho^2\sin\phi}\frac{\partial}{\partial \phi}\left(\sin\phi\,\frac{\partial u}{\partial \phi}\right) + \frac{1}{\rho^2\sin^2\phi}\frac{\partial^2 u}{\partial \theta^2}$$
Derivatives have been combined wherever possible (not done previously).
Concluding Remarks
This was a long, involved chapter. It should be clear that the solutions derived work only for very simple geometries, other geometries may be worked with by grace of conformal mappings.
The Laplacian (and variations of it) is a very important quantity and its behaviour is worth knowing like the back of your hand. A sampling of important equations that involve the Laplacian:
• The Navier Stokes equations.
• The diffusion equation.
• Laplace's equation.
• Poisson's equation.
• The Helmholtz equation.
• The Schrödinger equation.
• The wave equation.
There's a couple of other operators that are similar to (though less important than) the Laplacian, which deserve mention:
• Biharmonic operator, in three Cartesian dimensions:

$$\nabla^4 u = \frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4} + \frac{\partial^4 u}{\partial z^4} + 2\frac{\partial^4 u}{\partial x^2 \partial y^2} + 2\frac{\partial^4 u}{\partial y^2 \partial z^2} + 2\frac{\partial^4 u}{\partial x^2 \partial z^2}$$

The biharmonic equation is useful in linear elastic theory; for example, it can describe "creeping" fluid flow:

$$\nabla^4 \psi = 0$$

• d'Alembertian:

$$\Box u = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} - \nabla^2 u$$

The wave equation may be expressed using the d'Alembertian:

$$\Box u = 0$$

Though expressing it with the Laplacian is more popular:

$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$
History of mathematical notation
The history of mathematical notation[1] includes the commencement, progress, and cultural diffusion of mathematical symbols and the conflict of the methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation[2] comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and symbolic operators.[3] The history includes Hindu–Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries.
The development of mathematical notation can be divided into stages.[4][5] The "rhetorical" stage is where calculations are performed by words and no symbols are used.[6] The "syncopated" stage is where frequently used operations and quantities are represented by symbolic syntactical abbreviations. From ancient times through the post-classical age,[note 1] bursts of mathematical creativity were often followed by centuries of stagnation. As the early modern age opened and the worldwide spread of knowledge began, written examples of mathematical developments came to light. The "symbolic" stage is where comprehensive systems of notation supersede rhetoric. Beginning in Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This symbolic system was in use by medieval Indian mathematicians and in Europe since the middle of the 17th century,[7] and has continued to develop in the contemporary era.
The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and, the focus here, the investigation into the mathematical methods and notation of the past.
Rhetorical stage
See also: Measurement
Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to it were largely indebted to the previous investigations of the Ancient Egyptians and Ancient Phoenicians. The distinctive feature of numerical notation, i.e. symbols having local as well as intrinsic values (arithmetic), implies a state of civilization at the period of its invention. Our knowledge of the mathematical attainments of these early peoples, to which this section is devoted, is imperfect, and the following brief notes should be regarded as a summary of the conclusions which seem most probable; the history of mathematics begins with the symbolic sections.
Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world; algebra started with methods of solving problems in arithmetic.
There can be no doubt that most early peoples which have left records knew something of numeration and mechanics, and that a few were also acquainted with the elements of land-surveying. In particular, the Egyptians paid attention to geometry and numbers, and the Phoenicians to practical arithmetic, book-keeping, navigation, and land-surveying. The results attained by these people seem to have been accessible, under certain conditions, to travelers. It is probable that the knowledge of the Egyptians and Phoenicians was largely the result of observation and measurement, and represented the accumulated experience of many ages.
Beginning of notation
Written mathematics began with numbers expressed as tally marks, with each tally representing a single unit. The numerical symbols consisted probably of strokes or notches cut in wood or stone, and intelligible alike to all nations.[note 2] For example, one notch in a bone represented one animal, or person, or anything else. The peoples with whom the Greeks of Asia Minor (amongst whom notation in western history begins) were likely to have come into frequent contact were those inhabiting the eastern littoral of the Mediterranean: and Greek tradition uniformly assigned the special development of geometry to the Egyptians, and that of the science of numbers[note 3] either to the Egyptians or to the Phoenicians.
The Ancient Egyptians had a symbolic notation which was the numeration by Hieroglyphics.[8][9] The Egyptian mathematics had a symbol for one, ten, one-hundred, one-thousand, ten-thousand, one-hundred-thousand, and one-million. Smaller digits were placed on the left of the number, as they are in Hindu–Arabic numerals. Later, the Egyptians used hieratic instead of hieroglyphic script to show numbers. Hieratic was more like cursive and replaced several groups of symbols with individual ones. For example, the four vertical lines used to represent four were replaced by a single horizontal line. This is found in the Rhind Mathematical Papyrus (c. 2000–1800 BC) and the Moscow Mathematical Papyrus (c. 1890 BC). The system the Egyptians used was adopted and modified by many other civilizations in the Mediterranean. The Egyptians also had symbols for basic operations: legs going forward represented addition, and legs walking backward represented subtraction.
The Mesopotamians had symbols for each power of ten.[10] Later, they wrote their numbers in almost exactly the same way as in modern times. Instead of having symbols for each power of ten, they would just put the coefficient of that number. Each digit was separated by only a space, but by the time of Alexander the Great, they had created a symbol that represented zero and was a placeholder. The Mesopotamians also used a sexagesimal system, that is, base sixty. It is this system that is used in modern times when measuring time and angles. Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s.[11] Written in Cuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians and the system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period.[12]
The majority of Mesopotamian clay tablets date from 1800 to 1600 BC, and cover topics which include fractions, algebra, quadratic and cubic equations, and the calculation of regular reciprocal pairs.[13] The tablets also include multiplication tables and methods for solving linear and quadratic equations. The Babylonian tablet YBC 7289 gives an approximation of √2 accurate to five decimal places. Babylonian mathematics was written using a sexagesimal (base-60) numeral system. From this derives the modern day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of minutes and seconds of arc to denote fractions of a degree. Babylonian advances in mathematics were facilitated by the fact that 60 has many divisors: the reciprocal of any integer which is a multiple of divisors of 60 has a finite expansion in base 60. (In decimal arithmetic, only reciprocals of multiples of 2 and 5 have finite decimal expansions.) Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values, much as in the decimal system. They lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context.
Syncopated stage
Archimedes Thoughtful
by Fetti (1620)
The last words attributed to Archimedes are "Do not disturb my circles",[note 4] a reference to the circles in the mathematical drawing that he was studying when disturbed by the Roman soldier.
The history of mathematics cannot with certainty be traced back to any school or period before that of the Ionian Greeks, but the subsequent history may be divided into periods, the distinctions between which are tolerably well marked. Greek mathematics, which originated with the study of geometry, tended from its commencement to be deductive and scientific. Since the fourth century AD, Pythagoras has commonly been given credit for discovering the Pythagorean theorem, a theorem in geometry that states that in a right-angled triangle the area of the square on the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares of the other two sides.[note 5] The ancient mathematical texts are available with the prior mentioned Ancient Egyptians notation and with Plimpton 322 (Babylonian mathematics c. 1900 BC). The study of mathematics as a subject in its own right begins in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek μάθημα (mathema), meaning "subject of instruction".[14]
Plato's influence has been especially strong in mathematics and the sciences. He helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic", now called number theory and "logistic", now called arithmetic. Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics.[15] Aristotle is credited with what later would be called the law of excluded middle.
Abstract Mathematics[16] is what treats of magnitude[note 6] or quantity, absolutely and generally conferred, without regard to any species of particular magnitude, such as Arithmetic and Geometry, In this sense, abstract mathematics is opposed to mixed mathematics; wherein simple and abstract properties, and the relations of quantities primitively considered in mathematics, are applied to sensible objects, and by that means become intermixed with physical considerations; Such are Hydrostatics, Optics, Navigation, &c.[16]
Archimedes is generally considered to be the greatest mathematician of antiquity and one of the greatest of all time.[17][18] He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi.[19] He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution and an ingenious system for expressing very large numbers.
Euclid's Elements
Propositions 31, 32, and 33 of Euclid's Book XI, located in vol. 2 of the manuscript, sheets 207 to 208 recto.
In the historical development of geometry, the steps in the abstraction of geometry were made by the ancient Greeks. Euclid's Elements is the earliest extant documentation of the axioms of plane geometry, though Proclus tells of an earlier axiomatisation by Hippocrates of Chios.[20] Euclid's Elements (c. 300 BC) is one of the oldest extant Greek mathematical treatises[note 7] and consisted of 13 books written in Alexandria, collecting theorems proven by other mathematicians, supplemented by some original work.[note 8] The document is a successful collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. Euclid's first theorem is a lemma that possesses properties of prime numbers. The influential thirteen books cover Euclidean geometry, geometric algebra, and the ancient Greek version of algebraic systems and elementary number theory. It was ubiquitous in the Quadrivium and is instrumental in the development of logic, mathematics, and science.
Diophantus of Alexandria was author of a series of books called Arithmetica, many of which are now lost. These texts deal with solving algebraic equations. Boethius provided a place for mathematics in the curriculum in the 6th century when he coined the term quadrivium to describe the study of arithmetic, geometry, astronomy, and music. He wrote De institutione arithmetica, a free translation from the Greek of Nicomachus's Introduction to Arithmetic; De institutione musica, also derived from Greek sources; and a series of excerpts from Euclid's Elements. His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works.[21][22]
Acrophonic and Milesian numeration
The Greeks employed Attic numeration,[23] which was based on the system of the Egyptians and was later adapted and used by the Romans. Greek numerals one through four were vertical lines, as in the hieroglyphics. The symbol for five was the Greek letter Π (pi), the first letter of the Greek word for five, pente. Numbers six through nine were pente with vertical lines next to it. Ten was represented by the first letter (Δ) of the word for ten, deka, one hundred by the first letter of the word for hundred, etc.
The Ionian numeration used their entire alphabet including three archaic letters. The numeral notation of the Greeks, though far less convenient than that now in use, was formed on a perfectly regular and scientific plan,[24] and could be used with tolerable effect as an instrument of calculation, to which purpose the Roman system was totally inapplicable. The Greeks divided the twenty-four letters of their alphabet into three classes, and, by adding another symbol to each class, they had characters to represent the units, tens, and hundreds. (Jean Baptiste Joseph Delambre's Astronomie Ancienne, t. ii.)
Α (α) Β (β) Г (γ) Δ (δ) Ε (ε) Ϝ (ϝ) Z (ζ) H (η) θ (θ) I (ι) K (κ) Λ (λ) Μ (μ) Ν (ν) Ξ (ξ) Ο (ο) Π (π) Ϟ (ϟ) Ρ (ρ) Σ (σ) Τ (τ) Υ (υ) Ф (φ) Χ (χ) Ψ (ψ) Ω (ω) Ϡ (ϡ)
1 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100 200 300 400 500 600 700 800 900
This system appeared in the third century BC, before the letters digamma (Ϝ), koppa (Ϟ), and sampi (Ϡ) became obsolete. When lowercase letters became differentiated from upper case letters, the lower case letters were used as the symbols for notation. Multiples of one thousand were written as the nine numbers with a stroke in front of them: thus one thousand was ",α", two-thousand was ",β", etc. M (for μὐριοι, as in "myriad") was used to multiply numbers by ten thousand. For example, the number 88,888,888 would be written as M,ηωπη*ηωπη[25]
Greek mathematical reasoning was almost entirely geometric (albeit often used to reason about non-geometric subjects such as number theory), and hence the Greeks had no interest in algebraic symbols. The great exception was Diophantus of Alexandria, the great algebraist.[26] His Arithmetica was one of the texts to use symbols in equations. It was not completely symbolic, but was much more so than previous books. An unknown number was called ς.[27] The square of ς was Δ^y; the cube was Κ^y; the fourth power was Δ^yΔ; and the fifth power was ΔΚ^y.[28][note 9]
Chinese mathematical notation
Main article: Suzhou numerals
The numbers 0–9 in Chinese huāmǎ (花碼) numerals
The Chinese used numerals that look much like the tally system.[29] Numbers one through four were horizontal lines. Five was an X between two horizontal lines; it looked almost exactly the same as the Roman numeral for ten. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
In the history of the Chinese, there were those who were familiar with the sciences of arithmetic, geometry, mechanics, optics, navigation, and astronomy. Mathematics in China emerged independently by the 11th century BC.[30] It is indeed almost certain that the Chinese were acquainted with several geometrical or rather architectural implements;[note 10] with mechanical machines;[note 11] that they knew of the characteristic property of the magnetic needle; and were aware that astronomical events occurred in cycles. Chinese of that time had made attempts to classify or extend the rules of arithmetic or geometry which they knew, and to explain the causes of the phenomena with which they were acquainted beforehand. The Chinese independently developed very large and negative numbers, decimals, a place value decimal system, a binary system, algebra, geometry, and trigonometry.
Counting rod numerals
Chinese mathematics made early contributions, including a place value system.[31][32] The geometrical theorem known to the ancient Chinese was applicable in certain cases (namely the ratio of sides).[note 12] Geometrical theorems which can be demonstrated in the quasi-experimental way of superposition were also known to them. In arithmetic their knowledge seems to have been confined to the art of calculation by means of the swan-pan, and the power of expressing the results in writing. Our knowledge of the early attainments of the Chinese, slight though it is, is more complete than in the case of most of their contemporaries. It is thus instructive, and serves to illustrate the fact, that a nation may possess considerable skill in the applied arts while our knowledge of the later mathematics on which those arts are founded may be scarce. Knowledge of Chinese mathematics before 254 BC is somewhat fragmentary, and even after this date the manuscript traditions are obscure. Dates centuries before the classical period are generally considered conjectural by Chinese scholars unless accompanied by verified archaeological evidence.
As in other early societies, the focus was on astronomy in order to perfect the agricultural calendar and other practical tasks, and not on establishing formal systems. The Chinese Board of Mathematics' duties were confined to the annual preparation of an almanac, the dates and predictions in which it regulated. Ancient Chinese mathematicians did not develop an axiomatic approach, but made advances in algorithm development and algebra. The achievement of Chinese algebra reached its zenith in the 13th century, when Zhu Shijie invented the method of four unknowns.
Modern artist's impression of Shen Kuo.
The state of trigonometry in China slowly began to change and advance during the Song Dynasty (960–1279), where Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendrical science and astronomical calculations.[34] The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs.[34] Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316).[35] As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy.[36][37] The mathematical science of the Chinese would incorporate the work and teaching of Arab missionaries with knowledge of spherical trigonometry who had come to China in the course of the thirteenth century.
Indian mathematical notation
Although the origin of our present system of numerical notation is ancient, there is no doubt that it was in use among the Hindus over two thousand years ago. The algebraic notation of the Indian mathematician, Brahmagupta, was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend (the number to be subtracted), and division by placing the divisor below the dividend, similar to our notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms.[38] The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, likely evolved over the course of the first millennium AD in India and was transmitted to the west via Islamic mathematics.[39][40]
Hindu–Arabic numerals and notations
A page from al-Khwārizmī's Algebra
Despite their name, Arabic numerals actually started in India. The reason for this misnomer is that Europeans saw the numerals used in an Arabic book, Concerning the Hindu Art of Reckoning, by Mohommed ibn-Musa al-Khwarizmi. Al-Khwārizmī wrote several important books on the Hindu–Arabic numerals and on methods for solving equations. His book On the Calculation with Hindu Numerals, written about 825, along with the work of Al-Kindi,[note 13] were instrumental in spreading Indian mathematics and Indian numerals to the West. Al-Khwarizmi did not claim the numerals as Arabic, but over several Latin translations, the fact that the numerals were Indian in origin was lost. The word algorithm is derived from the Latinization of Al-Khwārizmī's name, Algoritmi, and the word algebra from the title of one of his works, Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa'l-muqābala (The Compendious Book on Calculation by Completion and Balancing).
Islamic mathematics developed and expanded the mathematics known to Central Asian civilizations.[41] Al-Khwārizmī gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots,[42] and Al-Khwārizmī was to teach algebra in an elementary form and for its own sake.[43] Al-Khwārizmī also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described as al-jabr.[44] His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." Al-Khwārizmī also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."[45]
Al-Karaji, in his treatise al-Fakhri, extends the methodology to incorporate integer powers and integer roots of unknown quantities.[note 14][46] The historian of mathematics, F. Woepcke,[47] praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Also in the 10th century, Abul Wafa translated the works of Diophantus into Arabic. Ibn al-Haytham would develop analytic geometry. Al-Haytham derived the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. Al-Haytham performed an integration in order to find the volume of a paraboloid, and was able to generalize his result for the integrals of polynomials up to the fourth degree.[note 15][48] In the late 11th century, Omar Khayyam would develop algebraic geometry, wrote Discussions of the Difficulties in Euclid,[note 16] and wrote on the general geometric solution to cubic equations. Nasir al-Din Tusi (Nasireddin) made advances in spherical trigonometry. Muslim mathematicians during this period added the decimal point notation to the Arabic numerals.
Many Greek and Arabic texts on mathematics were then translated into Latin, which led to further development of mathematics in medieval Europe. In the 12th century, scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's[note 17] and the complete text of Euclid's Elements.[note 18][49][50] One of the European books that advocated using the numerals was Liber Abaci, by Leonardo of Pisa, better known as Fibonacci. Liber Abaci is better known for the mathematical problem Fibonacci wrote in it about a population of rabbits. The growth of the population ended up being a Fibonacci sequence, where a term is the sum of the two preceding terms.
Abū al-Hasan ibn Alī al-Qalasādī (1412–1482) was the last major medieval Arab algebraist, who improved on the algebraic notation earlier used by Ibn al-Yāsamīn in the 12th century and, in the Maghreb, by Ibn al-Banna in the 13th century.[51] In contrast to the syncopated notations of their predecessors, Diophantus and Brahmagupta, which lacked symbols for mathematical operations,[52] al-Qalasadi's algebraic notation was the first to have symbols for these functions and was thus "the first steps toward the introduction of algebraic symbolism." He represented mathematical symbols using characters from the Arabic alphabet.[51]
Symbolic stage
Symbols by popular introduction date
Early arithmetic and multiplication
The 1489 use of the plus and minus signs in print.
The 14th century saw the development of new mathematical concepts to investigate a wide range of problems.[53] The two widely used arithmetic symbols are addition and subtraction, + and −. The plus sign was in use by 1360 by Nicole Oresme[54][note 19] in his work Algorismus proportionum.[55] It is thought to be an abbreviation for "et", meaning "and" in Latin, in much the same way the ampersand sign also began as "et". Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of the distance covered by a body undergoing uniformly accelerated motion, asserting that the area under the line depicting the constant acceleration represented the total distance traveled.[56] The minus sign was used in 1489 by Johannes Widmann in Mercantile Arithmetic or Behende und hüpsche Rechenung auff allen Kauffmanschafft.[57] Widmann used the minus symbol with the plus symbol to indicate deficit and surplus, respectively.[58] In Summa de arithmetica, geometria, proportioni e proportionalità,[note 20][59] Luca Pacioli used symbols for plus and minus and the book contained algebra.[note 21]
In the 15th century, Ghiyath al-Kashi computed the value of π to the 16th decimal place. Kashi also had an algorithm for calculating nth roots.[note 22] In 1533, Regiomontanus's table of sines and cosines was published.[60] Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. The radical symbol[note 23] for square root was introduced by Christoph Rudolff.[note 24] Michael Stifel's important work Arithmetica integra[61] contained important innovations in mathematical notation. In 1556, Niccolò Tartaglia used parentheses for precedence grouping. In 1557 Robert Recorde published The Whetstone of Witte, which used the equal sign (=) as well as plus and minus signs for the English reader. In 1564, Gerolamo Cardano analyzed games of chance, beginning the early stages of probability theory. In 1572 Rafael Bombelli published his L'Algebra, in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin's book De Thiende ('the art of tenths'), published in Dutch in 1585, contained a systematic treatment of decimal notation, which influenced all later work on the real number system. The New algebra (1591) of François Viète introduced the modern notational manipulation of algebraic expressions. For navigation and accurate maps of large areas, trigonometry grew to be a major branch of mathematics. Bartholomaeus Pitiscus coined the word "trigonometry", publishing his Trigonometria in 1595.
John Napier is best known as the inventor of logarithms[note 25][62] and made common the use of the decimal point in arithmetic and mathematics.[63][64] After Napier, Edmund Gunter created the logarithmic scales (lines, or rules) upon which slide rules are based; it was William Oughtred who used two such scales sliding by one another to perform direct multiplication and division, and he is credited as the inventor of the slide rule in 1622. In 1631 Oughtred introduced the multiplication sign (×), his proportionality sign,[note 26] and the abbreviations sin and cos for the sine and cosine functions.[65] Albert Girard also used the abbreviations 'sin', 'cos' and 'tan' for the trigonometric functions in his treatise.
Johannes Kepler was one of the pioneers of the mathematical applications of infinitesimals.[note 27] René Descartes is credited as the father of analytical geometry, the bridge between algebra and geometry,[note 28] crucial to the discovery of infinitesimal calculus and analysis. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry.[note 29] Blaise Pascal influenced mathematics throughout his life. His Traité du triangle arithmétique ("Treatise on the Arithmetical Triangle") of 1653 described a convenient tabular presentation for binomial coefficients.[note 30] Pierre de Fermat and Blaise Pascal would investigate probability.[note 31] John Wallis introduced the infinity symbol.[note 32] He similarly used this notation for infinitesimals.[note 33] In 1657, Christiaan Huygens published the treatise on probability, On Reasoning in Games of Chance.[note 34][66]
Johann Rahn introduced the division symbol (obelus) and the therefore sign in 1659. William Jones used π in Synopsis palmariorum matheseos[67] in 1706 because it is the first letter of the Greek word perimetron (περιμετρον), which means perimeter in Greek. This usage was popularized in 1737 by Euler. In 1734, Pierre Bouguer used a double horizontal bar below the inequality sign.[68]
Derivatives notation: Leibniz and Newton
Derivative notations
The study of linear algebra emerged from the study of determinants, which were used to solve systems of linear equations. Calculus had two main systems of notation, one created by each of its creators: that developed by Isaac Newton and the notation developed by Gottfried Leibniz. Leibniz's is the notation used most often today. Newton's was simply a dot or dash placed above the function.[note 35] In modern usage, this notation generally denotes derivatives of physical quantities with respect to time, and is used frequently in the science of mechanics. Leibniz, on the other hand, used the letter d as a prefix to indicate differentiation, and introduced the notation representing derivatives as if they were a special type of fraction.[note 36] This notation makes explicit the variable with respect to which the derivative of the function is taken. Leibniz also created the integral symbol.[note 37] The symbol is an elongated S, representing the Latin word Summa, meaning "sum": when finding areas under curves, integration is often illustrated by dividing the area into infinitely many tall, thin rectangles whose areas are added.
High division operators and functions
See also: modern age
Letters of the alphabet in this time were to be used as symbols of quantity; and although much diversity existed with respect to the choice of letters, there were to be several universally recognized rules in the following history.[24] Here thus in the history of equations the first letters of the alphabet were indicatively known as coefficients, the last letters the unknown terms (an incerti ordinis). In algebraic geometry, again, a similar rule was to be observed, the last letters of the alphabet there denoting the variable or current coordinates. Certain letters, such as π, e, etc., were by universal consent appropriated as symbols of the frequently occurring numbers 3.14159... and 2.7182818...,[note 38] etc., and their use in any other acceptation was to be avoided as much as possible.[24] Letters, too, were to be employed as symbols of operation, and with them other previously mentioned arbitrary operation characters. The letters d and the elongated s (∫) were to be appropriated as operative symbols in the differential calculus and integral calculus, and Δ and ∑ in the calculus of differences.[24] In functional notation, a letter, as a symbol of operation, is combined with another which is regarded as a symbol of quantity.[24][note 39]
Beginning in 1718, Thomas Twinin used the division slash (solidus), deriving it from the earlier Arabic horizontal fraction bar. Pierre-Simon, marquis de Laplace developed the widely used Laplacian differential operator.[note 40] In 1750, Gabriel Cramer developed "Cramer's Rule" for solving linear systems.
Euler and prime notations
Leonhard Euler's signature
Leonhard Euler was one of the most prolific mathematicians in history, and also a prolific inventor of canonical notation. His contributions include his use of e to represent the base of natural logarithms. It is not known exactly why e was chosen, but it was probably because the first four letters of the alphabet were already commonly used to represent variables and other constants. Euler used π to represent pi consistently. The use of π was suggested by William Jones, who used it as shorthand for perimeter. Euler used i to represent the square root of negative one,[note 41] although he earlier used it as an infinite number.[note 42][note 43] For summation, Euler used sigma, Σ.[note 44] For functions, Euler used the notation f(x) to represent a function of x. In 1730, Euler introduced the gamma function.[note 45] In 1736, Euler produced his paper on the Seven Bridges of Königsberg[69] regarding topology.
The mathematician William Emerson[70] would develop the proportionality sign.[note 46][note 47][71][72] Much later, for abstract expressions of the value of various proportional phenomena, the parts-per notation would become useful as a set of pseudo-units to describe small values of miscellaneous dimensionless quantities. Marquis de Condorcet, in 1768, advanced the partial differential sign.[note 48] In 1771, Alexandre-Théophile Vandermonde deduced the importance of topological features when discussing the properties of knots related to the geometry of position. Between 1772 and 1788, Joseph-Louis Lagrange re-formulated the formulas and calculations of classical "Newtonian" mechanics, called Lagrangian mechanics. The prime symbol for derivatives is also due to Lagrange.
Gauss, Hamilton, and Matrix notations[edit]
At the turn of the 19th century, Carl Friedrich Gauss developed the identity sign ≡ for the congruence relation and, in his work on quadratic reciprocity, the integral-part notation. Gauss contributed work on functions of complex variables, in geometry, and on the convergence of series. He gave the first satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law. Gauss developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.[73] He would also develop the product sign ∏. Also in this time, Niels Henrik Abel and Évariste Galois[note 50] conducted their work on the solvability of equations, linking group theory and field theory.
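Gauss's congruence sign survives unchanged; a typical statement in this notation reads:

```latex
% Congruence modulo n: a and b leave the same remainder on division by n
a \equiv b \pmod{n}
% for example, 17 \equiv 5 \pmod{12}
```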
In the early 1800s, Christian Kramp would promote factorial notation during his research on the generalized factorial function, which applied to non-integers.[74] Joseph Diaz Gergonne introduced the set inclusion signs.[note 51] Peter Gustav Lejeune Dirichlet developed Dirichlet L-functions to give the proof of Dirichlet's theorem on arithmetic progressions, and began analytic number theory.[note 52] In 1828, Gauss proved his Theorema Egregium (Latin for "remarkable theorem"), establishing a fundamental property of surfaces. In the 1830s, George Green developed Green's function. In 1829, Carl Gustav Jacob Jacobi published Fundamenta nova theoriae functionum ellipticarum with his elliptic theta functions. By 1841, Karl Weierstrass, the "father of modern analysis", had elaborated on the concept of absolute value and the determinant of a matrix.
Matrix notation would be more fully developed by Arthur Cayley in his three papers on subjects suggested by reading the Mécanique analytique[75] of Lagrange and some of the works of Laplace. Cayley defined matrix multiplication and matrix inverses. Cayley used a single letter to denote a matrix,[76] thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants,[77] and wrote: "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[78]
William Rowan Hamilton would introduce the nabla symbol[note 54] for vector differentials.[79][80] This was previously used by Hamilton as a general-purpose operator sign.[81] Hamilton reformulated Newtonian mechanics, now called Hamiltonian mechanics. This work has proven central to the modern study of classical field theories such as electromagnetism. This was also important to the development of quantum mechanics.[note 55] In mathematics, he is perhaps best known as the inventor of quaternion notation[note 56] and biquaternions. Hamilton also introduced the word "tensor" in 1846.[82][note 57] James Cockle would develop the tessarines[note 58] and, in 1849, coquaternions. In 1848, James Joseph Sylvester introduced into matrix algebra the term matrix.[note 59]
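The defining relations of Hamilton's quaternion notation are usually written as:

```latex
% Hamilton's fundamental quaternion relations (1843)
i^2 = j^2 = k^2 = ijk = -1, \qquad
q = a + b\,i + c\,j + d\,k \quad (a, b, c, d \in \mathbb{R})
```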
Maxwell, Clifford, and Ricci notations[edit]
James Clerk Maxwell
Maxwell's most prominent achievement was to formulate a set of equations that united previously unrelated observations, experiments, and equations of electricity, magnetism, and optics into a consistent theory.[83]
In 1864 James Clerk Maxwell reduced all of the then-current knowledge of electromagnetism into a linked set of differential equations with 20 equations in 20 variables, contained in "A Dynamical Theory of the Electromagnetic Field".[84] (See Maxwell's equations.) The method of calculation which it is necessary to employ was given by Lagrange, and afterwards developed, with some modifications, by Hamilton. It is usually referred to as Hamilton's principle; when the equations in the original form are used they are known as Lagrange's equations. In 1871, Maxwell presented his Remarks on the mathematical classification of physical quantities.[85] Also in 1871, Richard Dedekind called a set of real or complex numbers which is closed under the four arithmetic operations a "field".
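Maxwell's own presentation was componentwise; the compact four-equation vector form familiar today owes much to the later vector notation of Heaviside and Gibbs discussed below. As a modern rendering (SI units, in vacuum with sources):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```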
In 1878, William Kingdon Clifford published his Elements of Dynamic.[86] Clifford would develop split-biquaternions,[note 60] which he called algebraic motors. Clifford obviated quaternion study by separating the dot product and cross product of two vectors from the complete quaternion notation.[note 61] This approach made vector calculus available to engineers and others working in three dimensions and skeptical of the lead–lag effect[note 62] in the fourth dimension.[note 63] Between 1880 and 1887, Oliver Heaviside developed the operational calculus[87] (involving the D notation for the differential operator, which he is credited with creating), a method of solving differential equations by transforming them into ordinary algebraic equations; it caused a great deal of controversy when introduced, owing to the lack of rigour in his derivation of it.[note 64] The common vector notations are used when working with vectors, which are spatial or more abstract members of vector spaces. The angle notation (or phasor notation) is a notation used in electronics.
In 1881, Leopold Kronecker defined what he called a "domain of rationality", which is a field extension of the field of rational numbers in modern terms.[88] In 1882, Hüseyin Tevfik Paşa wrote the book titled "Linear Algebra".[89][90] Lord Kelvin's aetheric atom theory (1860s) led Peter Guthrie Tait, in 1885, to publish a topological table of knots with up to ten crossings, known as the Tait conjectures. In 1893, Heinrich M. Weber gave the first clear definition of an abstract field.[note 65] Tensor calculus was developed by Gregorio Ricci-Curbastro between 1887 and 1896, presented in 1892 under the title absolute differential calculus,[91] and the contemporary usage of "tensor" was stated by Woldemar Voigt in 1898.[92] In 1895, Henri Poincaré published Analysis Situs.[93] In 1897, Charles Proteus Steinmetz would publish Theory and Calculation of Alternating Current Phenomena, with the assistance of Ernst J. Berg.[94]
From formula mathematics to tensors[edit]
In 1895 Giuseppe Peano issued his Formulario mathematico,[95] an effort to digest mathematics into terse text based on special symbols. He would provide a definition of a vector space and linear map. He would also introduce the intersection sign, the union sign, the membership sign (is an element of), and the existential quantifier[note 67] (there exists). Peano showed his work to Bertrand Russell at a Paris conference in 1900; it so impressed Russell that he too was taken with the drive to render mathematics more concisely. The result was Principia Mathematica, written with Alfred North Whitehead. This treatise marks a watershed in modern literature, where symbols became dominant.[note 68] Ricci-Curbastro and Tullio Levi-Civita popularized the tensor index notation around 1900.[96]
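The symbols Peano introduced remain standard; a sentence using all four might read:

```latex
% Peano's symbols: existential quantifier, membership, intersection, union
\exists x \; \bigl( x \in (A \cap B) \cup C \bigr)
% read: "there exists an x that is an element of (A intersect B) union C"
```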
Mathematical logic and abstraction[edit]
At the beginning of this period, Felix Klein's "Erlangen program" identified the underlying theme of various geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra. Georg Cantor[note 69] would introduce the aleph symbol for cardinal numbers of transfinite sets.[note 70] His notation for the cardinal numbers was the Hebrew letter ℵ (aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω (omega). This notation is still in use today, in ordinal notation: a finite sequence of symbols from a finite alphabet which names an ordinal number according to some scheme which gives meaning to the language. His theory created a great deal of controversy. Cantor would, in his study of Fourier series, consider point sets in Euclidean space.
After the turn of the 20th century, Josiah Willard Gibbs would, in physical chemistry, introduce the middle dot for the dot product and the multiplication sign for cross products. He would also supply notation for the scalar and vector products, which was introduced in Vector Analysis. In 1904, Ernst Zermelo promoted the axiom of choice and his proof of the well-ordering theorem.[97] Bertrand Russell would shortly afterward introduce logical disjunction (OR) in 1906. Also in 1906, Poincaré would publish On the Dynamics of the Electron,[98] and Maurice Fréchet introduced metric spaces.[99] Later, Gerhard Kowalewski and Cuthbert Edmund Cullis[100][101][102] would successively introduce matrix notation: the parenthetical matrix and the box matrix notation, respectively. After 1907, mathematicians[note 71] studied knots from the point of view of the knot group and invariants from homology theory.[note 72] In 1908, Joseph Wedderburn's structure theorems were formulated for finite-dimensional algebras over a field. Also in 1908, Ernst Zermelo proposed the "definite" property and the first axiomatic set theory, Zermelo set theory. In 1910 Ernst Steinitz published the influential paper Algebraic Theory of Fields.[note 73][note 74] In 1911, Steinmetz would publish Theory and Calculation of Transient Electric Phenomena and Oscillations.
Albert Einstein in 1921
Albert Einstein, in 1916, introduced the Einstein notation,[note 75] which summed over a set of indexed terms in a formula, thus achieving notational brevity. Arnold Sommerfeld would create the contour integral sign in 1917. Also in 1917, Dimitry Mirimanoff proposed the axiom of regularity. In 1919, Theodor Kaluza would solve the general relativity equations using five dimensions; electromagnetic equations would emerge from the results.[103] This would be published in 1921 in "Zum Unitätsproblem der Physik".[104] In 1922, Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Also in 1922, Zermelo–Fraenkel set theory was developed. In 1923, Steinmetz would publish Four Lectures on Relativity and Space. Around 1924, Jan Arnoldus Schouten would develop the modern notation and formalism for the Ricci calculus framework during the application of the absolute differential calculus to general relativity and differential geometry in the early twentieth century.[note 76][105][106][107] In 1925, Enrico Fermi would describe a system comprising many identical particles that obey the Pauli exclusion principle, afterwards developing a diffusion equation (the Fermi age equation). In 1926, Oskar Klein would develop the Kaluza–Klein theory. In 1928, Emil Artin abstracted ring theory with Artinian rings. In 1933, Andrey Kolmogorov introduced the Kolmogorov axioms. In 1937, Bruno de Finetti deduced the "operational subjective" concept.
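In the summation convention, a repeated index implies summation, so that, for indices ranging over {1, 2, 3}:

```latex
y = c_i x^i
\quad \text{abbreviates} \quad
y = \sum_{i=1}^{3} c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3
```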
Mathematical symbolism[edit]
Mathematical abstraction began as a process of extracting the underlying essence of a mathematical concept,[108][109] removing any dependence on real-world objects with which it might originally have been connected,[110] and generalizing it so that it has wider applications or matches other abstract descriptions of equivalent phenomena. Two abstract areas of modern mathematics are category theory and model theory. Bertrand Russell[111] said, "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say". One can, though, substitute mathematics for real-world objects, wander off through equation after equation, and build a conceptual structure which has no relation to reality.[112]
Symbolic logic studies the purely formal properties of strings of symbols. The interest in this area springs from two sources. First, the notation used in symbolic logic can be seen as representing the words used in philosophical logic. Second, the rules for manipulating symbols found in symbolic logic can be implemented on a computing machine. Symbolic logic is usually divided into two subfields, propositional logic and predicate logic. Other logics of interest include temporal logic, modal logic and fuzzy logic. The area of symbolic logic called propositional logic, also called propositional calculus, studies the properties of sentences formed from constants[note 77] and the logical operators ∧, ∨, →, ↔, and ¬. The corresponding logical operations are known, respectively, as conjunction, disjunction, material conditional, biconditional, and negation. These operators are denoted as keywords[note 78] and by symbolic notation.
Some of the mathematical logic notation introduced during this time included the set of symbols used in Boolean algebra, created by George Boole in 1854. Boole himself did not see logic as a branch of mathematics, but it has come to be encompassed anyway. Symbols found in Boolean algebra include ∧ (AND), ∨ (OR), and ¬ (not). With these symbols, and letters to represent different truth values, one can make logical statements such as a ∨ ¬a, that is, "(a is true OR a is not true) is true", meaning it is true that a is either true or not true (i.e. false). Boolean algebra has many practical uses as it is, but it also was the start of what would be a large set of symbols to be used in logic.[note 79] Predicate logic, originally called predicate calculus, expands on propositional logic by the introduction of variables[note 80] and by sentences containing variables, called predicates.[note 81] In addition, predicate logic allows quantifiers.[note 82] With these logic symbols and additional quantifiers from predicate logic,[note 83] valid proofs can be made that are highly artificial,[note 84] but syntactical.[note 85]
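As a small illustration of the second motivation cited above (that symbolic rules can be implemented on a machine), the following Python sketch enumerates a truth table for the tautology a ∨ ¬a; the function name is illustrative, not from any particular library:

```python
from itertools import product

def truth_table(expr, variables):
    """Print one row per assignment of truth values to the variables."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        print(env, "->", expr(**env))

# The tautology a OR (NOT a) evaluates to True for every assignment.
truth_table(lambda a: a or (not a), ["a"])
```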
Gödel incompleteness notation[edit]
While proving his incompleteness theorems,[note 86] Kurt Gödel created an alternative to the symbols normally used in logic. He used Gödel numbers: numbers that represented logical operations with fixed numbers, and variables with the prime numbers greater than 10. With Gödel numbers, logic statements can be broken down into a number sequence. Gödel then took this one step further, taking the first n prime numbers and putting them to the power of the numbers in the sequence. These numbers were then multiplied together to get the final product, giving every logic statement its own number.[114][note 87]
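A minimal sketch of this encoding in Python follows; the particular symbol-to-number assignment here is hypothetical (Gödel's own tables differ), but the prime-power construction is the one described above:

```python
def first_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# Hypothetical symbol coding (Goedel's actual assignment differs):
# operations get fixed small numbers, variables get primes greater than 10.
CODES = {"(": 1, ")": 2, "~": 3, "=": 4, "E": 5, "x": 11, "y": 13}

def godel_number(symbols):
    """Encode a symbol sequence as a product of prime powers:
    the k-th prime raised to the code of the k-th symbol."""
    n = 1
    for p, s in zip(first_primes(len(symbols)), symbols):
        n *= p ** CODES[s]
    return n

# Every statement receives a unique number, recoverable by factorization.
print(godel_number(list("(Ex)(x=~y)")))
```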
Contemporary notation and topics[edit]
Early 20th-century notation[edit]
Abstraction of notation is an ongoing process, and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. Various set notations would be developed for fundamental object sets. Around 1924, David Hilbert and Richard Courant published "Methods of mathematical physics. Partial differential equations".[115] In 1926, Oskar Klein and Walter Gordon proposed the Klein–Gordon equation to describe relativistic particles.[note 88] The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Adrien Maurice Dirac, who, during the 1920s, was first able to compute the coefficient of spontaneous emission of an atom.[116] In 1928, the relativistic Dirac equation was formulated by Dirac to explain the behavior of the relativistically moving electron.[note 89] Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, and Werner Heisenberg, and an elegant formulation of quantum electrodynamics due to Enrico Fermi,[117] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles.
In 1931, Alexandru Proca developed the Proca equation (an Euler–Lagrange equation)[note 90] for the vector meson theory of nuclear forces and the relativistic quantum field equations. John Archibald Wheeler in 1937 developed the S-matrix. Studies by Felix Bloch with Arnold Nordsieck,[118] and Victor Weisskopf,[119] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[120] At higher orders in the series, infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.
In the 1930s, the double-struck capital Z for integer number sets was created by Edmund Landau. Nicolas Bourbaki created the double-struck capital Q for rational number sets. In 1935, Gerhard Gentzen introduced the universal quantifier. In 1936, the undefinability theorem was stated and proved by Alfred Tarski.[note 91] In 1938, Gödel proposed the constructible universe in the paper "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". André Weil and Nicolas Bourbaki would develop the empty set sign ∅ in 1939. That same year, Nathan Jacobson would coin the double-struck capital C for complex number sets.
Around the 1930s, Voigt notation[note 92] would be developed for multilinear algebra as a way to represent a symmetric tensor by reducing its order. Schönflies notation[note 93] became one of two conventions used to describe point groups (the other being Hermann–Mauguin notation). Also in this time, van der Waerden notation[121][122] became popular for the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. Arend Heyting would introduce Heyting algebra and Heyting arithmetic.
The arrow, e.g., →, was developed for function notation in 1936 by Øystein Ore to denote images of specific elements.[note 94][note 95] Later, in 1940, it took its present form, e.g., f: X → Y, through the work of Witold Hurewicz. Werner Heisenberg, in 1941, proposed the S-matrix theory of particle interactions.
Paul Dirac, pictured here, made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics.
Bra–ket notation (Dirac notation) is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals. It is so called because the inner product (or dot product on a complex vector space) of two states is denoted by a ⟨bra|ket⟩[note 96] consisting of a left part, ⟨φ|, and a right part, |ψ⟩. The notation was introduced in 1939 by Paul Dirac,[123] though the notation has precursors in Grassmann's use of the notation [φ|ψ] for his inner products nearly 100 years previously.[124]
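In this notation the inner product, an outer product, and an expansion in a basis are written:

```latex
\langle \phi | \psi \rangle, \qquad
| \psi \rangle \langle \phi |, \qquad
| \psi \rangle = \sum_n c_n \, | n \rangle, \quad c_n = \langle n | \psi \rangle
```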
Bra–ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics (including a large portion of modern physics) is usually explained with the help of bra–ket notation. The notation establishes an abstract representation-independence: a specific representation (e.g., the x basis, the p basis, or an eigenfunction basis) can be produced without much ado about, or excessive reliance on, the nature of the linear spaces involved. The overlap expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. The Feynman slash notation (Dirac slash notation[125]) was developed by Richard Feynman for the study of Dirac fields in quantum field theory.
In 1948, Valentine Bargmann and Eugene Wigner proposed the relativistic Bargmann–Wigner equations to describe free particles; the equations are in the form of multi-component spinor field wavefunctions. In 1950, William Vallance Douglas Hodge presented "The topological invariants of algebraic varieties" at the Proceedings of the International Congress of Mathematicians. Between 1954 and 1957, Eugenio Calabi worked on the Calabi conjecture for Kähler metrics and the development of Calabi–Yau manifolds. In 1957, Tullio Regge formulated the mathematical property of potential scattering in the Schrödinger equation.[note 97] Stanley Mandelstam, along with Regge, did the initial development of the Regge theory of strong interaction phenomenology. In 1958, Murray Gell-Mann and Richard Feynman, along with George Sudarshan and Robert Marshak, deduced the chiral structures of the weak interaction in physics. Geoffrey Chew, along with others, would promote S-matrix notation for the strong interaction, and the associated bootstrap principle, in 1960. In the 1960s, set-builder notation was developed for describing a set by stating the properties that its members must satisfy. Also in the 1960s, tensors were abstracted within category theory by means of the concept of a monoidal category. Later, multi-index notation simplified conventional notation used in multivariable calculus, partial differential equations, and the theory of distributions, by abstracting the concept of an integer index to an ordered tuple of indices.
Modern mathematical notation[edit]
In the modern mathematics of special relativity, electromagnetism and wave theory, the d'Alembert operator[note 98][note 99] is the Laplace operator of Minkowski space. The Levi-Civita symbol[note 100] is used in tensor calculus.
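With one common sign convention (conventions differ by an overall sign), these two symbols are written:

```latex
% d'Alembert (wave) operator on Minkowski space
\Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2
% Levi-Civita symbol, e.g. expressing a cross product in index notation
(\mathbf{a} \times \mathbf{b})_i = \varepsilon_{ijk}\, a_j b_k
```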
After the fully Lorentz-covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics, Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965.[126] Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Quantum electrodynamics has served as the model and template for subsequent quantum field theories. Building on work by Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. In the late 1960s, the particle zoo was composed of the then-known elementary particles, before the discovery of quarks.
Standard model of elementary particles.
The fundamental fermions and the fundamental bosons. (c.2008)[note 101] Based on the proprietary publication, Review of Particle Physics.[note 102]
A step towards the Standard Model was Sheldon Glashow's discovery, in 1960, of a way to combine the electromagnetic and weak interactions.[127] In 1967, Steven Weinberg[128] and Abdus Salam[129] incorporated the Higgs mechanism[130][131][132] into Glashow's electroweak theory, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons. Also in 1967, Bryce DeWitt published his equation under the name "Einstein–Schrödinger equation" (later renamed the "Wheeler–DeWitt equation").[133] In 1969, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind described space and time in terms of strings. In 1970, Pierre Ramond developed two-dimensional supersymmetries. Michio Kaku and Keiji Kikkawa would afterwards formulate string field theory. In 1972, Michael Artin, Alexandre Grothendieck, and Jean-Louis Verdier proposed the Grothendieck universe.[134]
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973,[135][136][137][138] the electroweak theory became widely accepted, and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74. With the establishment of quantum chromodynamics came a finalized set of fundamental and exchange particles, which allowed for the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravity, and which remains generally accepted within the domain to which it is designed to be applied. In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. The orbifold notation system, invented by Thurston, has been developed for representing types of symmetry groups in two-dimensional spaces of constant curvature. In 1978, Shing-Tung Yau proved the Calabi conjecture, showing that Calabi–Yau manifolds have Ricci-flat metrics. In 1979, Daniel Friedan showed that the equations of motion of string theory are abstractions of the Einstein equations of general relativity.
The first superstring revolution comprises mathematical equations developed between 1984 and 1986. In 1984, Vaughan Jones deduced the Jones polynomial; subsequent contributions from Edward Witten, Maxim Kontsevich, and others revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. According to string theory, all particles in the "particle zoo" have a common ancestor, namely a vibrating string. In 1985, Philip Candelas, Gary Horowitz,[139] Andrew Strominger, and Edward Witten would publish "Vacuum configurations for superstrings".[140] Later, the tetrad formalism (tetrad index notation) would be introduced as an approach to general relativity that replaces the choice of a coordinate basis by the less restrictive choice of a local basis for the tangent bundle.[note 103][141]
In the 1990s, Roger Penrose would propose Penrose graphical notation (tensor diagram notation) as a, usually handwritten, visual depiction of multilinear functions or tensors.[142] Penrose would also introduce abstract index notation.[note 104] In 1995, Edward Witten suggested M-theory and subsequently used it to explain some observed dualities, initiating the second superstring revolution.[note 105]
John H. Conway, prolific mathematician and inventor of notation.
John Conway would further various notations, including the Conway chained arrow notation, the Conway notation of knot theory, and the Conway polyhedron notation. The Coxeter notation system classifies symmetry groups, describing the angles between the fundamental reflections of a Coxeter group. It uses a bracketed notation, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and Norman Johnson defined it more comprehensively.
Combinatorial LCF notation[note 106] has been developed for the representation of cubic graphs that are Hamiltonian.[143][144] The cycle notation is the convention for writing down a permutation in terms of its constituent cycles;[145] this is also called circular notation, and the permutation is called a cyclic or circular permutation.[146]
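For example, the permutation of {1, ..., 5} sending 1→2, 2→3, 3→1, 4→5, and 5→4 is written in cycle notation as:

```latex
\sigma = (1\;2\;3)(4\;5)
```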
Computers and markup notation[edit]
In 1931, IBM produced the IBM 601 Multiplying Punch, an electromechanical machine that could read two numbers, up to 8 digits long, from a card and punch their product onto the same card.[147] In 1934, Wallace Eckert used a rigged IBM 601 Multiplying Punch to automate the integration of differential equations.[148] In 1936, Alan Turing published "On Computable Numbers, With an Application to the Entscheidungsproblem".[149][note 107] John von Neumann, pioneer of the digital computer and of computer science,[note 108] in 1945, wrote the incomplete First Draft of a Report on the EDVAC. In 1962, Kenneth E. Iverson developed a notation for manipulating arrays, which became known as Iverson notation; he taught it to his students and described it in his book A Programming Language. In 1970, E. F. Codd proposed relational algebra as a relational model of data for database query languages. In 1971, Stephen Cook published "The complexity of theorem proving procedures".[150] In the 1970s, within computer architecture, quote notation was developed as a representation of rational numbers. Also in this decade, the Z notation (just like the APL language, long before it) adopted many non-ASCII symbols; the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are presently various C mathematical functions (Math.h) and numerical libraries; these are libraries used in software development for performing numerical calculations. Such calculations can also be handled by symbolic execution: analyzing a program to determine what inputs cause each part of a program to execute. Mathematica and SymPy are examples of computational software programs based on symbolic mathematics.
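As an illustration of the symbolic computation such systems perform, a short session using SymPy's documented API differentiates and integrates symbolically rather than numerically:

```python
import sympy as sp

x = sp.symbols("x")

# Leibniz-style symbolic differentiation and integration.
print(sp.diff(sp.sin(x) * sp.exp(x), x))                # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)
```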
Future of mathematical notation[edit]
Main article: Future of mathematics
A section of a quintic Calabi–Yau three-fold (3D projection); recalling atomic vortex theory.
In the history of mathematical notation, ideographic symbol notation has come full circle with the rise of computer visualization systems. The notations can be applied to abstract visualizations, such as rendering some projections of a Calabi–Yau manifold. Examples of abstract visualization which properly belong to the mathematical imagination can be found in computer graphics. The need for such models abounds; for example, when the measures for the subject of study are actually random variables and not really ordinary mathematical functions.
See also[edit]
Main relevance
Abuse of notation, Well-formed formula, Big O notation (L-notation), Dowker notation, Hungarian notation, Infix notation, Positional notation, Polish notation (Reverse Polish notation), Sign-value notation, Subtractive notation, History of writing numbers
Numbers and quantities
List of numbers, Irrational and suspected irrational numbers, γ, ζ(3), √2, √3, √5, φ, ρ, δS, α, e, π, δ, Physical constants, c, ε0, h, G, Greek letters used in mathematics, science, and engineering
General relevance
Order of operations, Scientific notation (Engineering notation), Actuarial notation
Dot notation
Chemical notation (Lewis dot notation (Electron dot notation)), Dot-decimal notation
Arrow notation
Knuth's up-arrow notation, infinitary combinatorics (Arrow notation (Ramsey theory))
Projective geometry, Affine geometry, Finite geometry
Lists and outlines
Outline of mathematics (Mathematics history topics and Mathematics topics (Mathematics categories)), Mathematical theories ( First-order theories, Theorems and Disproved mathematical ideas), Mathematical proofs (Incomplete proofs), Mathematical identities, Mathematical series, Mathematics reference tables, Mathematical logic topics, Mathematics-based methods, Mathematical functions, Transforms and Operators, Points in mathematics, Mathematical shapes, Knots (Prime knots and Mathematical knots and links), Inequalities, Mathematical concepts named after places, Mathematical topics in classical mechanics, Mathematical topics in quantum theory, Mathematical topics in relativity, String theory topics, Unsolved problems in mathematics, Mathematical jargon, Mathematical examples, Mathematical abbreviations, List of mathematical symbols
Hilbert's problems, Mathematical coincidence, Chess notation, Line notation, Musical notation (Dotted note), Whyte notation, Dice notation, recursive categorical syntax
Mathematicians (Amateur mathematicians and Female mathematicians), Thomas Bradwardine, Thomas Harriot, Felix Hausdorff, Gaston Julia, Helge von Koch, Paul Lévy, Aleksandr Lyapunov, Benoit Mandelbrot, Lewis Fry Richardson, Wacław Sierpiński, Saunders Mac Lane, Paul Cohen, Gottlob Frege, G. S. Carr, Robert Recorde, Bartel Leendert van der Waerden, G. H. Hardy, E. M. Wright, James R. Newman, Carl Gustav Jacob Jacobi, Roger Joseph Boscovich, Eric W. Weisstein, Mathematical probabilists, Statisticians
Notes[edit]
1. ^ Or the Middle Ages.
2. ^ Such characters, in fact, are preserved with little alteration in the Roman notation, an account of which may be found in John Leslie's Philosophy of Arithmetic.
3. ^ Number theory is branch of pure mathematics devoted primarily to the study of the integers. Number theorists study prime numbers as well as the properties of objects made out of integers (e.g., rational numbers) or defined as generalizations of the integers (e.g., algebraic integers).
4. ^ Greek: μή μου τοὺς κύκλους τάραττε
5. ^ That is, .
6. ^ Magnitude (mathematics), the relative size of an object ; Magnitude (vector), a term for the size or length of a vector; Scalar (mathematics), a quantity defined only by its magnitude; Euclidean vector, a quantity defined by both its magnitude and its direction; Order of magnitude, the class of scale having a fixed value ratio to the preceding class.
7. ^ Autolycus' On the Moving Sphere is another ancient mathematical manuscript of the time.
9. ^ The expression $2x^4 + 3x^3 - 4x^2 + 5x - 6$ would be written as: SS2 C3 x5 M S4 u6.[citation needed]
10. ^ such as the rule, square, compasses, water level (reed level), and plumb-bob.
11. ^ such as the wheel and axle
12. ^ The area of the square described on the hypotenuse of a right-angled triangle is equal to the sum of the areas of the squares described on the sides
13. ^ Al-Kindi also introduced cryptanalysis and frequency analysis.
16. ^ a book about what he perceived as flaws in Euclid's Elements, especially the parallel postulate
17. ^ translated into Latin by Robert of Chester
18. ^ translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona
19. ^ His own personal use started around 1351.
20. ^ Summa de Arithmetica: Geometria Proportioni et Proportionalita. Tr. Sum of Arithmetic: Geometry in proportions and proportionality.
21. ^ Much of the work originated from Piero Della Francesca whom he appropriated and purloined.
22. ^ This was a special case of the methods given many centuries later by Ruffini and Horner.
23. ^ That is, .
24. ^ Because, it is thought, it resembled a lowercase "r" (for "radix").
25. ^ Published in Description of the Marvelous Canon of Logarithms
26. ^ That is,
27. ^ see Law of Continuity.
28. ^ Using Cartesian coordinates on the plane, the distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is defined by the formula $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$, which can be viewed as a version of the Pythagorean theorem.
29. ^ Further steps in abstraction were taken by Lobachevsky, Bolyai, Riemann, and Gauss who generalised the concepts of geometry to develop non-Euclidean geometries.
30. ^ Now called Pascal's triangle.
31. ^ For example, the "problem of points".
32. ^ That is, .
33. ^ For example,
34. ^ Original title, "De ratiociniis in ludo aleae"
35. ^ For example, the derivative of the function $x$ would be written as $\dot{x}$. The second derivative of $x$ would be written as $\ddot{x}$, etc.
36. ^ For example, the derivative of the function $x$ with respect to the variable $t$ in Leibniz's notation would be written as $\frac{dx}{dt}$.
37. ^ That is, $\int$.
38. ^ See also: List of representations of e
39. ^ Thus $\phi(x)$ denotes the mathematical result of the performance of the operation $\phi$ upon the subject $x$. If upon this result the same operation were repeated, the new result would be expressed by $\phi[\phi(x)]$, or more concisely by $\phi^2(x)$, and so on. The quantity $x$ itself may be regarded as the result of the same operation $\phi$ upon some other function; the proper symbol for which is, by analogy, $\phi^{-1}(x)$. Thus $\phi$ and $\phi^{-1}$ are symbols of inverse operations, the former cancelling the effect of the latter on the subject $x$. $\phi(x)$ and $\phi^{-1}(x)$ in a similar manner are termed inverse functions.
40. ^ That is, $\Delta f = \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$
41. ^ That is, $i = \sqrt{-1}$
42. ^ Today, the symbol created by John Wallis, $\infty$, is used for infinity.
43. ^ As in,
44. ^ Capital-sigma notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, $\sum$, an enlarged form of the upright capital Greek letter sigma. This is defined as: $\sum_{i=1}^{n} a_i = a_1 + a_2 + \cdots + a_{n-1} + a_n$
45. ^ That is, $\Gamma(n) = \int_0^{\infty} x^{n-1} e^{-x}\, dx$, valid for $n > 0$.
46. ^ That is, $y \propto x$
47. ^ Proportionality is the ratio of one quantity to another, especially the ratio of a part compared to a whole. In a mathematical context, a proportion is the statement of equality between two ratios; See Proportionality (mathematics), the relationship of two variables whose ratio is constant. See also aspect ratio, geometric proportions.
48. ^ The curly d or Jacobi's delta.
49. ^ About the proof of Wilson's theorem. Disquisitiones Arithmeticae (1801) Article 76
50. ^ Galois theory and Galois geometry is named after him.
51. ^ That is, "subset of" and "superset of"; This would later be redeveloped by Ernst Schröder.
52. ^ A science of numbers that uses methods from mathematical analysis to solve problems about the integers.
53. ^ quoted in Robert Percival Graves' "Life of Sir William Rowan Hamilton" (3 volumes, 1882, 1885, 1889)
54. ^ That is, $\nabla$ (later called del)
55. ^ See Hamiltonian (quantum mechanics).
56. ^ That is, $q = a + bi + cj + dk$, where $i^2 = j^2 = k^2 = ijk = -1$
57. ^ Though his use describes something different from what is now meant by a tensor. Namely, the norm operation in a certain type of algebraic system (now known as a Clifford algebra).
58. ^ That is, $t = w + xi + yj + zk$, where $ij = ji = k$, $i^2 = -1$, and $j^2 = +1$
59. ^ This is Latin for "womb".
60. ^ That is, $q = p + \omega s$, where $p$ and $s$ are quaternions and $\omega$ commutes with the quaternion units, with $\omega^2 = +1$
61. ^ Clifford intersected algebra with Hamilton's quaternions by replacing Hermann Grassmann's rule $e_p e_p = 0$ by the rule $e_p e_p = 1$. For more details, see exterior algebra.
62. ^ See: Phasor, Group (mathematics), Signal velocity, Polyphase system, Harmonic oscillator, and RLC series circuit
63. ^ Or the concept of a fourth spatial dimension. See also: Spacetime, the unification of time and space as a four-dimensional continuum; and, Minkowski space, the mathematical setting for special relativity.
64. ^ He famously said, "Mathematics is an experimental science, and definitions do not come first, but later on." He was replying to criticism over his use of operators that were not clearly defined. On another occasion he stated somewhat more defensively, "I do not refuse my dinner simply because I do not understand the process of digestion."
65. ^ See also: Mathematic fields and Field extension
66. ^ Comment after the proof that 1+1=2, completed in Principia mathematica, by Alfred North Whitehead ... and Bertrand Russell. Volume II, 1st edition (1912)
67. ^ This raises questions of the pure existence theorems.
68. ^ Peano's Formulario Mathematico, though less popular than Russell's work, continued through five editions. The fifth appeared in 1908 and included 4200 formulas and theorems.
69. ^ Inventor of set theory
70. ^ Transfinite arithmetic is the generalization of elementary arithmetic to infinite quantities like infinite sets; See Transfinite numbers, Transfinite induction, and Transfinite interpolation. See also Ordinal arithmetic.
71. ^ Such as Max Dehn, J. W. Alexander, and others.
72. ^ Such as the Alexander polynomial.
73. ^ (German: Algebraische Theorie der Körper)
74. ^ In this paper Steinitz axiomatically studied the properties of fields and defined many important field theoretic concepts like prime field, perfect field and the transcendence degree of a field extension.
75. ^ With indices ranging over the set {1, 2, 3}, the sum $y = \sum_{i=1}^{3} c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3$
is reduced by the convention to: $y = c_i x^i$
Upper indices are not exponents but are indices of coordinates, coefficients or basis vectors.
See also: Ricci calculus
76. ^ Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. See also: Synge J.L.; Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 6–108.
77. ^ Here a logical constant is a symbol in symbolic logic that has the same meaning in all models, such as the symbol "=" for "equals".
A constant, in a mathematical context, is a number that arises naturally in mathematics, such as π or e; such mathematical constants do not change in value. The term can also mean the constant term of a polynomial (the term of degree 0) or the constant of integration, a free parameter arising in integration.
Relatedly, a physical constant is a physical quantity generally believed to be universal and unchanging. A programming constant is a value that, unlike a variable, cannot be reassociated with a different value.
78. ^ Though not an index term, keywords are terms that represent information. A keyword is a word with special meaning (this is a semantic definition), while syntactically these are terminal symbols in the phrase grammar. See reserved word for the related concept.
79. ^ Most of these symbols can be found in propositional calculus, a formal system described as $\mathcal{L} = \mathcal{L}(\mathrm{A}, \Omega, \mathrm{Z}, \mathrm{I})$. $\mathrm{A}$ is the set of elements, such as the a in the example with Boolean algebra above. $\Omega$ is the set that contains the subsets that contain operations, such as $\lor$ or $\land$. $\mathrm{Z}$ contains the inference rules, which are the rules dictating how inferences may be logically made, and $\mathrm{I}$ contains the axioms. See also: Basic and Derived Argument Forms.
80. ^ Usually denoted by x, y, z, or other lowercase letters
Here, a symbol that represents a quantity in a mathematical expression; a mathematical variable as used in many sciences.
A variable can be a symbolic name associated with a value, where the associated value may be changed, known in computer science as a variable reference. A variable can also be the operationalized way in which an attribute is represented for further data processing (e.g., a logical set of attributes). See also: Dependent and independent variables in statistics.
81. ^ Usually denoted by an uppercase letter followed by a list of variables, such as P(x) or Q(y,z)
Here a mathematical logic predicate, a fundamental concept in first-order logic. Grammatical predicates are grammatical components of a sentence.
Related is the syntactic predicate in parser technology which are guidelines for the parser process. In computer programming, a branch predication allows a choice to execute or not to execute a given instruction based on the content of a machine register.
82. ^ Representing ALL and EXISTS
83. ^ e.g. ∃ for "there exists" and ∀ for "for all"
84. ^ See also: Dialetheism, Contradiction, and Paradox
85. ^ Related, facetious abstract nonsense describes certain kinds of arguments and methods related to category theory which resembles comical literary non sequitur devices (not illogical non sequiturs).
86. ^ Gödel's incompleteness theorems shows that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a contested negative answer to Hilbert's second problem
87. ^ For example, take the statement "There exists a number x such that it is not y". Using the symbols of propositional calculus, this would become: $(\exists x)(x = \lnot y)$.
If the Gödel numbers replace the symbols, the statement becomes a sequence of ten numbers.
There are ten numbers, so the first ten prime numbers are found: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.
Then, the Gödel numbers are made the powers of the respective primes and multiplied, giving a single very large number that encodes the statement.
88. ^ The Klein–Gordon equation is: $\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\psi - \nabla^2\psi + \frac{m^2 c^2}{\hbar^2}\psi = 0$
89. ^ The Dirac equation in the form originally proposed by Dirac is: $\left(\beta m c^2 + c \sum_{n=1}^{3} \alpha_n p_n\right)\psi(x, t) = i\hbar \frac{\partial \psi}{\partial t}(x, t)$
where, ψ = ψ(x, t) is the wave function for the electron, x and t are the space and time coordinates, m is the rest mass of the electron, p is the momentum, understood to be the momentum operator in the Schrödinger theory, c is the speed of light, and ħ = h/2π is the reduced Planck constant.
90. ^ That is, $\partial_\mu\left(\partial^\mu A^\nu - \partial^\nu A^\mu\right) + \left(\frac{mc}{\hbar}\right)^2 A^\nu = 0$
91. ^ The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system.
92. ^ Named to honor Voigt's 1898 work.
93. ^ Named after Arthur Moritz Schoenflies
94. ^ See Galois connections.
95. ^ Oystein Ore would also write "Number Theory and Its History".
96. ^ That is, $\langle \phi | \psi \rangle$: a bra ⟨φ| followed by a ket |ψ⟩.
97. ^ That the scattering amplitude can be thought of as an analytic function of the angular momentum, and that the position of the poles determine power-law growth rates of the amplitude in the purely mathematical region of large values of the cosine of the scattering angle.
98. ^ That is, $\Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2$
99. ^ Also known as the d'Alembertian or wave operator.
100. ^ Also known as, "permutation symbol" (see: permutation), "antisymmetric symbol" (see: antisymmetric), or "alternating symbol"
101. ^ Note that the masses of particles are periodically reevaluated by the scientific community. The values may have been adjusted; adjustment here means operations carried out on instruments so that they provide given indications corresponding to given values of the measurand. In engineering, mathematics, and geodesy, it means the optimal estimation of the parameters of a mathematical model so as to best fit a data set.
102. ^ For the consensus, see Particle Data Group.
103. ^ A locally defined set of four linearly independent vector fields called a tetrad
104. ^ His usage of the Einstein summation was in order to offset the inconvenience in describing contractions and covariant differentiation in modern abstract tensor notation, while maintaining explicit covariance of the expressions involved.
105. ^ See also: String theory landscape and Swampland
106. ^ Devised by Joshua Lederberg and extended by Coxeter and Frucht
107. ^ And, in 1938, "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction" (Proceedings of the London Mathematical Society, 2 (1937) 43 (6): 544–6, doi:10.1112/plms/s2-43.6.544).
108. ^ Von Neumann's other contributions include the application of operator theory to quantum mechanics, work in the development of functional analysis, and work on various forms of operator theory.
References and citations[edit]
1. ^ Florian Cajori. A History of Mathematical Notations: Two Volumes in One. Cosimo, Inc., Dec 1, 2011
2. ^ A Dictionary of Science, Literature, & Art, Volume 2. Edited by William Thomas Brande, George William Cox. Pg 683
3. ^ "Notation - from Wolfram MathWorld". Mathworld.wolfram.com. Retrieved 2014-06-24.
4. ^ Diophantos of Alexandria: A Study in the History of Greek Algebra. By Sir Thomas Little Heath. Pg 77.
5. ^ Mathematics: Its Power and Utility. By Karl J. Smith. Pg 86.
6. ^ The Commercial Revolution and the Beginnings of Western Mathematics in Renaissance Florence, 1300–1500. Warren Van Egmond. 1976. Page 233.
7. ^ Solomon Gandz. "The Sources of al-Khowarizmi's Algebra"
8. ^ Encyclopædia Americana. By Thomas Gamaliel Bradford. Pg 314
9. ^ Mathematical Excursion, Enhanced Edition: Enhanced Webassign Edition By Richard N. Aufmann, Joanne Lockwood, Richard D. Nation, Daniel K. Cleg. Pg 186
10. ^ Mathematics in Egypt and Mesopotamia[dead link]
11. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "Mesopotamia" p. 25.
14. ^ Heath. A Manual of Greek Mathematics. p. 5.
16. ^ a b The new encyclopædia; or, Universal dictionary of arts and sciences. By Encyclopaedia Perthensi. Pg 49
17. ^ Calinger, Ronald (1999). A Contextual History of Mathematics. Prentice-Hall. p. 150. ISBN 0-02-318285-7. Shortly after Euclid, compiler of the definitive textbook, came Archimedes of Syracuse (ca. 287 212 BC), the most original and profound mathematician of antiquity.
18. ^ "Archimedes of Syracuse". The MacTutor History of Mathematics archive. January 1999. Retrieved 2008-06-09.
19. ^ O'Connor, J.J.; Robertson, E.F. (February 1996). "A history of calculus". University of St Andrews. Archived from the original on 15 July 2007. Retrieved 2007-08-07.
20. ^ "Proclus' Summary". Gap.dcs.st-and.ac.uk. Retrieved 2014-06-24.
23. ^ Mathematics and Measurement By Oswald Ashton Wentworth Dilk. Pg 14
24. ^ a b c d e A dictionary of science, literature and art, ed. by W.T. Brande. Pg 683
25. ^ Boyer, Carl B. A History of Mathematics, 2nd edition, John Wiley & Sons, Inc., 1991.
26. ^ Diophantine Equations. Submitted by: Aaron Zerhusen, Chris Rakes, & Shasta Meece. MA 330-002. Dr. Carl Eberhart. February 16, 1999.
27. ^ A History of Greek Mathematics: From Aristarchus to Diophantus. By Sir Thomas Little Heath. Pg 456
28. ^ A History of Greek Mathematics: From Aristarchus to Diophantus. By Sir Thomas Little Heath. Pg 458
29. ^ The American Mathematical Monthly, Volume 16. Pg 131
30. ^ "Overview of Chinese mathematics". Groups.dcs.st-and.ac.uk. Retrieved 2014-06-24.
33. ^ "Frank J. Swetz and T. I. Kao: Was Pythagoras Chinese?". Psupress.psu.edu. Retrieved 2014-06-24.
35. ^ Sal Restivo
37. ^ Marcel Gauchet, 151.
42. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "The Arabic Hegemony" p. 230. (cf., "The six cases of equations given above exhaust all possibilities for linear and quadratic equations having positive root. So systematic and exhaustive was al-Khwārizmī's exposition that his readers must have had little difficulty in mastering the solutions.")
44. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "The Arabic Hegemony" p. 229. (cf., "It is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the translation above. The word al-jabr presumably meant something like "restoration" or "completion" and seems to refer to the transposition of subtracted terms to the other side of an equation; the word muqabalah is said to refer to "reduction" or "balancing" - that is, the cancellation of like terms on opposite sides of the equation.")
52. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "Revival and Decline of Greek Mathematics" p. 178 (cf., "The chief difference between Diophantine syncopation and the modern algebraic notation is the lack of special symbols for operations and relations, as well as of the exponential notation.")
54. ^ Mathematical Magazine, Volume 1. Artemas Martin, 1887. Pg 124
55. ^ Der Algorismus proportionum des Nicolaus Oresme: Zum ersten Male nach der Lesart der Handschrift R.40.2. der Königlichen Gymnasial-bibliothek zu Thorn. Nicole Oresme. S. Calvary & Company, 1868.
57. ^ Later early modern version: A New System of Mercantile Arithmetic: Adapted to the Commerce of the United States, in Its Domestic and Foreign Relations with Forms of Accounts and Other Writings Usually Occurring in Trade. By Michael Walsh. Edmund M. Blunt (proprietor.), 1801.
58. ^ Miller, Jeff (4 June 2006). "Earliest Uses of Symbols of Operation". Gulf High School. Retrieved 24 September 2006.
59. ^ Arithmetical Books from the Invention of Printing to the Present Time. By Augustus De Morgan. p 2.
61. ^ Arithmetica integra. By Michael Stifel, Philipp Melanchton. Norimbergæ: Apud Iohan Petreium, 1544.
62. ^ The History of Mathematics By Anne Roone. Pg 40
63. ^ Memoirs of John Napier of Merchiston. By Mark Napier
64. ^ An Account of the Life, Writings, and Inventions of John Napier, of Merchiston. By David Stewart Erskine Earl of Buchan, Walter Minto
65. ^ Florian Cajori (1919). A History of Mathematics. Macmillan.
66. ^ Jan Gullberg, Mathematics from the birth of numbers, W. W. Norton & Company; ISBN 978-0-393-04002-9 . pg 963–965,
67. ^ Synopsis Palmariorum Matheseos. By William Jones. 1706. (Alt: Synopsis Palmariorum Matheseos: or, a New Introduction to the Mathematics. archive.org.)
68. ^ When Less is More: Visualizing Basic Inequalities.By Claudi Alsina, Roger B. Nelse. Pg 18.
69. ^ Euler, Leonhard, Solutio problematis ad geometriam situs pertinentis
70. ^ The elements of geometry. By William Emerson
71. ^ The Doctrine of Proportion, Arithmetical and Geometrical. Together with a General Method of Arening by Proportional Quantities. By William Emerson.
72. ^ The Mathematical Correspondent. By George Baron. 83
73. ^ Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Retrieved 2012-01-24.
74. ^ "Kramp biography". History.mcs.st-and.ac.uk. Retrieved 2014-06-24.
75. ^ Mécanique analytique: Volume 1, Volume 2. By Joseph Louis Lagrange. Ms. Ve Courcier, 1811.
76. ^ The collected mathematical papers of Arthur Cayley. Volume 11. Page 243.
77. ^ Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. By Ari Ben-Menahem. Pg 2070.
78. ^ Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Originally at: darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html
79. ^ The Words of Mathematics. By Steven Schwartzman. 6.
80. ^ Electro-Magnetism: Theory and Applications. By A. Pramanik. 38
81. ^ History of Nabla and Other Math Symbols. homepages.math.uic.edu/~hanson.
82. ^ Hamilton, William Rowan (1854–1855). Wilkins, David R., ed. "On some Extensions of Quaternions" (PDF). Philosophical Magazine (7–9): 492–499, 125–137, 261–269, 46–51, 280–290. ISSN 0302-7597.
83. ^ "James Clerk Maxwell". IEEE Global History Network. Retrieved 25 March 2013.
84. ^ Maxwell, James Clerk (1865). "A dynamical theory of the electromagnetic field" (PDF). Philosophical Transactions of the Royal Society of London. 155: 459–512. doi:10.1098/rstl.1865.0008. (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
85. ^ Proceedings of the London Mathematical Society, Volume 3. London Mathematical Society, 1871. Pg. 224
86. ^ Books I, II, III (1878) on the Internet Archive; Book IV (1887) on the Internet Archive
87. ^ The Heaviside Operational Calculus www.quadritek.com/bstj/vol01-1922/articles/bstj1-2-43.pdf
88. ^ Cox, David A. (2012). Galois Theory. Pure and Applied Mathematics. 106 (2nd ed.). John Wiley & Sons. p. 348. ISBN 1118218426.
89. ^ "TÜBİTAK ULAKBİM DergiPark". Journals.istanbul.edu.tr. Retrieved 2014-06-24.
90. ^ "Linear Algebra : Hussein Tevfik : Free Download & Streaming : Internet Archive". Archive.org. Retrieved 2014-06-24.
91. ^ Ricci Curbastro, G. (1892). "Résumé de quelques travaux sur les systèmes variables de fonctions associés à une forme différentielle quadratique". Bulletin des Sciences Mathématiques. 2 (16): 167–189.
92. ^ Voigt, Woldemar (1898). Die fundamentalen physikalischen Eigenschaften der Krystalle in elementarer Darstellung. Leipzig: Von Veit.
93. ^ Poincaré, Henri, "Analysis situs", Journal de l'École Polytechnique ser 2, 1 (1895) pp. 1–123
94. ^ Whitehead, John B., Jr. (1901). "Review: Alternating Current Phenomena, by C. P. Steinmetz" (PDF). Bull. Amer. Math. Soc. 7 (9): 399–408. doi:10.1090/s0002-9904-1901-00825-7.
95. ^ There are many editions. Here are two:
96. ^ Ricci, Gregorio; Levi-Civita, Tullio (March 1900), "Méthodes de calcul différentiel absolu et leurs applications" (PDF), Mathematische Annalen, Springer, 54 (1–2): 125–201, doi:10.1007/BF01454201
97. ^ Zermelo, Ernst (1904). "Beweis, dass jede Menge wohlgeordnet werden kann" (reprint). Mathematische Annalen. 59 (4): 514–16. doi:10.1007/BF01445300.
98. ^ Wikisource link to On the Dynamics of the Electron (July). Wikisource.
99. ^ Fréchet, Maurice, "Sur quelques points du calcul fonctionnel", PhD dissertation, 1906
100. ^ Cullis, Cuthbert Edmund. Matrices and Determinoids, Volume 2.
101. ^ Can be assigned a given matrix: About a class of matrices. (Gr. Ueber eine Klasse von Matrizen: die sich einer gegebenen Matrix zuordnen lassen.) by Isay Schur
102. ^ An Introduction To The Modern Theory Of Equations. By Florian Cajori.
103. ^ Proceedings of the Prussian Academy of Sciences (1918). Pg 966.
104. ^ Sitzungsberichte der Preussischen Akademie der Wissenschaften (1918) (Tr. Proceedings of the Prussian Academy of Sciences (1918)). archive.org; See also: Kaluza–Klein theory .
105. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0.
106. ^ R. Penrose (2007). The Road to Reality. Vintage books. ISBN 0-679-77631-1.
107. ^ Schouten, Jan A. (1924). R. Courant, ed. Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction in the latest methods and problems in multi-dimmensional differential geometry). Grundlehren der mathematischen Wissenschaften (in German). 10. Berlin: Springer Verlag.
108. ^ Robert B. Ash. A Primer of Abstract Mathematics. Cambridge University Press, Jan 1, 1998
109. ^ The New American Encyclopedic Dictionary. Edited by Edward Thomas Roe, Le Roy Hooker, Thomas W. Handford. Pg 34
110. ^ The Mathematical Principles of Natural Philosophy, Volume 1. By Sir Isaac Newton, John Machin. Pg 12.
111. ^ In The Scientific Outlook (1931)
112. ^ Mathematics simplified and made attractive: or, The laws of motion explained. By Thomas Fisher. Pg 15. (cf. But an abstraction not founded upon, and not consonant with Nature and (Logical) Truth, would be a falsity, an insanity.)
113. ^ Proposition VI, On Formally Undecidable Propositions in Principia Mathematica and Related Systems I (1931)
114. ^ Casti, John L. 5 Golden Rules. New York: MJF Books, 1996.
115. ^ Gr. Methoden Der Mathematischen Physik
116. ^ P.A.M. Dirac (1927). "The Quantum Theory of the Emission and Absorption of Radiation". Proceedings of the Royal Society of London A. 114: 243–265. Bibcode:1927RSPSA.114..243D. doi:10.1098/rspa.1927.0039.
117. ^ E. Fermi (1932). "Quantum Theory of Radiation". Reviews of Modern Physics. 4: 87–132. Bibcode:1932RvMP....4...87F. doi:10.1103/RevModPhys.4.87.
118. ^ F. Bloch; A. Nordsieck (1937). "Note on the Radiation Field of the Electron". Physical Review. 52: 54–59. Bibcode:1937PhRv...52...54B. doi:10.1103/PhysRev.52.54.
119. ^ V. F. Weisskopf (1939). "On the Self-Energy and the Electromagnetic Field of the Electron". Physical Review. 56: 72–85. Bibcode:1939PhRv...56...72W. doi:10.1103/PhysRev.56.72.
120. ^ R. Oppenheimer (1930). "Note on the Theory of the Interaction of Field and Matter". Physical Review. 35: 461–477. Bibcode:1930PhRv...35..461O. doi:10.1103/PhysRev.35.461.
121. ^ Van der Waerden B.L. (1929). "Spinoranalyse". Nachr. Ges. Wiss. Göttingen Math.-Phys. 1929: 100–109.
122. ^ Veblen O. (1933). "Geometry of two-component Spinors". Proc. Natl. Acad. Sci. USA. 19: 462–474. doi:10.1073/pnas.19.4.462.
123. ^ PAM Dirac (1939). "A new notation for quantum mechanics". Mathematical Proceedings of the Cambridge Philosophical Society. 35 (3). pp. 416–418. doi:10.1017/S0305004100021162.
124. ^ H. Grassmann (1862). Extension Theory. History of Mathematics Sources. American Mathematical Society, London Mathematical Society, 2000 translation by Lloyd C. Kannenberg.
125. ^ Steven Weinberg (1964), The quantum theory of fields, Volume 2, Cambridge University Press, 1995, p. 358, ISBN 0-521-55001-7
126. ^ "The Nobel Prize in Physics 1965". Nobel Foundation. Retrieved 2008-10-09.
127. ^ S.L. Glashow (1961). "Partial-symmetries of weak interactions". Nuclear Physics. 22: 579–588. Bibcode:1961NucPh..22..579G. doi:10.1016/0029-5582(61)90469-2.
128. ^ S. Weinberg (1967). "A Model of Leptons". Physical Review Letters. 19: 1264–1266. Bibcode:1967PhRvL..19.1264W. doi:10.1103/PhysRevLett.19.1264.
130. ^ F. Englert; R. Brout (1964). "Broken Symmetry and the Mass of Gauge Vector Mesons". Physical Review Letters. 13: 321–323. Bibcode:1964PhRvL..13..321E. doi:10.1103/PhysRevLett.13.321.
131. ^ P.W. Higgs (1964). "Broken Symmetries and the Masses of Gauge Bosons". Physical Review Letters. 13: 508–509. Bibcode:1964PhRvL..13..508H. doi:10.1103/PhysRevLett.13.508.
132. ^ G.S. Guralnik; C.R. Hagen; T.W.B. Kibble (1964). "Global Conservation Laws and Massless Particles". Physical Review Letters. 13: 585–587. Bibcode:1964PhRvL..13..585G. doi:10.1103/PhysRevLett.13.585.
133. ^ http://www.physics.drexel.edu/~vkasli/phys676/Notes%20for%20a%20brief%20history%20of%20quantum%20gravity%20-%20Carlo%20Rovelli.pdf
134. ^ Bourbaki, Nicolas (1972). "Univers". In Michael Artin; Alexandre Grothendieck; Jean-Louis Verdier, eds. Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 1 (Lecture notes in mathematics 269) (in French). Berlin; New York: Springer-Verlag. pp. 185–217.
135. ^ F.J. Hasert; et al. (1973). "Search for elastic muon-neutrino electron scattering". Physics Letters B. 46: 121. Bibcode:1973PhLB...46..121H. doi:10.1016/0370-2693(73)90494-2.
136. ^ F.J. Hasert; et al. (1973). "Observation of neutrino-like interactions without muon or electron in the gargamelle neutrino experiment". Physics Letters B. 46: 138. Bibcode:1973PhLB...46..138H. doi:10.1016/0370-2693(73)90499-1.
137. ^ F.J. Hasert; et al. (1974). "Observation of neutrino-like interactions without muon or electron in the Gargamelle neutrino experiment". Nuclear Physics B. 73: 1. Bibcode:1974NuPhB..73....1H. doi:10.1016/0550-3213(74)90038-8.
138. ^ D. Haidt (4 October 2004). "The discovery of the weak neutral currents". CERN Courier. Retrieved 2008-05-08.
139. ^ http://web.physics.ucsb.edu/~gary/
140. ^ Nuclear Physics B 258: 46–74, Bibcode:1985NuPhB.258...46C, doi:10.1016/0550-3213(85)90602-9
141. ^ De Felice, F.; Clarke, C.J.S. (1990), Relativity on Curved Manifolds, p. 133
142. ^ "Quantum invariants of knots and 3-manifolds" by V. G. Turaev (1994), page 71
143. ^ Pisanski, Tomaž; Servatius, Brigitte (2013), "2.3.2 Cubic graphs and LCF notation", Configurations from a Graphical Viewpoint, Springer, p. 32, ISBN 9780817683641
144. ^ Frucht, R. (1976), "A canonical representation of trivalent Hamiltonian graphs", Journal of Graph Theory, 1 (1): 45–60, doi:10.1002/jgt.3190010111
145. ^ Fraleigh 2002:89; Hungerford 1997:230
146. ^ Dehn, Edgar. Algebraic Equations, Dover. 1930:19
147. ^ "The IBM 601 Multiplying Punch". Columbia.edu. Retrieved 2014-06-24.
148. ^ "Interconnected Punched Card Equipment". Columbia.edu. 1935-10-24. Retrieved 2014-06-24.
149. ^ Proceedings of the London Mathematical Society 42 (2)
External links[edit] |
5b91afc5583a9136 | Hellenica World
Vladimir Marchenko (born July 7, 1922) is a Ukrainian mathematician who specializes in mathematical physics, in particular in the analysis of the Sturm-Liouville and Schrödinger equations. Together with Leonid Pastur, Vladimir Marchenko discovered the Marchenko–Pastur law in random matrix theory.[1]
He has authored over 100 scientific articles, including 7 monographs. He defended his PhD thesis in 1951 and became a professor at Kharkov State University in 1952.[2] In 1961 he joined the newly created Institute for Low Temperature Physics and Engineering, where he took an active part in the organization of its Mathematical Division. In 1962 he was awarded the Lenin Prize.
1. ^ "V.A. Marchenko". http://kharkov.vbelous.net/english/vam/. Retrieved 2008-08-31.
2. ^ "Vladimir Aleksandrovich Marchenko". http://www-history.mcs.st-andrews.ac.uk/Biographies/Marchenko.html. Retrieved 2008-08-31.
Thursday 28 February 2013
Basic Evidence of Low Emissivity of CO2
A basic estimate of the radiative forcing of CO2 as atmospheric trace gas from its main resonance at wave number 667 can be obtained as follows from Planck's Law in the form
• R(n,T) = gamma T n^2 for n < 4T,
where n is wave number, T ~ 300 K is temperature, R(n,T) is radiation per unit of wave number and gamma is a constant. The total outgoing long wave radiation OLR from the Earth surface at temperature T, assuming the atmosphere to be transparent, is then equal to the integral of R(n,T) over 0 < n < 4T:
• OLR = gamma * 64/3 T^4 .
Adding the trace gas CO2 will block radiation in an interval around 667 ~ 2T of width 1 (motivated on Computational Blackbody Radiation as a basic phenomenon of near-resonance), with a blocking effect B given by
• B = gamma * 4 T^3 .
The emissivity of an atmosphere with CO2 as trace gas can then be estimated by the relative blocking effect of the main resonance at 667: B/OLR = 12/(64T) ≈ 0.0006 < 0.001 for T ~ 300 K.
This is similar to the estimate 0.002 presented in a previous post, with the doubling resulting from the weaker spectral lines away from the main resonance.
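For a quick numerical sanity check of this estimate, here is a minimal sketch under the model assumptions above; gamma cancels in the ratio and is set to 1, and the blocked unit interval is placed at 667 rather than exactly at 2T, which changes the ratio only marginally:

```python
# Sanity check of the blocking estimate B/OLR above; gamma cancels in the
# ratio, so it is set to 1. The blocked interval of width 1 sits at the
# main CO2 resonance, wave number 667 ~ 2T.
T = 300.0                     # surface temperature in K
OLR = (64.0 / 3.0) * T**4     # integral of T*n^2 over 0 < n < 4T
B = T * 667.0**2              # R(667, T) times an interval of width 1
print(B / OLR)                # ~0.00077 < 0.001, as claimed
```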
Notice that the blocking effect of the main resonance of CO2 is in principle independent of concentration, which can be seen as an extreme form of logarithmic dependence with full effect at saturation already for small concentration.
The radiative forcing corresponding to an emissivity of 0.002 will be smaller than 0.5 W/m2 (as 0.2% of a total of about 200 W/m2), which is a factor 10 smaller than the 3.7 W/m2 serving as the basis of CO2 alarmism predicted by Modtran.
With the emissivity from the main resonance at 667 very small, the 3.7 W/m2 must result from the Modtran models of line broadening for spectral lines on the "shoulders" of the spectrum away from 667. CO2 alarmism thus critically depends on theoretical models of a phenomenon, which is so subtle that experimental evidence appears to be impossible.
Basis of CO2 Alarmism = Modtran = 0
As CO2 global warming alarmism is losing momentum in the absence of any warming for 15 years, with politicians turning to other noble causes, it may now be possible to question the very basis of this movement, which has threatened to throw humanity back to the Stone Age by tough regulations to "decarbonize" society.
The scientific evidence of the warming effect of the trace gas CO2 consists of theoretical predictions of the "radiative forcing" effect using models for radiative transfer such as Modtran based on spectral data from the data base Hitran. A version of Modtran can be run on the web, which makes it possible to test its performance, as we did in a previous post.
Modtran gives a "radiative forcing" of 3.7 W/m2 upon doubling of atmospheric CO2 from 300 ppm to 600 ppm. This serves as the starting point of CO2 global warming alarmism by giving the trace gas CO2 a substantial warming effect as a powerful "greenhouse gas" GHG. Experimental evidence of this effect is lacking. Without the 3.7 W/m2 produced by Modtran, there would be no IPCC and no CO2 alarmism.
Can we then trust the Modtran prediction of radiative forcing of 3.7 W/m2?
Well, Modtran as a radiative transfer model is a very simplistic model of the complex atmosphere. Modtran is further supposed to model the effect of a very small cause, since CO2 is an atmospheric trace gas, and capturing a small cause requires high accuracy. The scientific warning signs are thus blinking for Modtran.
Let us here check how Modtran reacts to very low concentrations of CO2 from 0.001 ppm to 1 ppm as an extension of previous posts. We get for a standard atmosphere with CO2 as the only greenhouse gas present, the following total outgoing long wave radiation OLR in W/m2 for different ppm of CO2:
• OLR = 397.524 for 0 (ppm CO2)
• OLR = 397.524 for 0.001
• OLR = 397.21 for 0.01
• OLR = 396.582 for 0.05
• OLR = 395.64 for 0.1
• OLR = 392.814 for 0.5
• OLR = 390.616 for 1 (ppm CO2).
We see a radiative forcing of 0.3 W/m2 from 0 to 0.01 ppm, 2 W/m2 from 0 to 0.1 ppm and 7 W/m2 from 0 to 1 ppm. This is a substantial effect from a cause as small as one part in 100 million. It is hard to believe that this effect can be viewed as a scientifically evidenced real effect.
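The quoted forcings are just differences of the listed OLR values; a minimal script reproducing them from the table above:

```python
# Radiative forcing = OLR(0 ppm) - OLR(c ppm), from the Modtran values above.
olr = {0: 397.524, 0.001: 397.524, 0.01: 397.21, 0.05: 396.582,
       0.1: 395.64, 0.5: 392.814, 1: 390.616}
for ppm, value in olr.items():
    print(f"{ppm} ppm CO2: forcing {olr[0] - value:.2f} W/m2")
# 0.01 ppm -> 0.31, 0.1 ppm -> 1.88, 1 ppm -> 6.91 W/m2
```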
This test gives yet another reason to question Modtran as the basic scientific evidence of a global warming effect of CO2, for both climate alarmists and skeptics who have accepted Modtran as truth.
If climate skeptics would dare to take the step to question Modtran, that could very well be the final nail in the coffin of IPCC.
Tuesday 26 February 2013
IR-Photons as Optical Phonons as Waves
In climate science it is common to view radiative heat transfer as a two-way flow of IR-photon particles carrying lumps of energy back and forth between e.g. the Earth surface and the atmosphere.
This view lacks a physical rationale because it includes heat transfer by IR-photons not only from warm to cold, but also from cold to warm, in violation of the 2nd Law of Thermodynamics. The usual way to handle this contradiction is to say that the net transfer is from warm to cold, and so there is no violation of the 2nd Law. But this requires the two-way transfer to be connected, which is in conflict with the idea of an independent two-way transfer.
On Computational Blackbody Radiation I present a model of radiative heat transfer based on a wave equation for a collection of oscillators with small damping subject to periodic forcing, solved by finite precision computation. Fourier analysis shows that the oscillators in resonance take on a periodic motion which is out-of-phase with the forcing, which connects to optical phonons as wave motion in an elastic lattice with large amplitude (as compared to acoustical phonons with smaller amplitude).
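As a minimal illustration of this out-of-phase behavior, consider a single forced, lightly damped oscillator standing in for the wave model; the parameters d, w0 and the forcing frequencies below are illustrative, not taken from the model itself:

```python
import numpy as np

# Steady-state phase lag of u'' + d*u' + w0^2*u = cos(w*t): at resonance the
# displacement lags the forcing by 90 degrees, i.e. it is out-of-phase,
# while far below/above resonance it is in phase/anti-phase.
d, w0 = 0.01, 1.0
for w in (0.5, 1.0, 1.5):
    lag = np.arctan2(d * w, w0**2 - w**2)
    print(f"w = {w}: displacement lags forcing by {np.degrees(lag):.1f} deg")
```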
Optical phonons typically occur in a lattice composed of two atoms of different mass, one big and one small, which connects to the radiation wave model with small damping.
We thus find reason to view IR-photons as a wave phenomenon similar to optical phonons, rather than as "particles".
The radiation wave model includes two-way propagation of waves but only one-way transfer of heat energy, as an effect of cut-off of high frequencies due to finite precision computation.
Monday 25 February 2013
Difference between Climate Skeptics and Deniers
Climate skeptics like Lindzen, Singer and Spencer (and Monckton and WUWT and Lubos...) are skeptical of CO2 alarmism, but they are all eager to state that they understand that CO2 is a greenhouse gas GHG with in principle a warming effect (even if this effect is so small that it can never be observed). They are skeptical of all the dogmas of CO2 alarmism: melting ice caps, rising sea levels, bad weather and threatened polar bears, but there is one thing for which the skepticism is missing: CO2 as a GHG with some warming effect. Maybe 3.7 W/m2 upon doubling.
People questioning the warming effect of CO2 are called "climate deniers" and they are not highly valued by either climate alarmists or climate skeptics.
What then is the difference between climate skeptics and climate deniers? Why are climate skeptics not skeptical of the capacity of CO2 to cause warming, when they are skeptical of just about everything else coming from the climate alarmist camp?
I think the answer is credibility, scientific credibility. If you, like a denier, question everything from IPCC, including CO2 as a GHG, then you could easily be viewed as a crank who understands nothing at all and therefore can question everything. Like a fool asking more questions than many wise men can answer.
And you don't want to be (viewed as) a crank. To send the signal that you are not a crank, it may be a good idea to pretend to understand the deep physics of the spectrum of CO2, based on deep quantum mechanics, by saying that you very well understand that certainly CO2 is a GHG, because advanced computer codes like Modtran produce spectra which can be interpreted this way.
The difference between climate skeptics and deniers thus seems to be that skeptics gain credibility by pretending to understand something which may be incorrect, while deniers lose credibility by admitting to not understand what possibly cannot be understood.
PS Here is a recent statement by Spencer, representative of a skeptic's view on the radiative forcing of 3.7 W/m2 from doubled CO2:
• I’m just saying I think the no-feedback temperature response is pretty sound…although I admit it must be computed based upon theory, and can’t be observationally verified.
CO2 Radiative Forcing by Modtran/Hitran??
The warming effect of atmospheric CO2 as a "greenhouse gas" is evidenced by the atmospheric radiative transfer computer code Modtran based on the high-resolution transmission molecular absorption data base Hitran, while direct observational evidence is lacking. CO2 alarmism is thus based on Modtran/Hitran.
In a previous post I noticed that Modtran assigns 1 ppm of CO2 a radiative forcing or warming effect of 6 W/m2. A very big effect from a very small cause! It is indeed very difficult to believe that one CO2 molecule per one million of O2/N2 molecules can change anything observable. This is like changing one grain of sand in the above picture!
Clive Best shows in The CO2 GHE Demystified, using a radiative transfer model similar to Modtran, a radiative forcing effect upon doubling of CO2 from 300 ppm to 600 ppm of 3.52 W/m2, in close correspondence with the 3.7 W/m2 put forward by IPCC. The radiative forcing is shown to result from an increase of the effective altitude of radiation around wave numbers 600 and 750, which are far out on the "shoulders" of the CO2 spectrum centered at the main resonance 667. This is again a big effect of a small cause, since the spectrum on the shoulders is very sparse. The model further shows that the main emission from the band around 667 without shoulders occurs from altitudes of 30-40 km in the very thin stratosphere.
In both cases CO2 is attributed strong absorptivity away from the main resonance at 667, with very sparse spectral lines depending on concentration. Both results are most remarkable as a big effect of a small cause, and as such call for a thorough investigation of the validity of the underlying radiative transfer model as concerns the effect of an atmospheric trace gas.
Sunday 24 February 2013
2nd Coming of the 2nd Law
The 2nd Law of Thermodynamics has remained as a main mystery of physics ever since it was first formulated by Clausius in 1865 as non-decrease of entropy, despite major efforts by mathematical physicists to give it a rational understandable meaning.
The view today is, based on the work by Ludwig Boltzmann, that the 2nd Law is a statistical law expressing a lack of precise human knowledge of microscopic physics, rather than a physical law independent of human observation and measurement. This view prepared the statistical interpretation of quantum mechanics as the basis of modern physics.
Modern physics is thus focused on human observation of realities, while classical physics concerns realities independent of human observation. To involve the observer in the observed makes physics subjective, which means a departure from objectivity as the essence of physics. A 2nd Law based on statistics thus comes along with many difficulties, which ended Boltzmann's life, and it is natural to seek a formulation in terms of classical physics without statistics.
Such a formulation is given in Computational Thermodynamics, based on the Euler equations for an ideal compressible gas solved by finite precision computation. In this formulation the 2nd Law is a consequence of the following equations expressing the exchange between kinetic energy K and internal (heat) energy E:
• dK/dt = W - D
• dE/dt = - W + D
• D >= 0,
where W is work and D is nonnegative turbulent dissipation (rates). The crucial element is the turbulent dissipation rate D which is non-negative, and thus signifies one-way transfer of energy from kinetic energy K into heat energy E.
The work W, positive in expansion and negative in compression, allows a two-way transfer between K and E, while turbulent dissipation D >= 0 can only transfer kinetic energy K into heat energy E, and not the other way.
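A minimal bookkeeping sketch of these balance equations, with W and D prescribed purely for illustration (they are not derived from any flow model):

```python
# dK/dt = W - D, dE/dt = -W + D with D >= 0: the sum K + E is conserved
# exactly, while D only ever moves kinetic energy K into heat energy E.
dt, K, E = 0.01, 1.5, 0.5
for step in range(2000):
    W = 0.3 * (E - K)   # two-way exchange through work (illustrative)
    D = 0.1 * K         # one-way turbulent dissipation, always >= 0
    K += dt * (W - D)
    E += dt * (D - W)
print(round(K, 3), round(E, 3), round(K + E, 3))  # K + E stays 2.0
```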
We compare dE/dt = - W + D or rewritten as dE/dt + W = D as an alternative formulation of the 2nd Law, with the classical formulation found in books on thermodynamics:
• dE + pdV = TdS = dQ
• dS >= 0,
where p is pressure, V is volume (with pdV corresponding to W), T is temperature, S is entropy and dQ added heat energy.
We see that D >= 0 expresses the same relation as dS >= 0 since T > 0, and thus the alternative formulation expresses the same effective physics as the classical formulation.
The advantage of the alternative formulation is that turbulent dissipation rate D with D >= 0 has a direct physical meaning, while the physical meaning of S and dS >= 0 has remained a mystery.
The alternative formulation thus gives a formulation in terms of physical quantities without any need to introduce a mysterious concept of entropy, which cannot decrease for some mysterious reason. A main mystery of science can thus be put into the wardrobe of mysteries without solution and meaning, together with phlogistons.
Notice the connection to Computational Blackbody Radiation with an alternative proof of Planck's radiation law with again statistics replaced by finite precision computation.
For a recent expression of the confusion and mystery of the 2nd Law, see Ludwig Boltzmann: a birthday by Lubos.
PS1 The reason to define S by the relation dE + pdV = TdS is that for an ideal gas with pV = RT this makes dS = dE/T + pdV/T an exact differential, thus defining S in terms of T and p. The trouble with S thus defined, is that it lacks direct physical meaning.
PS2 Lubos refers to Bohr's view of physics:
This idea has ruined modern physics by encouraging a postmodern form of medieval mysticism away from rational objectivity as the essence of science, where the physical world is reduced to a phantasm in the mind of the observer busy counting statistics of non-physical micro states.
PS3 Recall that statistics was introduced by Boltzmann to give a mathematical proof of the 2nd law, which appeared to be impossible using reversible Newtonian micromechanics, followed by Planck to prove his law of radiation, followed by Born to give the multidimensional Schrödinger equation an interpretation. But this was overkill. It is possible to prove a 2nd law and law of radiation using instead of full statistics a concept of finite precision computation as shown in Computational Thermodynamics and Computational Blackbody Radiation, which maintains the rationalism and objectivity of classical mechanics, while avoiding the devastating trap of reversible micromechanics.
Saturday 23 February 2013
Dysfunctional Peer Review of New Science?
The scholarly peer review system may be functional for normal science, or puzzle-solving routine science in the sense of Kuhn, but it is not well suited to handle non-normal new science challenging an existing paradigm. This is because any new idea poses a threat to existing normal science and as such often meets overly negative reviews by referees without sufficient knowledge of the novelty. Correct new science may thus get rejected without good reasons, but it is also possible that incorrect new science gets accepted by uncritical referees.
Further, incorrect normal science may be perpetuated by the peer review system, because incorrect normal science can only be questioned by new science.
In short, the peer review system is not suitable to handle new science, because either (i) good articles are rejected on bad grounds, or (ii) bad articles are accepted without good grounds.
An example of new science is given by the article New Theory of Flight presented on The Secret of Flight. The article was rejected by AIAA Journal and is now under review by Journal of Mathematical Fluid Mechanics JMFM.
JMFM has a difficult case to handle: Referees from the normal science of fluid mechanics are not eager to touch the article, and if they do, the review will be negative because the existing paradigm is challenged. On the other hand, referees from outside the fluid mechanics community under AIAA may not be able to give a credible review.
The normal science of flight is an example of an incorrect theory, formulated 100 years ago, which has survived as normal science in the absence of a correct theory, carried by the peer review system and AIAA.
One option in such a case would seem to be to publish the article without peer review and then open it to discussion with participation from normal science.
PS The peer review system has been eroded in particular by IPCC, which has used peers to uncritically promote publication of articles supporting IPCC's climate alarmism and to selectively stop publication of articles not supporting this message.
Friday 22 February 2013
IR Photons as Phlogistons
A photon is, as an elementary particle, the carrier of the electromagnetic force.
A phonon is, as a collective elastic excitation in a lattice of atoms or molecules, the carrier of sound, referred to as a "quasiparticle".
A phonon is a collective sound wave while a photon is a "light particle". In a previous post I considered an acoustic model of radiative heat transfer between the Earth surface and the atmosphere and outer space, in the form of a string instrument with energy transfer from string to soundboard to surrounding air.
It is common to describe infrared radiative heat transfer between two bodies as a two-way flow of IR photon particles carrying "energy quanta" back and forth between the bodies. I have argued that this view is non-physical in the sense that energy is supposed to be carried not only from warm to cold, but also from cold to warm which is in violation of the 2nd law of thermodynamics.
To understand that the particle view is non-physical, it is illuminating to consider a model of the string instrument where the concept of phonon wave is replaced by "phonon particle" as an acoustic counterpart to a photon particle. A "phonon particle" would thus be a form of elementary particle as "sound particle" and "carrier of sound (force)".
We would then view the sound produced by the string instrument as consisting of a two-way flow of phonons between string and soundboard and between soundboard and surrounding air. In this model the sound of the surrounding air would send phonons to the soundboard, which would send phonons to the string. This would be in conflict with our experience, reflecting the 2nd law, that it is the string that makes the soundboard vibrate, which makes the sound wave in the air.
We understand that a phonon particle model of a string instrument is non-physical as violation of the 2nd law and thus misleading.
In the same way an IR photon particle model of infrared radiative heat transfer is non-physical as a violation of the 2nd law and thus misleading. Yet this model underlies the idea of "backradiation" from the cold atmosphere to the warmer Earth surface, which is a central part of CO2 alarmism.
Such a photon theory postulating heat transfer by photon particles without mass, charge, color, odor or taste, can be compared with the phlogiston theory postulating that in all flammable materials there is present phlogiston, a substance without color, odor, taste, or weight that is given off in burning.
Notice that because of the long wave length of infrared radiation, an IR-photon is similar to a phonon and thus is better described as a collective wave phenomenon than as a discrete particle. Compare with the previous post on the subject.
PS There is a connection between optical phonons as large amplitude out-of-phase wave vibration of a lattice of two different atoms with different mass (as compared to acoustic small amplitude in-phase vibration), and the analysis of blackbody radiation on Computational Blackbody Radiation with incoming and outgoing radiation out-of-phase (also characteristic of a string instrument designed to give large amplitude output).
Thursday 21 February 2013
Summary of Big Bluff of Warming Effect of CO2
CO2 alarmism fostered by IPCC is based on a proclaimed "heat trapping" or "radiation blocking" effect of CO2 as atmospheric trace gas, causing a "radiative forcing" of 3.7 W/m2 upon doubling of the preindustrial concentration to 600 ppm from today's 390 ppm.
The evidence of substantial "radiative forcing" of CO2 presented by IPCC consists of spectra of outgoing longwave radiation OLR and downwelling longwave radiation DLR produced by instruments specially designed to measure DLR and OLR.
In a recent sequence of posts on DLR and OLR as parts of Big Bluff, I have given evidence that
• DLR-measurement is based on a formula without physical reality.
• OLR-measurement shows non-physical strong emissivity of CO2 in the 600 - 800 band.
The main argument of CO2 alarmism, a proclaimed warming effect of CO2 supposedly supported by measurement of DLR and OLR, thus evaporates upon inspection into fabricated evidence without real physics, only fictional invented physics.
In this fabrication governmental institutions use instruments and software designed by commercial companies according to governmental specifications, to fabricate instrumental evidence of non-existing physics, such as DLR. The scale of this scientific fraud is unprecedented, with the DLR-Pyrgeometer Formula as a glaring example of unphysical fabricated physics.
Of course you may argue that because the fraud is unprecedented, it simply cannot be true that it is fabricated evidence, and thus it must be correct science. But this is not a scientific argument: The fact that the scale of the (possibly) fabricated evidence is beyond comparison, does not make it true.
String Instrument as Model of Blackbody Radiation 2
Connecting to an earlier post we consider, as a model of radiation from a gas, a string instrument consisting of a vibrating string spanned over bridges on a soundboard with
• soundboard = radiating gas
• forcing of soundboard by vibrating string through bridges = incoming radiation
• sound waves generated by vibrating soundboard = outgoing radiation.
A string instrument has the following properties illustrating basic aspects of radiation from a gas:
1. string is plucked into vibration
2. string vibration over a sequence of frequencies (harmonics) = incoming radiation
3. soundboard vibration in resonance with string vibration through bridges
4. soundboard vibration generates sound waves in surrounding air = outgoing radiation.
We note in particular the following aspects of the soundboard = radiating gas:
• passive: when picking up the string vibration by resonance = incoming radiation
• active: when generating sound waves = outgoing radiation.
Together these give the soundboard the role of a passive mediator of string vibrations into sound waves, for selected frequencies. The flow of energy is from string to soundboard to surrounding air, without "backradiation" with the soundboard feeding energy back into the string.
A radiating gas of low temperature like the atmosphere has a similar passive function of transferring heat energy by infrared radiation from the Earth surface into outer space, for selected resonance frequencies of "greenhouse gases". The heat transfer is one-way from a warm Earth surface into outer space passively mediated by atmospheric "greenhouse gases", without "backradiation" from cold atmosphere to warm Earth surface.
Recall that CO2 alarmism is based on downwelling longwave radiation DLR as "backradiation" from the atmosphere to the Earth surface, measured by special DLR-meters fabricated according to a formula defining DLR in terms of temperature. But the formula is non-physical and the measured DLR has no physical reality.
The above discussion describes a periodic state with the string continuously being fed energy so that a sustained sound is generated. The dynamics of the model includes momentary input of energy into the string by plucking followed by transfer of energy from the string into the soundboard followed by transfer of energy into sound waves. The analog dynamics of radiation can be described as follows:
• start-up with the string and soundboard at rest: Earth surface and atmosphere at 0 K
• plucking of string: Earth surface is heated (during daytime by the Sun)
• vibrating string makes soundboard vibrate by resonance: Earth surface heats atmosphere by radiation
• vibrating soundboard generates sound waves: atmosphere radiates to outer space
• string loses energy without renewed plucking = Earth surface is cooling during night.
A good instrument is made so that the string energy is transferred into the soundboard at a rate giving the plucked note a sustain of proper duration, which is in principle controlled by the masses of the string and soundboard, the tension of the string and the stiffness of the soundboard.
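A toy version of this one-way cascade from string to soundboard to air; the rate constants are made up for illustration and play the role of the sustain parameters just mentioned:

```python
# One-way cascade string -> soundboard -> air after a single plucking.
dt = 0.01
string, board, air = 1.0, 0.0, 0.0
for step in range(5000):
    to_board = 0.5 * string   # string feeds the soundboard (sets the sustain)
    to_air = 2.0 * board      # soundboard radiates sound into the air
    string -= dt * to_board
    board += dt * (to_board - to_air)
    air += dt * to_air
print(round(string, 3), round(board, 3), round(air, 3))
# Total energy is conserved and ends up in the air; nothing flows back.
```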
Wednesday 20 February 2013
Illuminating Model of Global Energy Balance
The radiative heat transfer from the Earth surface to outer space via the atmosphere can for resonant frequencies of the atmosphere be illustrated in a simple water flow model for two connected containers, with container 2 representing the Earth surface with the water level H_2 representing temperature, and container 1 representing the atmosphere of height/temperature H_1 < H_2 with an outlet representing outer space.
If the channel connecting 2 with 1 has the same dimension as the outlet of 1, and the channel flow Q is proportional to the level difference, we have by conservation of water, normalizing the constant of proportionality to one:
• Q = H_2 - H_1
• Q = H_1
and thus Q = 0.5 x H_2. We compare with the situation with container 2 directly pouring into outer space (no atmosphere) which would give the double outlet flow 2 Q = H_2, as illustrated on top right. This would correspond to non-resonant frequencies for which the atmosphere is transparent.
Introducing 1 (atmosphere) between 2 (Earth surface) and outer space thus reduces the flow by a factor 2, the reduction coming from requiring the water to pass two channels instead of one.
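In code, the steady state is just the solution of the two balance equations above (a trivial sketch, levels in arbitrary units):

```python
# Steady state of the two-container model: Q = H2 - H1 and Q = H1 give
# H1 = H2/2 and Q = H2/2; without the intermediate container, 2Q = H2.
H2 = 1.0                  # Earth-surface level (temperature analog)
H1 = H2 / 2.0             # from H2 - H1 = H1
Q_via_atmosphere = H1     # = 0.5 * H2
Q_direct = H2             # container 2 pouring straight into outer space
print(Q_via_atmosphere, Q_direct)   # the intermediate container halves the flow
```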
The model exhibits the following fundamental aspects of heat transfer in the Earth-atmosphere system:
• One water flow from high (2) to low level (1): One-way heat transfer from warm Earth surface to colder atmosphere: No "back radiation".
• 1 as passive mediator between 2 and outer space reduces the flow: Decrease of outgoing long wave radiation OLR for resonant frequencies of the atmosphere. No decrease for non-resonant frequencies.
• 1 is a passive mediator in the sense that whatever it absorbs from 2 is emitted into outer space.
The total reduction of OLR or "radiative forcing" caused by radiation through the atmosphere, is then determined by the denseness of the resonant frequencies of the atmosphere. The trace gas CO2 has a very sparse spectrum which gives small "radiative forcing" (small emissivity), as shown in previous posts.
In summary, the model shows:
1. Warming effect of the atmosphere, acting as a passive intermediate "blanket" between the Earth surface and pure space, for resonant frequencies.
2. Non-warming effect for non-resonant frequencies.
3. Total warming effect dependent on denseness of resonant frequencies.
4. One-way flow of heat energy from warm to cold.
5. Not two-way flow of heat energy carried by "photons" traveling back and forth.
For a new approach to radiative heat transfer connecting to this post, see Computational Blackbody Radiation.
CO2 alarmism is based on the hypothesis that the spectrum of the atmosphere with CO2 as a trace gas is dense in the entire wave number band 600 - 800, with a suggested "radiative forcing" of 3.7 W/m2 upon doubling of the concentration from preindustrial level to 600 ppm.
But the spectrum of CO2 as atmospheric trace gas is not dense but very sparse except in the narrow band 667 - 669, and thus the basis of "radiative forcing" of 3.7 W/m2 appears to be grossly incorrect, probably a factor 10 too big. Without this "radiative forcing" of 3.7 W/m2 CO2 alarmism collapses.
Tuesday 19 February 2013
Radiation of Solid vs Gas
A solid like a glowing lump of iron shows a continuous radiation emission spectrum in accordance with Planck's radiation law, while a gas shows an emission/absorption line spectrum with resonances at specific wave lengths, as illustrated above. What is then the difference between a solid and a gas, which generates different spectra?
The analysis presented on Computational Blackbody Radiation suggests the following answer: A solid can be modeled as a continuous web or string of atoms which by collective vibration can generate a full sequence of harmonics with frequencies n ranging over the natural numbers n = 1, 2, 3, ..., with higher frequencies like 10000, 10001, 10002, ... practically generating a continuum.
The acoustic analog is a vibrating guitar string capable of generating all harmonics corresponding to n = 1, 2, 3, ..., because it can macroscopically be viewed as a continuum governed by a wave equation over a continuum of real numbers.
In this perspective a gas would instead be modeled as a finite collection of oscillators, each oscillator with a specific resonance frequency, thus with a discrete line spectrum. The coupling between molecules in a solid allowing collective coordinated vibration generating a continuous spectrum would thus be missing in a gas, with the effect that the gas spectrum would be restricted to a discrete set of molecular resonances.
In short: The strong coupling of atoms in a solid allows collective coordinated vibration over a continuum of resonances, while the free flying atoms of a gas can only sustain discrete atomic or molecular resonances. In general the total emissivity of a solid is big and of a gas small.
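The contrast can be made quantitative with the same Planck weight R(n,T) = gamma T n^2 used in earlier posts; gamma is set to 1, and the three "gas lines" are placed near the CO2 resonance purely for illustration:

```python
# Dense spectrum (solid: all harmonics up to cut-off 4T) versus sparse line
# spectrum (gas: a few resonances), summed with the weight T*n^2.
T = 300
solid = sum(T * n**2 for n in range(1, 4 * T))   # quasi-continuum of harmonics
gas = sum(T * n**2 for n in (667, 668, 669))     # a few CO2-like lines
print(gas / solid)   # ~0.002: the sparse gas spectrum emits a tiny fraction
```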
For perspective, recall in particular the previous post on radiation and radiative heat transfer as a resonance phenomenon rather than an exchange of energy-carrying photons.
Low Emissivity of Atmospheric CO2: Hottel and Leckner
PS. More of the same here.
Monday 18 February 2013
Modtran: High Emissivity of 1 ppm CO2!
The uchicago modtran solver produces the following outgoing long wave radiation OLR spectra for a dry atmosphere with varying concentrations of CO2: 0 ppm, 1 ppm, 400 ppm and 600 ppm:
We see an effect of reducing OLR from 379 W/m2 to 373 W/m2 by adding 1 ppm CO2 to a carbon free atmosphere: Thus a warming effect of 6 W/m2 by adding 0.0001% CO2 to a dry atmosphere!!
We see a warming effect of about 2 W/m2 by increasing CO2 to 600 ppm from the present 400 ppm.
We see the effect of 6 W/m2 from the ditch in the spectrum (second graph from top) centered at the main resonance of CO2 at wave number 667, developing by adding just 1 ppm of CO2!
We thus see a very big effect from a very small cause, which directly triggers sound scientific skepticism: If one grain of salt can change the world, then either 10 grains will end it, or the effect quickly saturates, maybe to no effect from adding more. Since the first option is absurd, only the second is thinkable, and this is the IPCC logarithmic saturation effect: 6 W/m2 from 0 to 1 ppm, and 2 W/m2 from 400 to 600 ppm.
We compare with the absorption spectrum for 1000 m of 1 ppm CO2 computed by spectralcalc showing extreme sparsity of the absorption away from the narrow interval 667-669:
We thus find good reason to question the spectra produced by Modtran which serve as the main scientific evidence of a warming effect of CO2.
Sunday 17 February 2013
Low Emissivity of Atmospheric CO2
Recent posts suggest a low emissivity of atmospheric CO2 away from its main resonance at wave number 667. To check, let us compute the transmittance of a transparent atmosphere at 225 K after adding 400 ppm CO2 over a distance of 100 m at a pressure of 200 mb, using the free version of the commercial software SpectralCalc, to get the following spectrum with a blowup around 667:
We see that the transmittance is zero in the narrow interval 664 - 668, where 400 ppm CO2 makes the atmosphere opaque, while outside this interval the transmittance is low only in a small portion of the spectrum. In other words, the total emissivity of 400 ppm CO2 is very small which means that CO2 has a very limited capability to "block radiation" from the Earth surface, thus contradicting typical OLR spectra with full blocking in the interval 600 - 800.
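For intuition about what such transmittance spectra compute, here is a hedged Beer-Lambert toy with a single made-up Lorentz line standing in for the Hitran line data; all parameters are illustrative, not fitted to CO2:

```python
import numpy as np

# Transmittance tau(n) = exp(-k(n) * u), with k(n) a toy Lorentz line at the
# CO2 resonance 667 and u lumping together concentration, pressure and path.
def lorentz(n, n0=667.0, hw=0.1):
    return hw / (np.pi * ((n - n0)**2 + hw**2))

n = np.linspace(661.0, 673.0, 7)
tau = np.exp(-5.0 * lorentz(n))
for ni, ti in zip(n, tau):
    print(f"n = {ni:.0f}: transmittance {ti:.3f}")
# ~0 right at 667, ~1 a few wave numbers away: opaque only near the resonance
```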
Further, changing to 600 ppm gives almost the same transmittance spectrum, signaling small radiative forcing from doubling of the preindustrial concentration of CO2:
PS Further evidence is given by Ed Caryl.
String Instrument as Model of Blackbody Radiation
A string instrument like a guitar or piano offers a conceptual model of blackbody radiation which can help to remove the mystery surrounding this phenomenon. The sound of a string instrument is generated by plucking strings in contact through bridges with a soundboard which generates sound waves in the surrounding air. The basic mathematical model takes the form:
• wave equation for soundboard + acoustic damping force = string force,
where the acoustic damping force models the sound force output from the instrument and the string force is the force on the soundboard transmitted from a plucked string through bridges.
The analysis presented on Computational Blackbody Radiation shows the following fundamental relation as a consequence of resonance between sound board and string:
• output sound energy = string energy
which is to be compared with a case of non-resonance:
• output sound energy << string energy.
We see that a soundboard in resonance with a string transmits the full string plucking energy into output sound energy, while in the case on non-resonance only a small fraction is transmitted, see PS below for some more details.
In blackbody radiation this phenomenon comes out as high emissivity in the case of resonance and low emissivity in the case of non-resonance.
For example, CO2 has a main resonance at wave number 667, which gives high emissivity for wave numbers close to 667 independent of concentration, but low emissivity away from 667.
CO2 alarmism is based on high emissivity of atmospheric CO2 in the whole wave number band 600 - 800, which however most likely is an incorrect assumption.
PS The analysis on Computational Blackbody Radiation exhibits a phenomenon of near-resonance under small acoustic damping with the string force being in-phase with the soundboard displacement (and thus out-of-phase with the soundboard velocity), as the key to a good instrument with string and soundboard working together to produce a good sound.
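A toy computation of this resonance contrast, with one forced, damped oscillator as the simplest stand-in for the soundboard (d, w0 and the forcing frequencies are illustrative):

```python
import numpy as np

# Mean output (dissipated) power of u'' + d*u' + w0^2*u = cos(w*t) in steady
# state: amplitude A = 1/sqrt((w0^2 - w^2)^2 + (d*w)^2), power = d*(A*w)^2/2.
def output_power(w, w0=1.0, d=0.01):
    A = 1.0 / np.sqrt((w0**2 - w**2)**2 + (d * w)**2)
    return 0.5 * d * (A * w)**2

print(output_power(1.0))   # at resonance: large transfer (~50 here)
print(output_power(2.0))   # off resonance: tiny transfer (~0.002)
```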
Saturday 16 February 2013
Does the Mathematics Commission Need KTH's President?
Investor CEO Börje Ekholm, Microsoft CEO Per Adolfsson, Tobias Krantz, head of education at Svenskt Näringsliv, and KTH President Peter Gudmundson, among others, announce on Brännpunkt in SvD (15/2) that Sweden needs the Mathematics Commission (Mattekommissionen):
• We believe that Swedish students' mathematics skills are a decisive question for Sweden's growth and our future possibilities to become a world-class knowledge nation. We are therefore starting the Mathematics Commission, a broad collaborative initiative with eleven representatives from education, research and industry, with the goal of raising all students' proficiency in and interest in mathematics.
• Through collaboration, advocacy and concrete activities, the Mathematics Commission wants to achieve the following three main goals:
• Strengthen Swedish students' mathematical proficiency and increase interest in mathematics-intensive educations, so that Sweden can live up to the EU agreement on increased admission to science and engineering studies.
• Raise the minimum level of Swedish students in mathematics; failing to reach the basic knowledge requirements in mathematics limits the individual's possibilities in working life and private life.
• Strengthen the top layer of knowledge so that high-performing students get the opportunity to develop their interest in mathematics and science.
• The Government's investment in Matematiklyftet is not enough to remedy the great challenges we face. Much remains to be done to reach an acceptable level as regards teacher education, opportunities for professional development, and the teaching of mathematics, so that students get a more creative, contextualized and hands-on learning environment.
It is the same KTH president who in the fall of 2010, by means of a media campaign, shot down Mathematical Simulation Technology (MST) in the middle of an ongoing test course at KTH, described as KTH-gate, and thereby stopped all attempts to reform a mathematics education at KTH, and in Sweden, that has stiffened into unfruitful forms.
It is the same KTH president who in 2010 issued a total ban on using MST at KTH, because MST offered "students a more creative, contextualized and hands-on learning environment", which threatened the status quo, and who repeated this edict in the fall of 2012.
Students and industry need a modern, reformed mathematics education, but KTH delivers an outdated education and works against reform. KTH acts as a brake on the renewal of the content and form of mathematics teaching that would be possible if mathematics were coupled with IT, and that would benefit Sweden.
So what does KTH's president want to contribute to the Mathematics Commission?
Friday 15 February 2013
Fabricated Evidence of GHE from CO2
The blogosphere offers many attempts to explain the CO2 greenhouse effect GHE, since it is not well explained in the scientific literature and it is the basis of CO2 alarmism. We find on Barrett Bellamy Climate, The GH effect of CO2, the following outgoing long wave emission OLR spectra with varying concentrations of CO2 as a trace gas (from 0 to 1000 ppm, with 390 the present level), computed by Modtran:
Figure 2A: Portions of the emission spectra of the atmosphere with varying concentrations of CO2 (in ppmv as indicated in each portion). The portions of Planck curves are for comparative temperatures; from the top downwards the curves are appropriate for the temperatures 300 K, 280 K, 260 K, 240 K and 220 K. The horizontal axis gives the wavenumbers in cm-1.
We see the ditch around the main resonance at 667 widening as the concentration of CO2 increases. We see that the effect of going from the present 390 to a doubling of the preindustrial level to 600 is barely noticeable as a slight widening of the ditch. Barrett Bellamy offers the further remarkable information about the total emissions:
• The spectral portions show only the emissions from water vapour and CO2 when it is present. Consider the emission when CO2 is absent and assume that the global mean temperature is 280 K (7°C). The radiance to space is estimated to be 286.2 W/m2, considerably greater than the value required for radiative balance (235 W/m2).
• Adding just 1 ppmv of CO2 produces a noticeable effect and the Q branch of the spectrum is particularly obvious. The estimated radiance to space is 281.7 W/m2, a reduction of 4.5 W/m2. Such an atmosphere would be radiating less energy to space and the system as a whole would be warmer. Even 1 ppmv of CO2 has a warming effect!
We read that even 1 ppm (0.0001%) of CO2 added to a carbon free atmosphere would have a warming effect or "radiative forcing" of 4.5 W/m2. Amazing!
We read that the total effect of the present 390 ppm of CO2 is about 50 W/m2, about 20% of the total forcing from the Sun. Remarkable. (It is stated that Modtran with 390 ppm gives OLR of 258.7 W/m2 to be compared with 235 for radiative balance, suggesting that something is wrong).
Both 4.5 W/m2 for 1 ppm and 50 (or 28) W/m2 for 390 ppm signal a big effect of CO2 as a trace gas, and thus serve as the chief scientific evidence of the existence of a GHE from CO2. Not surprisingly, it is also claimed that doubling to 600 ppm would give a radiative forcing of about 4 W/m2, which fits with the canon defined by IPCC.
But is the evidence credible? Well, the numbers are computed by the commercial software Modtran marketed by Spectral Sciences Incorporated. The numbers are not supported by direct observation. The numbers are surprisingly big and against all physics intuition about the possible effect of a trace gas: 4.5 W/m2 from 1 ppm simply seems impossible!
In a sequence of posts on OLR I have questioned the ability of a small presence of CO2 to block the radiation from the Earth surface in the entire interval 600 - 800 represented by the ditch. The analysis of blackbody radiation presented on Computational Blackbody Radiation suggests that CO2 even as a trace gas can absorb and emit radiation in a narrow band around its main resonance at 667, but that the emissivity is small away from 667. The analysis thus gives mathematical support to the intuitive conviction that 1 ppm of CO2 cannot cause a radiative forcing of 4.5 W/m2.
We are thus led to suspect that Modtran does not give a correct description of atmospheric radiation, and therefore that the main evidence of a GHE from CO2 is fabricated incorrect evidence.
A similar attempt to justify a GHE from CO2 is given on The Science of Doom. It is remarkable that the most serious attempts to prove GHE are those made by amateur bloggers.
It is also remarkable that virtually nobody seems to be willing to question the evidence of GHE supplied by Modtran, as if Modtran cannot be questioned. Maybe it is like Coca Cola, which with its secret recipe cannot be questioned.
Wednesday 13 February 2013
Mathematics Reform Initialized in Estonia
WSJ reports on reform of mathematics education combining the power of the human brain and the computer:
• Schoolchildren in 30 schools in Estonia may be able to escape this misery, as they will shortly be embarking on a pilot for a new way of learning mathematics, computer-based mathematics, that reduces the emphasis on computation — doing sums — and increases the emphasis on understanding the uses of mathematics in real-world examples (“should I insure my mobile, how long will I live, or what makes a beautiful shape”).
• Estonia has been at the forefront of education reform. Last year it rolled out a trial to teach children from as young as seven robotics and how to program.
This is a signal to launch Mathematical Simulation Technology.
The "Hockey Stick" of the OLR Spectrum
As CO2 global warming alarmism is now losing credibility after 15 years of stationary temperatures and emerging fear of a coming ice age, it becomes possible for the first time to scrutinize the core scientific evidence of the warming effect of CO2, which has been accepted by leading skeptics including Lindzen, Spencer, Singer and WUWT, namely the ditch between wave numbers 600 and 800 in the outgoing long wave radiation OLR spectrum produced by the IRIS and AIRS infrared spectrometers carried by satellites:
It is difficult to question evidence of this form, which bears the sign of hard physics as precise numbers produced by elaborate expensive instrumentation, because it requires knowledge of both instrument and processing of directly measured data, which both can be hidden in difficult technicalities.
Therefore the ditch in the spectrum, interpreted as a warming effect or "radiative forcing" from atmospheric CO2, has served CO2 alarmism as an "undeniable scientific fact" which cannot be questioned, with a warming effect of about 1 C upon doubling of atmospheric CO2. To refer to the spectrum has come to signify a deeper insight, carried by both alarmists and skeptics, hidden to ordinary people not used to reading spectra.
To question this "undeniable scientific fact" makes you into a "denier" destined to dwell on one of the lowest levels of Dante's Purgatorium.
In any case I have done so in a sequence of posts on OLR and I have come to the conclusion that the ditch in the spectrum attributed to CO2 is a misrepresentation of reality, or fabrication of fake evidence, similar to that of the "hockey stick", which started the fall of global warming alarmism.
I hope that skeptics are now ready to question the OLR spectrum as the key evidence of CO2 warming with the same ardor as in the case of the hockey stick. Physicists have a special responsibility, because the OLR spectrum is physics and not climate science.
But to be honest, very few seem to be interested in discussing the OLR spectrum, as if it were given once and for all by some superhuman intellect and thus beyond human understanding and scrutiny. But it was fabricated by people like you and me, and since it is the very basis of CO2 alarmism, maybe some day someone will pick up the thread. Since the first "hockey stick" attracted so much attention, maybe this new "hockey stick", if it is a "hockey stick", will deserve some as well.
Anyway, here are the key questions:
• How was the OLR spectrum produced?
• What was directly measured by the sensors, and what was computed in post processing?
• Does the spectrum describe reality with radiation "blocked" by CO2?
These are precise scientific questions which can be answered by using basic physics and mathematics, if only there is an interest in doing so. Alarmists are not interested but skeptics should be.
Maybe Fred Singer will then find reason to reconsider his message:
Singer is convinced that "CO2 certainly is a greenhouse gas", probably because he takes for granted that the OLR spectrum is correct science. Singer is a physicist and should be able upon close inspection to tell if it is or not.
If it is impossible to detect that "CO2 is a greenhouse gas", then it would be incorrect physics to declare that in any case "CO2 is a greenhouse gas". If ghosts cannot be detected, then it is not correct physics to nevertheless declare that "there certainly are ghosts" but they are "so small we cannot detect them". Right Fred?
PS1 WUWT reports:
Apparently, Luetkemeyer has read the OLR spectrum and understood that the science is dubious...
PS2 Fred does not seem to be willing to answer my question about the reality of the OLR spectrum, but I think the question asks for an answer.
Tuesday 12 February 2013
OLR Spectra Decoded as Fake!?
Consider the following outgoing long wave radiation OLR spectrum delivered by the Atmospheric Infrared Sounder AIRS flying on the Aqua satellite:
The graphs show the brightness temperature as a function of wave number, with the brightness temperature being the temperature of a blackbody with the same radiance at a given wave number as that recorded by the spectrometer (in principle bolometer) sensor, with in particular a zoom of the wave number interval 645 - 685 containing the main resonance at 667 of CO2.
We see a low brightness temperature of about 220 K and a peak at the main resonance at 667 of 250 K. Both these brightness temperatures are lower than the temperature of 295 K in the atmospheric window between 800 and 1200, as if the sensor has recorded the presence of CO2 both at the tropopause (220 K) and in the middle of the troposphere (250 K).
We understand that a bolometer sensor measures radiance calibrated to blackbody radiance and thus cannot distinguish between low emissivity/high temperature and high emissivity/low temperature. This means that the assignment of brightness temperature is influenced by an unknown emissivity, which explains why the assigned brightness temperature is high at the main resonance 667 for which the emissivity is high, and low in the weak resonances surrounding the main resonance for which the emissivity is low. But it does not make sense that CO2 radiates from different temperatures for different frequencies, because all frequencies are assumed to have the same temperature.
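A small sketch of this ambiguity, inverting the Planck radiance per wave number to a brightness temperature; the emissivity 0.3 and true temperature 288 K are made-up inputs, not AIRS data:

```python
import numpy as np

# A bolometer records radiance L = eps * B(n, T) but reports the brightness
# temperature Tb solving B(n, Tb) = L: low emissivity reads as low temperature.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(n, T):                    # radiance per wave number, n in 1/m
    return 2 * h * c**2 * n**3 / np.expm1(h * c * n / (k * T))

def brightness_temperature(n, L):
    return h * c * n / (k * np.log(2 * h * c**2 * n**3 / L + 1))

n = 667e2                            # 667 cm^-1 expressed in 1/m
L = 0.3 * planck(n, 288.0)           # emissivity 0.3 at true temperature 288 K
print(brightness_temperature(n, L))  # ~213 K: reads as a much colder blackbody
```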
The spectrum constructed is thus an artificial spectrum reflecting the sensitivity of the bolometer, which may be chosen so that resonances of CO2 are picked up before the continuous spectrum from the Earth surface, and then are assigned brightness temperatures according to radiance, as shown above. The weak resonances with low total emissivity of CO2 away from 667, are then assigned a low brightness temperature (220 K) at full emissivity, as if all of the radiation from the Earth surface was blocked in the whole interval 600 - 800.
The OLR spectrum delivered by AIRS is thus an artificial spectrum constructed so as to hide that away from the main resonance 667, CO2 has small emissivity and thus cannot block all of the radiation from the Earth surface (even under doubled concentration from preindustrial level).
OLR spectra delivered by AIRS (and IRIS) are viewed as the key evidence of "heat trapping" or "radiation blocking" by atmospheric CO2. If these OLR spectra turn out to be fakes misrepresenting physics, the main scientific argument of CO2 alarmism evaporates. So what does true science tell us: fake or not fake?
PS To help discussion, recall the sparseness of the CO2 spectrum around 667 as pictured in the previous post.
Sunday 10 February 2013
Model of Atmosphere with CO2 shows Small Emissivity
Computed transmittance of atmosphere with CO2 with main resonance at wave number 667. Notice that the atmosphere is effectively transparent, except in a small interval around 667. The effect of CO2 is thus small.
Here is another argument indicating that the effect of the atmospheric trace gas CO2 on the radiation balance of the Earth is small.
We recall the model of blackbody radiation studied on Computational Blackbody Radiation as a collection of oscillators with small damping with equal oscillator internal energy T representing temperature, with oscillator resonance frequencies n varying from 1 to a cut-off set at T and each oscillator radiating
• E_n = gamma T n^2
where gamma is a universal constant, which is Planck's law. Summing over n from 1 to T, we obtain the total radiance
• E = sum_n gamma T n^2 = sigma T^4
which is Stefan-Boltzmann's law with sigma = gamma/3. In the case of only one resonance frequency n = T, the radiance would be reduced to
• e = gamma T T^2 = gamma T^3 ~ E/T
with the reduction factor 1/T.
The radiance of an atmosphere which is fully opaque over the entire spectrum would radiate E, while an atmosphere opaque only for a specific frequency near cut-off T, would radiate e ~ E/T with a reduction factor 1/T.
We conclude that the emissivity of transparent atmosphere with a trace gas like CO2 with only a few isolated resonances, would scale like 1/T and thus be small as soon as T is bigger than say 100 K.
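The sums above are easy to check numerically (gamma = 1, cut-off at T as in the model):

```python
# Total radiance E = T * sum(n^2, n = 1..T) versus a single resonance at the
# cut-off, e = T * T^2; the ratio e/E is of order 1/T as claimed.
T = 300
E = T * sum(n**2 for n in range(1, T + 1))   # ~ T**4 / 3, i.e. sigma = gamma/3
e = T * T**2                                  # one opaque frequency at n = T
print(E / (T**4 / 3))    # ~1.005: confirms the Stefan-Boltzmann scaling
print(e / E)             # ~0.01 = 3/T: small emissivity for a line spectrum
```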
We thus find theoretical evidence from a basic model that the emissivity of the Earth's atmosphere with the trace gas CO2 would be small, and thus that CO2 would have little effect on the Earth's radiation balance.
Note that the sparseness of CO2 as a trace gas gets expressed as a sparseness of the absorption spectrum, rather than as a small mass fraction, because of the universality of blackbody radiation as being independent of the mass of the oscillators.
Thursday, 7 February 2013
Story of Vanishing Evidence of CO2 Warming
In the late 20th century, as the Cold War was coming to an end, the fear of nuclear war was replaced by a fear of anthropogenic global warming caused by emissions of CO2 from the burning of fossil fuels.
Governments united under the UN gave massive support to science for producing evidence of global warming by CO2, evidence which governments could use to motivate a special tax on CO2, which would lead society into a new world without CO2 emissions, with lots of tax money to be spent by politicians.
Scientists started, with great enthusiasm stimulated by generous government grants, searching for evidence of a warming effect of CO2 in the form of a "greenhouse effect", with CO2 named a "greenhouse gas" because of its capability of absorbing and emitting infrared radiation at isolated specific frequencies, owing to its unsymmetric molecular structure, as observed in an early study from 1896 by the Swedish Nobel Laureate Svante Arrhenius.
But finding evidence was difficult, because CO2 is an atmospheric trace gas (now 390 ppm) and it is always difficult to find evidence of a big effect from a small cause, since this can only happen in unstable systems, and such systems lack the permanence that would allow study.
An idea which developed was that a doubling of CO2 would cause a "radiative forcing" of an extra 2 - 4 W/m2 to be added to the 240 W/m2 received from the Sun, which was connected to a global warming of 1 C. Of course 1 C would be barely noticeable, but that was the best that could be squeezed out of CO2 alone; with feedback from the system under "radiative forcing" from doubled CO2, however, the warming could be inflated to 3 C, which would be enough to motivate a tax on CO2 to save the world.
The whole story of global warming by CO2 then rested on finding evidence of "radiative forcing" of 2 - 4 W/m2 from doubled CO2 and scientists were ordered to construct instruments for recording radiation spectra, which could show this effect.
But this turned out to be virtually impossible, because 2 - 4 W/m2 was too small to be measured: It required an accuracy below 1% which was impossible to achieve in practice. The difficulty of a big effect from a small cause showed its real face.
To counter this "small cause - big effect" syndrome, new physics of "back radiation" was invented showing a warming of 300 W/m2 of the Earth surface from the colder atmosphere as a big cause, but the new physics violated the 2nd law of thermodynamics and thus belonged to fiction.
What then remained of scientific support of CO2 alarmism based on observation, was a recorded global warming 1970 - 1998 of 0.5 C, which was connected to an increase of atmospheric CO2 from 330 to 370 ppm during the same time.
But this connection disappeared during 1998 - 2013, which went by without global warming, while CO2 continued to increase to 390 ppm.
The evidence of global warming by CO2 has thus today crumbled to nil, which presents a difficult case for the new report by the IPCC, the UN body in charge of CO2 alarmism, to be presented in September. Without scientific support, a CO2 tax cannot be motivated, and the whole BIG BLUFF project of unprecedented dimensions collapses.
... and All the King's Horses and all the King's Men Couldn't put Humpty together again...
This is a summary of a study I was led into starting in 2009, which I have reported on this blog.
Radiative Heat Transfer as Resonance Phenomenon
The analysis of blackbody radiation exposed on Computational Blackbody Radiation suggests that radiative heat transfer is a phenomenon of near-resonance between bodies communicating through electromagnetic waves, combined with a phenomenon of high-frequency cut-off, which effectively leads to one-way heat transfer from warm to cold.
Viewing radiative heat transfer this way removes the non-physical aspects which appear when viewing radiative heat transfer as a two-way exchange of photon particles carrying heat energy back and forth from warm to cold and from cold to warm. The latter view is common in e.g. climate science, with in particular downwelling long wave radiation DLR from a cold atmosphere supposedly warming the Earth surface. The non-physical aspects concern the idea of infrared photons and the violation of the 2nd law in heat transfer from cold to warm.
The model analyzed on Computational Blackbody Radiation consists of a system of bodies with each body consisting of a set of oscillators subject to small radiative damping, which communicate by sharing a common force carried as an electromagnetic wave. In equilibrium the bodies share a common temperature and there is no heat transfer between the bodies.
Each body is like a radio receiver/sender communicating with the other bodies through resonance transmitted by a force carried by electromagnetic waves, thus interacting over distance by resonance.
If one body is heated (e.g. internally), then its oscillator amplitude increases, and so does the corresponding balancing force; the residual force is transmitted to the other bodies, which in resonance restore force balance, reaching a common temperature. The result is that the heated body transfers heat energy to the surrounding colder bodies, by resonance over distance.
With this view, the functioning of an infrared thermometer can be understood as a set of oscillators which by resonance assumes the same temperature as a target at distance.
Similarly, a selective infrared thermometer can be conceptualized as an oscillator with a specific resonance frequency, capable of measuring at a distance the temperature of a body with the specific resonance. It will operate like a sensitive radio receiver which can tune in on a weak sender at a specific frequency.
The Infrared Interferometer Spectrometer (IRIS) carried by the Nimbus 4 satellite can be seen as such a selective infrared thermometer, capable of measuring the temperature of the atmospheric trace gas CO2 through its main resonance at wave number 667, which produced the following spectrum supposedly demonstrating the warming effect of CO2 as the ditch around 667:
But as discussed in previous posts on emissivity, it is not at all clear that the above spectrum, constructed from measuring the temperature of the trace gas CO2, describes the emission spectrum of the Earth + atmosphere in the range of resonance of CO2. Most likely, it does not.
PS Here is a transmittance spectrum of CO2 from Scienceofdoom computed with spectralcalc, illustrating the sparseness of the absorption away from 667. It does not seem plausible that the transmittance of an O2 - N2 atmosphere with a trace of CO2 is close to zero in the whole interval 600 - 800.
Here is a close-up of computed transmittance through 1 m atmosphere with typical CO2 concentration at 0.1 bar showing the sparseness of absorption:
Wednesday, 6 February 2013
Inflated Modtran Effect of Atmospheric Trace Gases
NASA is now spending big money on projects such as CERES to find instrumental evidence of anthropogenic global warming AGW, but finds nothing. It is similar to the fruitless efforts by CIA to find evidence of weapons of mass destruction WMD in Saddam Hussein's Iraq. Is this an expression of paralyzed US politics?
LB Abandons IPCC: When Will KVA Recant?
The Katternö magazine reports that Lennart Bengtsson is well on his way to abandoning the CO2 alarmism he previously advanced as the person responsible for KVA's statement in support of IPCC:
• The temperature increase is so small that hardly anyone would have noticed it, had we meteorologists not informed the public about it.
• I would rather compare it with the medieval letters of indulgence of the Catholic Church, which were an effective way of making a frightened public pay to escape the horrors of hell. The Catholic Church of that time showed great skill here. We may be thankful that Luther managed to put a stop to this abuse, at least in our Protestant parts.
• I am not just surprised, I am astonished!
• Worrying that Antarctica is about to melt is almost on the same level as worrying that the Earth and Venus might collide within a billion years or so [which some model computations show].
Now we are just waiting for LB to rewrite KVA's statement from support of IPCC's CO2 alarmism into rejection of it. When will this come, LB? The people and government of Sweden are waiting for word so that they can move on!
Perhaps LB could even come to appreciate a person like me, and not merely consider what I write to be "gibberish", as in his earlier statement in DN? Perhaps the clock has come full circle.
CERES and Radiative Forcing
The Earth's energy budget according to CERES showing unphysical "back radiation" of 340 W/m2
An overview of the CERES project is given in CERES, a review: Past, present and Future (2011):
• The Clouds and Earth Radiant Energy System (CERES) project’s objectives are to measure the reflected solar radiance (shortwave) and Earth-emitted (longwave) radiances and from these measurements to compute the shortwave and longwave radiation fluxes at the top of the atmosphere (TOA) and the surface and radiation divergence within the atmosphere. The fluxes at TOA are to be retrieved to an accuracy of 2%.
• The first objective of CERES is to measure OLR radiances to an accuracy of 1% and reflected solar radiances to 2%. The global mean OLR flux is approximately 240 W/m2, so the requirement is 2.4 W/m2 accuracy. Likewise the global mean reflected flux is 100 W/m2, thus the requirement for shortwave flux is 2 W/m2.
We conclude that the accuracy of CERES even at the lower level of 1% for OLR, would not be good enough to detect effects of "radiative forcing" by CO2.
That the "radiative forcing" of doubled CO2 would be 3.7 W/m2 is a wild guess by IPCC without experimental support, which serves as the following cornerstone of CO2 alarmism:
Here the "easy undisputed calculation" is dQ = 4 dT, where dQ is "radiative forcing" and dT corresponding global warming as a differentiated form of Stefan-Boltzmann's Law Q = sigma T^4.
The number 3.7 or 4 W/m2 is chosen by IPCC so that the global warming comes out as 1 C according to the "easy undisputed calculation": not too big to be impossible and not too small to be negligible. The basic argument of CO2 alarmism is that with feedback 1 C can become 3 C, which is alarming.
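For reference, here is a minimal computation (not in the original post) of the "easy undisputed calculation" by differentiating Q = sigma T^4; the inputs 240 W/m2 and 3.7 W/m2 are the figures quoted above.

```python
# dQ = 4*sigma*T^3*dT = (4*Q/T)*dT, hence dT = dQ*T/(4*Q).
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
Q = 240.0                # global mean OLR, W/m^2
T = (Q / sigma) ** 0.25  # effective emission temperature, ~255 K
dQ = 3.7                 # claimed "radiative forcing" of doubled CO2, W/m^2

dT = dQ * T / (4 * Q)
print(T, dT)             # ~255 K and ~1 K of warming
```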
The whole idea of "radiative forcing" lacks a sound physical basis, since the forcing comes from the Sun and only from the Sun, and this is reflected by the fact that it cannot be discovered by instruments measuring physical phenomena; that is, it is the fiction of a BIG BLUFF.
PS CERES is selling itself e.g. on a video telling us
• When you add greenhouse gases such as CO2 and methane you change that radiation balance at the top of the atmosphere and you change the amount of outgoing radiation, so that imbalance means more energy in the system...part of it goes into the ocean...and part of it goes into actually warming the Earth...all of those things should give you a coherent picture of how things are changing as we warm the climate...
The purpose is obvious: Use CERES to show global warming by radiative forcing from CO2. The only trouble is that CERES shows nothing of this sort...all attempts to measure radiative forcing by CO2 seem to fail miserably...the scale of the BIG BLUFF with its organized governmental science support, is really impressive...
Tuesday, 5 February 2013
CERES: No Global Warming Detected
In previous posts we have decoded the BIG BLUFF of CO2 alarmism based on fabricating evidence of a global warming effect of CO2 in large scale experimental projects such as ERBE followed by CERES designed to produce measurements of radiative fluxes. While ERBE focussed on outgoing long wave radiation OLR as one-way heat flux from the Earth + atmosphere into outer space at 3 K, which is physical heat flux, CERES has lifted the ambition to include two-way heat fluxes between different parts of the atmosphere and the Earth surface, which lack physical reality:
The objective of CERES is described as follows on Daily Press Febr 14 2012:
• Clouds and the Earth's Radiant Energy System CERES is one of five earth science experiments aboard Suomi NPP, a $1.5 billion satellite NASA launched into space Oct. 28 2011 from Vandenberg Air Force Base in California.
• Managed at NASA Langley Research Center in Hampton, CERES measures the amount of sunlight that enters Earth and how much sunlight and thermal radiation is reflected back to space. The concept is known as Earth's energy budget.
• According to NASA, the sun annually provides the planet about 340 watts per square meter — roughly the energy radiated from six incandescent light bulbs. If the planet returned an equal amount of energy to space, temperatures would be constant, Loeb said.
• That is not occurring. Instead, roughly 0.8 watts per square meter stays on Earth.
• The energy is trapped by greenhouse gases, such as water vapor and carbon dioxide, that come from burning fossil fuels and other sources. Clouds also play a role; they reflect sunlight back into space and, depending on their height and thickness, prevent it from leaving the planet.
• The imbalance helps explain why global temperatures increased 1.4 degrees since last century and sea levels are rising, Loeb said.
• NASA uses computer models to summarize the images into daily and monthly reports that date back to 1985. That's when another Langley instrument, ERBE, or Earth Radiation Budget Experiment, began monitoring the planet.
• Four successive CERES instruments — the one launched in October is a fifth generation — followed, providing data used by, among others, the Intergovernmental Panel on Climate Change.
• The panel, which shared a Nobel Peace Prize with former Vice President Al Gore, wrote what many view as the definitive report on climate change.
CERES has four main objectives:
1. For climate change analysis, provide a continuation of the ERBE record of radiative fluxes at the top of the atmosphere (TOA), analyzed using the same algorithms that produced the ERBE data.
2. Double the accuracy of estimates of radiative fluxes at TOA and the Earth's surface.
3. Provide the first long-term global estimates of the radiative fluxes within the Earth's atmosphere.
4. Provide cloud property estimates that are consistent with the radiative fluxes from surface to TOA.
• scanning thermistor bolometer sensors which measure Earth-reflected and Earth-emitted filtered radiances in the broadband shortwave (0.3 μm - 5.0 μm), broadband total-wave (0.3 μm - >100 μm), and narrow-band water vapor window (8 μm - 12 μm) spectral regions.
We see that CERES is used to record radiative heat fluxes using sophisticated technology, with the ambition to discover effects of global warming by CO2 as a radiative imbalance. To the disappointment of the designers and users of CERES, including the IPCC and Al Gore, the instruments show next to nothing: an imbalance of 0.8 W/m2 out of 340 W/m2 is smaller than any thinkable measurement error.
CERES (like LHC) thus discovers nothing, and what we can now expect is a new bigger project with more sensitive instruments with doubled accuracy with the hope of finding something which is not zero as evidence of global warming by CO2.
This will not be cheap, but when you go to an expensive restaurant you expect to be offered something special, and so the high price and sophistication of the instrumentation can be seen as a guarantee that something will be discovered.
Quantum Mechanics: Wavepackets
In physics, a wave packet is an envelope or packet containing an arbitrary number of wave forms. In quantum mechanics the wave packet is ascribed a special significance: it is interpreted to be a "probability wave" describing the probability that a particle or particles in a particular state will be measured to have a given position and momentum.
By applying the Schrödinger equation in quantum mechanics it is possible to deduce the time evolution of a system, similar to the process of the Hamiltonian formalism in classical mechanics. The wave packet is a mathematical solution to the Schrödinger equation. The square of the magnitude of the wave packet solution is interpreted as the probability density of finding the particle in a region.
In the coordinate representation of the wave (such as the Cartesian coordinate system) the position of the wave is given by the position of the packet. Moreover, the narrower the spatial wave packet, and therefore the better defined the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is one example of the Heisenberg uncertainty principle.
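To illustrate the position-momentum trade-off described above, here is a minimal numerical sketch (not part of the original resource) of a Gaussian wave packet; the width and central wavenumber are arbitrary choices, and units are taken with hbar = 1.

```python
import numpy as np

# Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)) * exp(i k0 x), hbar = 1.
sigma, k0 = 1.0, 5.0
x = np.linspace(-20.0, 20.0, 4096)
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))          # normalise |psi|^2

prob = np.abs(psi)**2
dx = np.sqrt(np.trapz(x**2 * prob, x) - np.trapz(x * prob, x)**2)

# Momentum-space density from the Fourier transform of psi.
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
pk = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
pk /= np.trapz(pk, k)
dp = np.sqrt(np.trapz(k**2 * pk, k) - np.trapz(k * pk, k)**2)

print(dx, dp, dx * dp)   # dx*dp ~ 0.5 = hbar/2: a minimum-uncertainty packet
```

Narrowing sigma shrinks dx and inflates dp (and vice versa), while their product stays pinned at hbar/2 for this Gaussian shape, which is exactly the Heisenberg trade-off described above.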
• Wavepackets Description
• Homework Assignment on Wavepackets
Cite this work
Researchers should cite this work as follows:
• www.eas.asu.edu/~vasilesk
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Wavepackets," http://nanohub.org/resources/4932.
In This Series
1. Reading Material: Wavepackets
2. Homework Assignment: Wavepackets
The quantum face of gravity
Theoretical physics stands at a crossroads, and no one knows at present what lies beyond general relativity or the Standard Model. It is accepted that we can only progress with a more complete theory of quantum gravity, which would perhaps unify gravity with the other fundamental interactions of nature. Yet, after more than 40 years of an unprecedented collective intellectual effort, the approaches to quantum gravity are ever more diversified and no convergence is in sight. If we are ever to break out of this impasse, we will have to draw inspiration from Einstein's historic feats.
There is little doubt that, in spite of their overwhelming success in describing phenomena over a vast range of distances, general relativity (GR) and the Standard Model (SM) of particle physics are incomplete theories. Concerning the SM, the problem is often cast in terms of the remaining open issues in particle physics, such as its failure to account for the origin of the matter–antimatter asymmetry or the nature of dark matter. But the real problem with the SM is theoretical: it is not clear whether it makes sense at all as a theory beyond perturbation theory, and these doubts extend to the whole framework of quantum field theory (QFT) (with perturbation theory as the main tool to extract quantitative predictions). The occurrence of “ultraviolet” (UV) divergences in Feynman diagrams, and the need for an elaborate mathematical procedure called renormalisation to remove these infinities and make testable predictions order-by-order in perturbation theory, strongly point to the necessity of some other and more complete theory of elementary particles.
On the GR side, we are faced with a similar dilemma. Like the SM, GR works extremely well in its domain of applicability and has so far passed all experimental tests with flying colours, most recently and impressively with the direct detection of gravitational waves (see "General relativity at 100"). Nevertheless, the need for a theory beyond Einstein is plainly evident from the existence of space–time singularities such as those occurring inside black holes or at the moment of the Big Bang. Such singularities are an unavoidable consequence of Einstein’s equations, and the failure of GR to provide an answer calls into question the very conceptual foundations of the theory.
Unlike quantum theory, which is rooted in probability and uncertainty, GR is based on notions of smoothness and geometry and is therefore subject to classical determinism. Near a space–time singularity, however, the description of space–time as a continuum is expected to break down. Likewise, the assumption that elementary particles are point-like, a cornerstone of QFT and the reason for the occurrence of ultraviolet infinities in the SM, is expected to fail in such extreme circumstances. Applying conventional particle-physics wisdom to Einstein’s theory by quantising small fluctuations of the metric field (corresponding to gravitational waves) cannot help either, since it produces non-renormalisable infinities that undermine the predictive power of perturbatively quantised GR.
In the face of these problems, there is a wide consensus that the outstanding problems of both the SM and GR can only be overcome by a more complete and deeper theory: a theory of quantum gravity (QG) that possibly unifies gravity with the other fundamental interactions in nature. But how are we to approach this challenge?
Planck-scale physics
Unlike with quantum mechanics, whose development was driven by the need to explain observed phenomena such as the existence of spectral lines in atomic physics, nature gives us very few hints of where to look for QG effects. One main obstacle is the sheer smallness of the Planck length, of the order 10^-33 cm, which is the scale at which QG effects are expected to become visible (conversely, in terms of energy, the relevant scale is 10^19 GeV, which is 15 orders of magnitude greater than the energy range accessible to the LHC). There is no hope of ever directly measuring genuine QG effects in the laboratory: with zillions of gravitons in even the weakest burst of gravitational waves, realising the gravitational analogue of the photoelectric effect will forever remain a dream.
One can nevertheless speculate that QG might manifest itself indirectly, for instance via measurable features in the cosmic microwave background, or cumulative effects originating from a more granular or “foamy” space–time. Alternatively, perhaps a framework will emerge that provides a compelling explanation for inflation, dark energy and the origin of the universe. Although not completely hopeless, available proposals typically do not allow one to unambiguously discriminate between very different approaches, for instance when contrarian schemes like string theory and loop quantum gravity vie to explain features of the early universe. And even if evidence for new effects was found in, say, cosmic-ray physics, these might very well admit conventional explanations.
In the search for a consistent theory of QG, it therefore seems that we have no other choice but to try to emulate Einstein’s epochal feat of creating a new theory out of purely theoretical considerations.
Emulating Einstein
Yet, after more than 40 years of unprecedented collective intellectual effort, different points of view have given rise to a growing diversification of approaches to QG – with no convergence in sight. It seems that theoretical physics has arrived at crossroads, with nature remaining tight-lipped about what comes after Einstein and the SM. There is currently no evidence whatsoever for any of the numerous QG schemes that have been proposed – no signs of low-energy supersymmetry, large extra dimensions or “stringy” excitations have been seen at the LHC so far. The situation is no better for approaches that do not even attempt to make predictions that could be tested at the LHC.
Existing approaches to QG fall roughly into two categories, reflecting a basic schism that has developed in the community. One is based on the assumption that Einstein’s theory can stand on its own feet, even when confronted with quantum mechanics. This would imply that QG is nothing more than the non-perturbative quantisation of Einstein’s theory and that GR, suitably treated and eventually complemented by the SM, correctly describes the physical degrees of freedom also at the very smallest distances. The earliest incarnation of this approach goes back to the pioneering work of John Wheeler and Bryce DeWitt in the early 1960s, who derived a GR analogue of the Schrödinger equation in which the “wave function of the universe” encodes the entire information about the universe as a quantum system. Alas, the non-renormalisable infinities resurface in a different guise: the Wheeler–DeWitt equation is so ill-defined mathematically that no one until now has been able to make sense of it beyond mere heuristics. More recent variants of this approach in the framework of loop quantum gravity (LQG), spin foams and group field theory replace the space–time metric by new variables (Ashtekar variables, or holonomies and fluxes) in a renewed attempt to overcome the mathematical difficulties.
The opposite attitude is that GR is only an effective low-energy theory arising from a more fundamental Planck-scale theory, whose basic degrees of freedom are very different from GR or quantum field theory. In this view, GR and space–time itself are assumed to be emergent, much like macroscopic physics emerges from the quantum world of atoms and molecules. The perceived need to replace Einstein’s theory by some other and more fundamental theory, having led to the development of supersymmetry and supergravity, is the basic hypothesis underlying superstring theory (see "The many lives of supergravity"). Superstring theory is the leading contender for a perturbatively finite theory of QG, and widely considered the most promising possible pathway from QG to SM physics. This approach has spawned a hugely varied set of activities and produced many important ideas. Most notable among these, the AdS/CFT correspondence posits that the physics that takes place in some volume can be fully encoded in the surface bounding that volume, as for a hologram, and consequently that QG in the bulk should be equivalent to a pure quantum field theory on its boundary.
Apart from numerous technical and conceptual issues, there remain major questions for all approaches to QG. For LQG-like or “canonical” approaches, the main unsolved problems concern the emergence of classical space–time and the Einstein field equations in the semiclassical limit, and their inability to recover standard QFT results such as anomalies. On the other side, a main shortcoming is the “background dependence” of the quantisation procedure, for which both supergravity and string theory have to rely on perturbative expansions about some given space–time background geometry. In fact, in its presently known form, string theory cannot even be formulated without reference to a specific space–time background.
These fundamentally different viewpoints also offer different perspectives on how to address the non-renormalisability of Einstein’s theory, and consequently on the need (or not) for unification. Supergravity and superstring theory try to eliminate the infinities of the perturbatively quantised theory, in particular by including fermionic matter in Einstein’s theory, thus providing a raison d’être for the existence of matter in the world. They therefore automatically arrive at some kind of unification of gravity, space–time and matter. By contrast, canonical approaches attribute the ultraviolet infinities to basic deficiencies of the perturbative treatment. However, to reconcile this view with semiclassical gravity, they will have to invoke some mechanism – a version of Weinberg’s asymptotic safety – to save the theory from the abyss of non-renormalisability.
Conceptual challenges
Beyond the mathematical difficulties to formulating QG, there are a host of issues of a more conceptual nature that are shared by all approaches. Perhaps the most important concerns the very ground rules of quantum mechanics: even if we could properly define and solve the Wheeler–DeWitt equation, how are we to interpret the resulting wave function of the universe? After all, the latter pretends to describe the universe in its entirety, but in the absence of outside classical observers, the Copenhagen interpretation of quantum mechanics clearly becomes untenable. On a slightly less grand scale, there are also unresolved issues related to the possible loss of information in connection with the Hawking evaporation of black holes.
A further question that any theory of QG must eventually answer concerns the texture of space–time at the Planck scale: do there exist “space–time atoms” or, more specifically, web-like structures like spin networks and spin foams, as claimed by LQG-like approaches? (see diagram) Or does the space–time continuum get dissolved into a gas of strings and branes, as suggested by some variants of string theory, or emerge from holographic entanglement, as advocated by AdS/CFT aficionados? There is certainly no lack of enticing ideas, but without a firm guiding principle and the prospect of making a falsifiable prediction, such speculations may well end up in the nirvana of undecidable propositions and untestable expectations.
Why then consider unification? Perhaps the strongest argument in favour of unification is that the underlying principle of symmetry has so far guided the development of modern physics from Maxwell’s theory to GR all the way to Yang–Mills theories and the SM (see diagram). It is therefore reasonable to suppose that unification and symmetry may also point the way to a consistent theory of QG. This point of view is reinforced by the fact that the SM, although only a partially unified theory, does already afford glimpses of trans-Planckian physics, independently of whether new physics shows up at the LHC or not. This is because the requirements of renormalisability and vanishing gauge anomalies put very strong constraints on the particle content of the SM, which are indeed in perfect agreement with what we see in detectors. There would be no more convincing vindication of a theory of QG than its ability to predict the matter content of the world (see panel below).
In search of SUSY
Among the promising ideas that have emerged over the past decades, arguably the most beautiful and far reaching is supersymmetry. It represents a new type of symmetry that relates bosons and fermions, thus unifying forces (mediated by vector bosons) with matter (quarks and leptons), and which endows space–time with extra fermionic dimensions. Supersymmetry is very natural from the point of view of cancelling divergences because bosons and fermions generally contribute with opposite signs to loop diagrams. This aspect means that low-energy (N = 1) supersymmetry can stabilise the electroweak scale with regard to the Planck scale, thereby alleviating the so-called hierarchy problem via the cancellation of quadratic divergences. These models predict the existence of a mirror world of superpartners that differ from the SM particles only by their opposite statistics (and their mass), but otherwise have identical internal quantum numbers.
Then again, perhaps supersymmetry is not the end of the story. There is plenty of evidence that another type of symmetry may be equally important, namely duality symmetry. The first example of such a symmetry, electromagnetic duality, was discovered by Dirac in 1931. He realised that Maxwell’s equations in vacuum are invariant under rotations of the electric and magnetic fields into one another – an insight that led him to predict the existence of magnetic monopoles. While magnetic monopoles have not been seen, duality symmetries have turned out to be ubiquitous in supergravity and string theory, and they also reveal a fascinating and unsuspected link with the so-called exceptional Lie groups.
More recently, hints of an enormous symmetry enhancement have also appeared in a completely different place, namely the study of cosmological solutions of Einstein’s equations near a space-like singularity. This mathematical analysis has revealed tantalising evidence of a truly exceptional infinite-dimensional duality symmetry, which goes by the name of E10, and which “opens up” as one gets close to the cosmological (Big Bang) singularity (see image at top). Could it be that the near-singularity limit can tell us about the underlying symmetries of QG in a similar way as the high-energy limit of gauge theories informs us about the symmetries of the SM? One can validly argue that this huge and monstrously complex symmetry knows everything about maximal supersymmetry and the finite-dimensional dualities identified so far. Equally important, and unlike conventional supersymmetry, E10 may continue to make sense in the Planck regime where conventional notions of space and time are expected to break down. For this reason, duality symmetry could even supersede supersymmetry as a unifying principle.
Outstanding questions
Our summary, then, is very simple: all of the important questions in QG remain wide open, despite a great deal of effort and numerous promising ideas. In the light of this conclusion, the LHC will continue to play a crucial role in advancing our understanding of how everything fits together, no matter what the final outcome of the experiments will be. This is especially true if nature chooses not to abide by current theoretical preferences and expectations.
Over the past decades, we have learnt that the SM is a most economical and tightly knit structure, and there is now mounting evidence that minor modifications may suffice for it to survive to the highest energies. To look for such subtle deviations will therefore be a main task for the LHC in the years ahead. If our view of the Planck scale remains unobstructed by intermediate scales, the popular model-builders’ strategy of adding ever more unseen particles and couplings may come to an end. In that case, the challenge of explaining the structure of the low-energy world from a Planck-scale theory of quantum gravity looms larger than ever.
Einstein on unification
It is well known that Albert Einstein spent much of the latter part of his life vainly searching for unification, although disregarding the nuclear forces and certainly with no intention of reconciling quantum mechanics and GR. Already in 1929, he published a paper on the unified theory. In this paper, he states with wonderful and characteristic lucidity what the criteria should be of a “good” unified theory: to describe as far as possible all phenomena and their inherent links, and to do so on the basis of a minimal number of assumptions and logically independent basic concepts. The second of these goals (also known as the principle of Occam’s razor) refers to “logical unity”, and goes on to say: “Roughly but truthfully, one might say: we not only want to understand how nature works, but we are also after the perhaps utopian and presumptuous goal of understanding why nature is the way it is and not otherwise.”
Geometrical optics
Geometrical optics, or ray optics, is a model of optics that describes light propagation in terms of rays. The ray in geometric optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances.
The simplifying assumptions of geometrical optics include that light rays:
- propagate in straight-line paths as they travel in a homogeneous medium
- bend, and in particular circumstances may split in two, at the interface between two dissimilar media
- follow curved paths in a medium in which the refractive index changes
- may be absorbed or reflected
As light travels through space, it oscillates in amplitude. In this image, each maximum amplitude crest is marked with a plane to illustrate the wavefront. The ray is the arrow perpendicular to these parallel surfaces.
A light ray is a line or curve that is perpendicular to the light's wavefronts (and is therefore collinear with the wave vector). A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. [1]
Diagram of specular reflection
Illustration of Snell's Law
Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction $n_1$ and another medium with index of refraction $n_2$. In such situations, Snell's Law describes the resulting deflection of the light ray:
$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$
where $\theta_1$ and $\theta_2$ are the angles between the normal (to the interface) and the incident and refracted rays, respectively.
This phenomenon is also associated with a changing speed of light, since $n = c/v$, which gives the equivalent form
$$v_1 \sin\theta_2 = v_2 \sin\theta_1$$
where $v_1$ and $v_2$ are the wave velocities through the respective media. [3]
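A small numerical sketch of Snell's law as stated above (not from the original article); the function name and the index values, chosen for an air-glass interface, are illustrative.

```python
import math

def refract_angle(theta1_deg, n1=1.0, n2=1.5):
    """Angle of refraction from n1*sin(theta1) = n2*sin(theta2),
    or None when total internal reflection occurs (possible only if n1 > n2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

print(refract_angle(30.0))             # air -> glass: ~19.47 degrees
print(refract_angle(45.0, 1.5, 1.0))   # glass -> air past the critical angle: None
```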
A ray tracing diagram for a simple converging lens.
Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens.
With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on.
Likewise, the magnification of a lens is given by
$$M = -\frac{S_2}{S_1} = \frac{f}{f - S_1}$$
where $S_1$ is the distance from the object to the lens and $S_2$ the distance from the lens to the image.
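The ray diagrams above can be summarised numerically with the Gaussian lens equation 1/S1 + 1/S2 = 1/f together with the magnification formula; this is a minimal sketch with illustrative distances, not code from the original article.

```python
def thin_lens(S1, f):
    """Image distance S2 and magnification M for a thin lens:
    1/S1 + 1/S2 = 1/f and M = -S2/S1 (M < 0: real, inverted image).
    Assumes S1 != f (rays from the focal plane emerge parallel)."""
    S2 = 1.0 / (1.0 / f - 1.0 / S1)
    return S2, -S2 / S1

S2, M = thin_lens(S1=30.0, f=10.0)   # object 30 cm in front of a 10 cm lens
print(S2, M)                         # 15.0 cm beyond the lens, M = -0.5
```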
Underlying mathematics
As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations (Sommerfeld–Runge method) or as a property of propagation of field discontinuities according to Maxwell's equations (Luneburg method). In this short-wavelength limit, it is possible to approximate the solution locally by
$$u(t,x) \approx a(t,x)\, e^{i(k \cdot x - \omega t)}$$
where $k$, $\omega$ satisfy a dispersion relation, and the amplitude $a(t,x)$ varies slowly. More precisely, the leading order solution takes the form
$$a_0(t,x)\, e^{i\varphi(t,x)/\varepsilon}.$$
The phase $\varphi(t,x)/\varepsilon$ can be linearized to recover large wavenumber $k := \nabla_x \varphi$ and frequency $\omega := -\partial_t \varphi$. The amplitude $a_0$ satisfies a transport equation. The small parameter $\varepsilon$ enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis.
Sommerfeld–Runge method
The method of obtaining equations of geometrical optics by taking the limit of zero wavelength was first described by Arnold Sommerfeld and J. Runge in 1911. [6] Their derivation was based on an oral remark by Peter Debye. [7] [8] Consider a monochromatic scalar field $\psi(\mathbf{r},t) = \phi(\mathbf{r})\, e^{i\omega t}$, where $\psi$ could be any of the components of the electric or magnetic field and hence the function $\phi$ satisfies the wave equation
$$\nabla^2 \phi + k_0^2\, n^2\, \phi = 0$$
where $k_0 = \omega/c = 2\pi/\lambda_0$ with $c$ being the speed of light in vacuum. Here, $n(\mathbf{r})$ is the refractive index of the medium. Without loss of generality, let us introduce $\phi = A(k_0,\mathbf{r})\, e^{i k_0 S(\mathbf{r})}$ to convert the equation to
$$k_0^2 A \left[ n^2 - (\nabla S)^2 \right] + i k_0 \left( 2\,\nabla S \cdot \nabla A + A\, \nabla^2 S \right) + \nabla^2 A = 0.$$
Since the underlying principle of geometrical optics lies in the limit $\lambda_0 \to 0$ (i.e. $k_0 \to \infty$), the following asymptotic series is assumed,
$$A(k_0,\mathbf{r}) = \sum_{m=0}^{\infty} \frac{A_m(\mathbf{r})}{(i k_0)^m}.$$
For large but finite value of $k_0$, the series diverges, and one has to be careful in keeping only appropriate first few terms. For each value of $k_0$, one can find an optimum number of terms to be kept, and adding more terms than the optimum number might result in a poorer approximation. [9] Substituting the series into the equation and collecting terms of different orders, one finds at leading order the eikonal equation
$$(\nabla S)^2 = n^2,$$
and, at the subsequent orders, transport equations for the amplitudes $A_m$. The eikonal equation, which determines the eikonal $S(\mathbf{r})$, is a Hamilton–Jacobi equation which, written for example in Cartesian coordinates, becomes
$$\left(\frac{\partial S}{\partial x}\right)^2 + \left(\frac{\partial S}{\partial y}\right)^2 + \left(\frac{\partial S}{\partial z}\right)^2 = n^2.$$
The remaining equations determine the functions $A_m(\mathbf{r})$.
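As a quick consistency check (my addition, not in the original article), in a homogeneous medium the eikonal of a plane wave solves the equation above:

```latex
% For constant $n$, take $S(\mathbf{r}) = n\,\hat{\mathbf{k}}\cdot\mathbf{r}$
% with $|\hat{\mathbf{k}}| = 1$. Then $\nabla S = n\,\hat{\mathbf{k}}$, so
\left(\frac{\partial S}{\partial x}\right)^2
+ \left(\frac{\partial S}{\partial y}\right)^2
+ \left(\frac{\partial S}{\partial z}\right)^2
= n^2\,|\hat{\mathbf{k}}|^2 = n^2 ,
% and the rays, orthogonal to the level surfaces of $S$, are straight lines
% along $\hat{\mathbf{k}}$, as expected in a homogeneous medium.
```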
Luneburg method
The method of obtaining equations of geometrical optics by analysing surfaces of discontinuities of solutions to Maxwell's equations was first described by Rudolf Karl Luneburg in 1944. [10] It does not restrict the electromagnetic field to have a special form (in the Sommerfeld-Runge method it is not clear that a field whose amplitude is made to depend on $k_0$ would still yield the eikonal equation, i.e., a geometrical optics wave front). The main conclusion of this approach is the following:
Theorem. Suppose the fields $\mathbf{E}(x,y,z,t)$ and $\mathbf{H}(x,y,z,t)$ (in a linear isotropic medium described by dielectric constants $\varepsilon(x,y,z)$ and $\mu(x,y,z)$) have finite discontinuities along a (moving) surface in $\mathbf{R}^4$ described by the equation $\psi(x,y,z,t) = 0$. Then Maxwell's equations in the integral form imply that $\psi$ satisfies the eikonal equation:
$$\psi_x^2 + \psi_y^2 + \psi_z^2 = \frac{n^2}{c^2}\,\psi_t^2,$$
where $n = \sqrt{\varepsilon\mu}$ is the index of refraction of the medium (Gaussian units).
An example of such surface of discontinuity is the initial wave front emanating from a source that starts radiating at a certain instant of time.
The surfaces of field discontinuity thus become geometrical optics wave fronts, with the corresponding geometrical optics fields defined as the jumps of $\mathbf{E}$ and $\mathbf{H}$ across the moving wave front.
Those fields obey transport equations consistent with the transport equations of the Sommerfeld-Runge approach. Light rays in Luneburg's theory are defined as trajectories orthogonal to the discontinuity surfaces and with the right parametrisation they can be shown to obey Fermat's principle of least time thus establishing the identity of those rays with light rays of standard optics.
The above developments can be generalised to anisotropic media. [11]
The proof of Luneburg's theorem is based on investigating how Maxwell's equations govern the propagation of discontinuities of solutions. The basic technical lemma is as follows:
A technical lemma. Let $\psi(x,y,z,t) = 0$ be a hypersurface (a 3-dimensional manifold) in spacetime on which one or more of $\mathbf{E}$, $\mathbf{H}$, $\varepsilon\mathbf{E}$, $\mu\mathbf{H}$ have a finite discontinuity. Then at each point of the hypersurface the following formulas hold:
$$\nabla\psi \times [\mathbf{E}] + \frac{\psi_t}{c}\,[\mu\mathbf{H}] = 0, \qquad \nabla\psi \times [\mathbf{H}] - \frac{\psi_t}{c}\,[\varepsilon\mathbf{E}] = 0, \qquad \nabla\psi \cdot [\varepsilon\mathbf{E}] = 0, \qquad \nabla\psi \cdot [\mu\mathbf{H}] = 0,$$
where the operator $\nabla$ acts in the $(x,y,z)$-space (for every fixed $t$) and the square brackets denote the difference in values on both sides of the discontinuity surface (set up according to an arbitrary but fixed convention, e.g. the gradient $\nabla\psi$ pointing in the direction of the quantities being subtracted from).
Sketch of proof. Start with Maxwell's equations away from the sources (Gaussian units):
$$\nabla \times \mathbf{H} - \frac{1}{c}\frac{\partial(\varepsilon\mathbf{E})}{\partial t} = 0, \qquad \nabla \times \mathbf{E} + \frac{1}{c}\frac{\partial(\mu\mathbf{H})}{\partial t} = 0, \qquad \nabla \cdot (\varepsilon\mathbf{E}) = 0, \qquad \nabla \cdot (\mu\mathbf{H}) = 0.$$
Using Stokes' theorem in $\mathbf{R}^4$, one can conclude from the first of the above equations that for any domain $D$ in $\mathbf{R}^4$ with a piecewise smooth boundary $\Gamma$ the corresponding boundary integral vanishes; the integrand involves the projection of the outward unit normal of $\Gamma$ onto the 3D slice $t = \mathrm{const}$, and the volume 3-form on $\Gamma$. Similarly, one establishes analogous vanishing boundary integrals from the remaining Maxwell's equations. Now by considering arbitrary small sub-surfaces $\Gamma_0$ of $\Gamma$ and setting up small neighbourhoods surrounding $\Gamma_0$ in $\mathbf{R}^4$, and subtracting the above integrals accordingly, one obtains jump relations involving the gradient in the 4D $(x,y,z,t)$-space. And since $\Gamma_0$ is arbitrary, the integrands must be equal to 0, which proves the lemma.
It's now easy to show that, as they propagate through a continuous medium, the discontinuity surfaces obey the eikonal equation. Specifically, if $\varepsilon$ and $\mu$ are continuous, then the discontinuities satisfy $[\varepsilon\mathbf{E}] = \varepsilon[\mathbf{E}]$ and $[\mu\mathbf{H}] = \mu[\mathbf{H}]$. In this case the first two equations of the lemma can be written as:
$$\nabla\psi \times [\mathbf{E}] + \frac{\psi_t}{c}\,\mu[\mathbf{H}] = 0, \qquad \nabla\psi \times [\mathbf{H}] - \frac{\psi_t}{c}\,\varepsilon[\mathbf{E}] = 0.$$
Taking the cross product of the first equation with $\nabla\psi$ and substituting the second yields:
$$\nabla\psi \times (\nabla\psi \times [\mathbf{E}]) = (\nabla\psi \cdot [\mathbf{E}])\,\nabla\psi - |\nabla\psi|^2\,[\mathbf{E}] = -\frac{\psi_t}{c}\,\mu\,\nabla\psi \times [\mathbf{H}] = -\frac{\varepsilon\mu\,\psi_t^2}{c^2}\,[\mathbf{E}].$$
By the second of Maxwell's equations, $\nabla\psi \cdot [\mathbf{E}] = 0$, hence, for points lying on the surface $\psi = 0$ only:
$$|\nabla\psi|^2 = \frac{\varepsilon\mu\,\psi_t^2}{c^2}.$$
(Notice the presence of the discontinuity is essential in this step, as we'd be dividing by zero otherwise.)
Because of the physical considerations one can assume without loss of generality that $\psi$ is of the following form: $\psi(x,y,z,t) = \varphi(x,y,z) - ct$, i.e. a 2D surface moving through space, modelled as level surfaces of $\varphi$. (Mathematically $\varphi$ exists by the implicit function theorem if $\psi_t \neq 0$.) The above equation written in terms of $\varphi$ becomes:
$$|\nabla\varphi|^2 = \varepsilon\mu = n^2,$$
which is the eikonal equation, and it holds for all $x$, $y$, $z$, since the variable $t$ is absent. Other laws of optics like Snell's law and the Fresnel formulae can be similarly obtained by considering discontinuities in $\varepsilon$ and $\mu$.
General equation using four-vector notation
In the four-vector notation used in special relativity, the wave equation can be written as
$$\frac{\partial^2 u}{\partial x_i\,\partial x^i} = 0,$$
and the substitution $u = a\, e^{i\psi}$, with a rapidly varying phase $\psi$, leads at leading order to the eikonal equation [12]
$$\frac{\partial \psi}{\partial x_i}\,\frac{\partial \psi}{\partial x^i} = 0.$$
Once the eikonal is found by solving the above equation, the wave four-vector can be found from
$$k_i = -\frac{\partial \psi}{\partial x^i}.$$
References
1. Arthur Schuster, An Introduction to the Theory of Optics, London: Edward Arnold, 1904.
2. Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides. Vol. 1. SPIE. pp. 19–20. ISBN 0-8194-5294-7.
3. Hugh D. Young (1992). University Physics, 8th ed. Addison-Wesley. ISBN 0-201-52981-5. Chapter 35.
4. E. W. Marchand, Gradient Index Optics, New York, NY, Academic Press, 1978.
6. Sommerfeld, A., & Runge, J. (1911). Anwendung der Vektorrechnung auf die Grundlagen der geometrischen Optik. Annalen der Physik, 340(7), 277-298.
7. Born, M., & Wolf, E. (2013). Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Elsevier.
9. Borowitz, S. (1967). Fundamentals of quantum mechanics, particles, waves, and wave mechanics.
10. Luneburg, R. K., Mathematical Theory of Optics, Brown University Press 1944 [mimeographed notes], University of California Press 1964.
11. Kline, M., Kay, I. W., Electromagnetic Theory and Geometrical Optics, Interscience Publishers 1965
12. Landau, L. D., & Lifshitz, E. M. (1975). The classical theory of fields.
Further reading
English translations of some early books and papers
The Simple Harmonic Oscillator
Sunday, February 27, 2022
What is an oscillator?
A classical oscillator is an object of mass $m$ attached to a spring of force constant $k$. The spring exerts a restoring force $F = -kx$ on the object, where $x$ is the displacement from the equilibrium position.
Harmonic oscillators have an angular frequency $\omega_0 = \sqrt{k/m}$ and period $T = 2\pi\sqrt{m/k}$. Their amplitude is the maximum displacement $x_0$, and their kinetic energy vanishes at the turning points $x = \pm x_0$. Therefore, the motion is confined to $-x_0 \lt x \lt +x_0$.
The Quantum Mechanical Version of a Harmonic Oscillator
Although no natural example of a one-dimensional quantum oscillator exists, there are plenty of systems that behave approximately like oscillators, for instance a vibrating diatomic molecule.
A force $F = -kx$ has potential energy $U = \frac{1}{2}kx^2$, meaning the time-independent Schrödinger equation reads
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}kx^2\,\psi = E\,\psi$$
There are no boundaries in this situation, but we know that as $x \rightarrow +\infty$ and $x \rightarrow -\infty$, $\psi \rightarrow 0$. The simplest function that satisfies this condition is $\psi\left(x\right) = Ae^{-ax^2}$, where the constant $a$ and energy $E$ can be found by taking the first and second derivatives of the wave function:
$$\frac{d\psi}{dx} = -2ax\left(Ae^{-ax^2}\right)$$
$$\frac{d^2\psi}{dx^2} = -2a\left(Ae^{-ax^2}\right) - 2ax\left(-2ax\right)Ae^{-ax^2} = \left(-2a + 4a^2x^2\right)Ae^{-ax^2}$$
Plugging this into the Schrödinger equation from above,
Instead of solving for $x$, we are trying to make the equation hold for all values of $x$ by a suitable choice of the constants. For this to be true,
$$-\frac{2a^2\hbar^2}{m} + \frac{1}{2}k = 0, \qquad \frac{\hbar^2 a}{m} = E$$
Which results in
$$a = \frac{\sqrt{km}}{2\hbar}, \qquad E = \frac{1}{2}\hbar\sqrt{k/m}$$
Using the equation for the angular frequency of a classical harmonic oscillator, $E = \frac{1}{2}\hbar\omega_0$.
The coefficient $A$ can be found using the normalization condition, with the result $A = \left(m\omega_0/\hbar\pi\right)^{1/4}$ for the ground state. Therefore, the complete ground-state wave function is
$$\psi_0\left(x\right) = \left(\frac{m\omega_0}{\hbar\pi}\right)^{1/4} e^{-m\omega_0 x^2/2\hbar}$$
Note that the wave function penetrates into the forbidden region (beyond $x = \pm x_0$), whereas the classical oscillator does not.
The solution above only works for the ground state. The more general solution is $\psi_n\left(x\right) = Af_n\left(x\right)e^{-ax^2}$, where $f_n\left(x\right)$ is a polynomial in which the highest power of $x$ is $x^n$. The energies are
$$E_n = \left(n + \frac{1}{2}\right)\hbar\omega_0, \qquad n = 0, 1, 2, \dots$$
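As a sanity check (my addition, not part of the original post), a minimal finite-difference diagonalisation reproduces these levels; units are chosen with hbar = m = k = 1, so omega0 = 1 and the exact energies are n + 1/2.

```python
import numpy as np

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + (1/2) x^2 on a grid.
x = np.linspace(-10.0, 10.0, 2000)
h = x[1] - x[0]
main = 1.0 / h**2 + 0.5 * x**2               # diagonal entries
off = -0.5 / h**2 * np.ones(x.size - 1)      # nearest-neighbour coupling
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:4])   # ~[0.5, 1.5, 2.5, 3.5] = (n + 1/2) hbar omega0
```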
Probability distributions
Below are a few examples of what probability densities look like for harmonic oscillators:
The resulting uncertainties for this situation are as follows:
$$\Delta x = \sqrt{\hbar/2m\omega_0}, \qquad \Delta p = \sqrt{\hbar\omega_0 m/2}$$
And the product of the uncertainties is $\Delta x\,\Delta p = \hbar/2$, meaning the uncertainty is at its minimum (called "compact").
Aharonov–Bohm effect - electric effect
The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle is affected by an electromagnetic potential (V, A), despite being confined to a region in which both the magnetic field B and electric field E are zero. The underlying mechanism is the coupling of the electromagnetic potential with the complex phase of a charged particle’s wave function, and the Aharonov–Bohm effect is accordingly illustrated by interference experiments.
The magnetic Aharonov–Bohm effect can be seen as a result of the requirement that quantum physics be invariant with respect to the gauge choice for the electromagnetic potential, of which the magnetic vector potential A forms part.
From the Schrödinger equation, the phase of an eigenfunction with energy E evolves as exp(-iEt/ℏ). The energy, however, will depend upon the electrostatic potential V for a particle with charge q. In particular, for a region with constant potential V (zero field), the electric potential energy qV is simply added to E, resulting in a phase shift Δφ = qVt/ℏ.
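As a rough numerical illustration (the charge, potential, and dwell time below are made-up example values):

```python
from scipy.constants import e, hbar

# Hypothetical numbers for illustration: an electron (charge e) spends
# t = 1 ns in a field-free region held at V = 1 microvolt.
q, V, t = e, 1e-6, 1e-9

delta_phi = q * V * t / hbar   # electric Aharonov-Bohm phase shift (radians)
print(f"phase shift = {delta_phi:.3f} rad")
```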
Related formulas
Δφ: phase shift (dimensionless)
q: particle charge (C)
V: electrostatic potential (V)
t: time spent in the potential (s)
ℏ: reduced Planck constant (J s) |
d0b02cf6519a36d8 | As a boy, I was a rock hound, and I learned how to identify minerals with the Mohs hardness test, named after the mineralogist who invented it. You take a known specimen, like quartz, and scratch an unknown specimen with it. If the quartz scratches the mystery specimen, you know it’s softer than quartz. It could be calcite or pyrite. If the quartz can’t scratch the specimen, it might be beryl or corundum, which are harder than quartz. Along with factors like color and crystalline structure, the hardness test can help you specify your specimen.
I loved the straightforward objectivity of the Mohs test. Recently, I’ve been brooding over a hardness—call it cognitive hardness—that is much harder to evaluate. Over the course of our lives, we face an enormous variety of cognitively hard tasks. For the past year, for example, I’ve been studying quantum mechanics, which is notoriously difficult to grasp. But is learning quantum mechanics harder, objectively, than chatting to my girlfriend about #MeToo without irritating her? Or talking to my daughter about climate change without depressing her?
Subjective assessments of cognitive hardness aren’t much help, because they vary with each person’s experience and aptitude. You’re a whiz with differential equations, I’m better at riffing on Emily Dickinson’s poems. Is there a method, analogous to the Mohs test, for quantifying and hence ranking the cognitive hardness of various tasks? Such a method, perhaps, could yield insights that help us solve hard problems, or, conversely, accept their insolubility. At any rate, here are a few thoughts on cognitive hardness.
Mathematicians and computer scientists rank problems by how long it would take a computer to find a solution. Problems are defined as NP-hard if there is no algorithmic shortcut to the best solution; you must laboriously check every possible solution to find the best one. (NP stands for “nondeterministic polynomial time,” which I’ve always thought of as meaning “really, really.”)
[Post-publication note: Computer scientist Scott Aaronson objects to my description of NP-Hardness and defines it as follows: “NP is the class of problems for which ‘yes’ answers can be efficiently verified given a proof or witness. NP-hard is the class of problems where, if you had a magic box for solving them, it would let you solve every NP problem in polynomial time. Many problems are exponentially hard without being NP-hard. And if P=NP, there would be NP-hard problems (including traveling salesman) that weren’t ‘hard’ in the plain English sense. Other problems, like perfectly playing a generalization of chess to N*N boards, are known to be exponentially hard unconditionally.”]
One famous NP-hard problem involves a traveling salesman seeking the shortest route between many cities. The hardness of the problem balloons dramatically with the number of cities. If the salesman has to visit 15 cities, he has 87 billion possible routes to consider. Mathematicians have devised tricks for finding pretty short routes—if not the shortest—between many cities. But when the number of cities rises into the thousands, the world’s fastest computer would take virtually forever to find the shortest route.
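For the numerically inclined, a few lines of Python make the blow-up concrete; the counting convention here (fixing the start city and counting each direction separately) is one common choice:

```python
from math import factorial

# Round-trip routes through n cities with a fixed starting city: (n - 1)!
# (Halve these counts if a route and its reverse are considered the same.)
for n in (5, 10, 15, 20):
    print(f"{n} cities: {factorial(n - 1):,} routes")
```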
Ironically, coming up with a time-saving itinerary is easy compared to other problems that the traveling salesman might face. For example: How long can he be on the road without endangering his marriage? If he is lonely, should he approach a woman in the hotel bar? If he feels bad about cheating on his wife, what should he tell himself to relieve his guilt?
What makes these problems especially hard is their moral dimension. Like most of us, the traveling salesman wants to believe he is a good person, but what does that even mean? In 2016 I attended a conference that explored whether artificial intelligence can solve ethical dilemmas. There were lots of droll variations on the trolley problem. For example, would you destroy a living thing, like a sparrow, to save a nonliving thing, like the Grand Canyon?
But philosophers have been arguing about morality for millennia without agreeing on what our moral rules should be. The famous play Death of a Salesman explores the moral dilemmas of a traveling salesman. Like most works of literature, Death of a Salesman does not solve moral problems; it rubs our faces in them.
So-called complexity researchers equate hardness with complexity. Let’s say you are a scientist trying to model and hence explain some complicated phenomenon, like the propagation of gravitational waves from colliding black holes, or the spread of disinformation on social media. The hardness of your scientific problem, researchers suggest, is proportional to the complexity of the phenomenon you want to understand.
Moreover, dissimilar things might be complex, and hence hard to explain, for similar reasons. Ideally, modeling one hard phenomenon will yield insights that apply to very different ones. A better model of black holes might lead to a better model and deeper understanding of QAnon. Or so researchers hope.
Unfortunately, researchers cannot agree on a definition of complexity, which is crucial to their enterprise. Physicist Seth Lloyd has listed dozens of proposed definitions of complexity, based on information theory, thermodynamics, fractals and other measures. There are many definitions because none really suffices. I suspect that, just as unhappy families are unhappy in their own ways, different hard problems are hard for different reasons.
Some physicists insist that everything, including humanity, is ultimately explicable in terms of particles pushed and pulled by gravity, electromagnetism and other forces. Sabine Hossenfelder takes this position in a recent conversation with me. But physics has nothing to say about morality, meaning, emotions, choices and other significant features of human existence.
When I am struggling to understand the mathematical rules underpinning quantum mechanics, they often seem irritatingly arcane and arbitrary. Actually, the rules of calculus and linear algebra are quite sensible compared to the “rules” of ordinary language. To master English, you must first learn the letters of the alphabet. Letters only acquire meaning when combined into words, and there are thousands of words, many of which have multiple meanings. Consider all the meanings of “hard.”
Then you have all the rules for combining words into sentences, rules that are routinely bent and broken. The meaning of a sentence depends on the context in which it is uttered and heard, again according to rules that are hard to spell out precisely. Linguist Noam Chomsky has convinced most scientists that we have an innate talent for language, inherited from our ancestors; that’s why we learn language so quickly.
Sometimes, when I am engaged in conversation, my language instinct kicks in, and I chat with relative ease. I am displaying what philosopher Daniel Dennett calls “competence without comprehension.” Other times, I struggle to decipher the words of the person speaking to me, and I am overwhelmed by all the possible ways in which I can respond. This often happens when I am speaking to my daughter or son.
Becoming a father is, in the strict, biological sense, easy. Almost any idiot can do it. But what does it mean to be a good father? The answer varies across eras and cultures. My son and daughter are 28 and 26, and I’m still baffled by fatherhood. Almost every time I see my kids or talk to them over the phone, I second-guess myself afterward. Did I share too much? Not enough?
You can assess parents by looking at how their kids have done. But I know good parents (caring, well-intentioned) whose kids have died of drug overdoses, and I know bad parents (self-absorbed to the point of negligence) whose kids have thrived. These brutal facts have an upside: If your kids don’t turn out well, you can always blame bad luck. My larger point: Unlike the Schrödinger equation, the puzzles of parenting—and of all human relations—have no clear-cut solutions.
Cognitive scientists propose that we have an innate ability to intuit what others are thinking and feeling. This talent is called, confusingly, “theory of mind.” It is crucial for social success, that is, for getting what we want from others. It is also crucial for morality. We are more likely to feel compassion for others, and to treat them well, if we can empathize with them. But our theory-of-mind program can only take us so far.
Last year, I joined a Black Lives Matter march that passed through my hometown. Some white protestors carried signs that said, “I understand that I will never understand. But I stand.” The sign implies that white people like me cannot understand what it is like being Black in America. That task is too hard. If we say we understand, that means we don’t; we’re revealing our ignorance and arrogance. But we can still express support for Black Americans.
This situation applies to gender, too. I recently got into an argument with my girlfriend about one of the most famous passages in literature, Molly Bloom’s soliloquy, which concludes James Joyce’s novel Ulysses. I love this profane, sexy, poetic masterpiece within a masterpiece, in which Joyce imagines what it feels like to be a married woman and mother living in early 20th-century Dublin. My girlfriend hates the soliloquy, which she says is a male fantasy about what women think. Instead of arguing with my girlfriend, I should have just said, “I understand that I will never understand. But I stand.”
Visionaries like Elon Musk hope that someday computer chips implanted in our brains will help us solve hard problems. That is why Musk founded Neuralink, which is building “high-bandwidth brain-machine interfaces.” The chips will link our brains to the internet and to powerful problem-solving programs, like Wolfram Alpha but much better.
I doubt that brain chips will help us with the problems that matter most. A brain chip might help the traveling salesman plan his itinerary, but it won’t tell him how to be a good husband and father, or how to avoid acting like a sexist or racist. It won’t tell him how to grab a little happiness without being a jerk. These problems are much harder than the hardest NP-hard problem.
While writing this column, I began to remember why, as a boy, I fantasized about becoming a mineralogist. I was already getting intimations of adulthood, and it didn’t appeal to me. Many adults seemed sad, or mean, or both. When I grow up, I thought, I will spend my days in a laboratory, alone, performing the Mohs test on crystalline specimens, testing their chemical reactivity, examining them through a microscope, admiring their perfect, symmetrical, inhuman beauty.
Further Reading:
I talk about a variety of hard problems in my two most recent books, Mind-Body Problems, available for free online, and Pay Attention: Sex, Death, and Science.
See also my podcast “Mind-Body Problems,” where I talk to experts about hard problems.
|
476cabd3e3d7f109 | Low-field electron mobility evaluation in silicon nanowire transistors using an extended hydrodynamic model
• Orazio Muscato
• Tina Castiglione
• Vincenza Di Stefano
• Armando Coco
Open Access
Part of the following topical collections:
1. Progress in Industrial Mathematics at ECMI 2016
Silicon nanowires (SiNWs) are quasi-one-dimensional structures in which electrons are spatially confined in two directions and they are free to move in the orthogonal direction. The subband decomposition and the electrostatic force field are obtained by solving the Schrödinger–Poisson coupled system. The electron transport along the free direction can be tackled using a hydrodynamic model, formulated by taking the moments of the multisubband Boltzmann equation. We shall introduce an extended hydrodynamic model where closure relations for the fluxes and production terms have been obtained by means of the Maximum Entropy Principle of Extended Thermodynamics, and in which the main scattering mechanisms such as those with phonons and surface roughness have been considered. By using this model, the low-field mobility of a Gate-All-Around SiNW transistor has been evaluated.
Keywords: Nanowires; Semiconductors; Boltzmann equation; Hydrodynamics
List of abbreviations
EMA: Effective Mass Approximation
MEP: Maximum Entropy Principle
MC: Monte Carlo
MBTE: Multiband Boltzmann Transport Equation
SP: Schrödinger–Poisson system
SiNW: Silicon nanowire
SR: Surface roughness scattering
Mathematics Subject Classification: 82D80; 82D37; 35Q20; 75A15
1 Introduction
In recent decades, nanotechnology has made possible the production of innovative devices promising high-density integration and an exponential increase in the complexity of electronic systems. Nanostructures and nanotechnologies are achieving important breakthroughs in single-molecule sensing and manipulation, with fundamental applications. In particular, among these nanostructures, silicon nanowires (SiNW) are widely investigated because of the central role of silicon (Si) in the semiconductor industry. Such devices can be used as transistors [1, 2], logic devices [3], and thermoelectric coolers [4, 5], but also in other application fields such as biological and nanomechanical sensors [6, 7]. When the physical size of the system becomes smaller, quantum effects on electronic properties become important, and a description via quantum mechanics is then required. These quantum effects arise in systems which confine electrons to regions comparable to their de Broglie wavelength.
In a nanowire (NW) the electronic states become subject to quantization in the two-dimensional transversal section, and the transport is due to the one-dimensional electron gas in the longitudinal dimension.
2 Methods
Charge transport in SiNWs, under reasonable hypotheses on the device’s dimensions, can be tackled using the 1-D Multiband Boltzmann Transport Equation (MBTE) coupled self-consistently with the 3-D Poisson and 2-D Schrödinger equations, in order to obtain the self-consistent potential, subband energies, and wave functions. However, solving the MBTE numerically is not an easy task, because it forms an integro-differential system in two dimensions of phase space and one in time, with a complicated collisional operator. An alternative is to take the moments of the MBTE to obtain hydrodynamic-like models, where the resulting system of balance equations can be closed by resorting to the Maximum Entropy Principle (MEP).
In the following we shall focus primarily on the mathematical method itself, with less emphasis on the physical model, because some simplifications will be made which could lead to questionable results.
3 Transport physics in SiNWs
In SiNWs the band structure is altered with respect to the bulk silicon, depending on the cross-section wire dimension, the atomic configuration, and the crystal orientation. Atomistic simulations are able to capture the nanowire band structure, including information about band coupling and mass variations as functions of quantization [8, 9, 10, 11, 12, 13]. In this paper we shall limit ourselves to the results obtained via the empirical Tight-Binding (TB) model [9].
For a rectangular SiNW with longitudinal direction along the [100] crystal orientation, confined in the plane \((y,z)\), the 1-D Brillouin zone is half as long as the bulk Si Brillouin zone along the Δ line (i.e. \(\pi /a _{0}\)). In SiNW the six equivalent Δ conduction valleys of the bulk Si are split into two groups because of the quantum confinement. The subbands related to the four unprimed valleys \(\Delta_{4}\) ([0 ± 10] and [00 ± 1], orthogonal to the wire axis) are projected into a unique valley at the Γ point of the one-dimensional Brillouin zone. The subbands related to the primed valleys \(\Delta_{2}\) ([±100], along the wire axis) are found at higher energies and exhibit a minimum located at \(k _{x}=\pm 0.37\pi /a _{0}\). The SiNW band gap, as well as the energy splitting between the \(\Delta_{2}\) and \(\Delta_{4}\) valleys, increases with decreasing diameter of the nanowire. Moreover, the subband isotropy breaks down at energies of the order of 150 meV above the (bulk) conduction-band minimum. From the energy dispersion relation \(E(k)\) obtained from the TB, one can evaluate the effective mass \(m ^{*}\) in the parabolic spherical band approximation. In this paper we shall consider the parameters obtained in [9] (see Table 1), which are valid for diameters greater than 3 nm. These values will certainly be affected by non-parabolic corrections.
Table 1. Silicon nanowire constants
electron rest mass \(m_e\): 9.1095 × 10−28 g
effective mass, \(A=\Delta_4\) valley [9]: 0.27 \(m_{e}\)
effective mass, \(B=\Delta_2\) valley [9]: 0.94 \(m_{e}\)
lattice temperature: 300 K
mass density: 2.33 g/cm3
average sound speed: 9 × 105 cm/s
acoustic-phonon deformation potential: 9 eV
intra-valley deformation potential, g-scattering [27]: 1.1 × 109 eV/cm
intra-valley phonon energy \(\hbar\omega_{o}\) [27]: 63.3 meV
number of equivalent valleys, g-scattering [27]: …
inter-valley deformation potential, f-scattering [27]: 2 × 108 eV/cm
inter-valley phonon energy \(\hbar\omega_{iv}\) [27]: 47.48 meV
number of equivalent valleys, f-scattering [27]: …
\(A=\Delta_4\) valley energy minimum [9]: …
\(B=\Delta_2\) valley energy minimum [9]: 117 meV
rms height \(\Delta_{\mathrm{sr}}\) [27]: 0.3 nm
correlation length \(\lambda_{\mathrm{sr}}\) [27]: 1.5 nm
The main quantum transport phenomena in SiNWs at room temperature, such as source-to-drain tunneling and the conductance fluctuation induced by quantum interference, become significant only when the channel lengths are smaller than 10 nm [14]. For longer longitudinal lengths, which is the case we are going to simulate, semiclassical formulations based on the 1-D BTE can give reliable terminal characteristics when solved self-consistently with the Schrödinger–Poisson equations in the transversal direction.
In the following, we shall consider a SiNW having rectangular cross section (with dimensions \(L _{y}\), \(L _{z}\)) in which electrons are spatially confined in the yz plane by a SiO2 layer, which gives rise to a deep potential barrier of height \(U= 3.2\mbox{ eV}\), and are free to move in the orthogonal x direction, of length \(L _{x}\) (see Fig. 1). Hence, it is natural to assume the following ansatz for the electron wave function
$$\begin{aligned}& \phi(x, y, z)=\chi_{l}^{\mu}(y, z) \frac{e^{ik_{x}x}}{\sqrt{L_{x}}}, \end{aligned}$$
where μ is the valley index (one \(\Delta_{4}\) valley and two \(\Delta_{2}\) valleys), \(l= 1,\ldots,N _{\mathrm{sub}}\) the subband index, \(\chi ^{\mu } _{l}(y,z)\) is the subband wave function of the l-th subband and μ-th valley, and the term \(e ^{ik_{x}x}/\sqrt{L _{x}}\) describes an independent plane wave in the x-direction, with wave vector \(k _{x}\). The spatial confinement in the \((y,z)\) plane is governed by the Schrödinger–Poisson system (SP)
$$\begin{aligned}& \textstyle\begin{cases} H[V]\chi_{lx}^{\mu}[V]=\varepsilon_{lx}^{\mu}[V]\chi_{lx}^{\mu}[V], \\ H[V]=-\frac{\hbar^{2}}{2m_{\mu}^{*}} (\frac{\partial^{2}}{\partial y^{2}}+ \frac{\partial^{2}}{\partial z^{2}})+V_{\mathrm{tot}}(x, y, z), \\ \nabla\cdot[\epsilon_{0}\epsilon_{r}\nabla V(x,y,z)]=-e(N_{D}-N_{A}-n[V]), \\ n[V](x, y, z,t)=\sum_{\mu}\sum_{l}\rho_{l}^{\mu}(x,t) \vert \chi_{lx}^{\mu}[V](y, z, t) \vert ^{2}, \end{cases}\displaystyle \end{aligned}$$
where \(N _{D}\), \(N _{A}\) are the assigned doping profiles (due to donors and acceptors), and \(V_{ \mathrm{tot}}(x,y,z) = U(y,z) - eV (x,y,z)\) (in the Hartree approximation). The electron density \(n[V ]\) is given by (2)4, where \(\rho_{l}^{\mu}(x, t)\) is the linear density in the μ-valley and l-subband which must be evaluated by the transport model (hydrodynamic/kinetic) in the free movement direction. We emphasize that the use of the effective mass approximation (2)1 is probably valid for semiconductor nanowires down to 5 nm in diameter, below which atomistic electronic structure models need to be employed [15, 16]. The SP system forms a set of coupled nonlinear Partial Differential Equations, which are usually solved by an iteration between Poisson and Schrödinger equations. Since a simple iteration by itself does not converge, it is necessary to introduce an adaptive iteration scheme [17], where the Poisson equation has been solved by the finite-difference scheme proposed in [18], which can be used for every cross-section shape of the wire with complex geometries of the boundary/interface.
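As a rough illustration of such an iteration, here is a minimal one-dimensional Schrödinger–Poisson relaxation loop. This is a schematic sketch, not the adaptive scheme of [17] nor the ghost-point solver of [18]; the grid size, areal density, and mixing factor are illustrative assumptions:

```python
import numpy as np

hbar, me, e, eps0 = 1.054e-34, 9.109e-31, 1.602e-19, 8.854e-12
mstar, eps_r = 0.27 * me, 11.7      # Delta_4 effective mass, Si permittivity
L, N = 8e-9, 200                    # well width (m) and grid points (assumed)
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
Ns = 1e16                           # assumed areal electron density (m^-2)

kin = hbar**2 / (2.0 * mstar * dx**2)
V = np.zeros(N)                     # electrostatic potential (V)
for it in range(200):
    # Schrodinger step: finite-difference Hamiltonian with hard walls
    H = (np.diag(2.0 * kin - e * V)
         - kin * np.diag(np.ones(N - 1), 1)
         - kin * np.diag(np.ones(N - 1), -1))
    E, psi = np.linalg.eigh(H)
    chi = psi[:, 0] / np.sqrt(dx)   # normalized lowest-subband wave function
    n = Ns * chi**2                 # electron density (m^-3)

    # Poisson step: eps * V'' = e * n, with V = 0 at both walls
    lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dx**2
    rhs = e * n / (eps0 * eps_r)
    V_new = np.zeros(N)
    V_new[1:-1] = np.linalg.solve(lap[1:-1, 1:-1], rhs[1:-1])

    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = 0.5 * V + 0.5 * V_new       # under-relaxation to aid convergence

print(f"{it + 1} iterations, lowest subband at {E[0] / e * 1e3:.1f} meV")
```

The under-relaxation (mixing) step plays the role of the stabilization that, in the full 2-D problem, motivates the adaptive iteration cited in the text: a naive alternation between the two equations does not converge.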
[Figure 1: Cross sections of the Gate-All-Around SiNW transistor]
Transport in the free direction is described by the multisubband Boltzmann Transport Equation (MBTE) [19]
$$\begin{aligned}& \frac{\partial f_{l}^{\mu}}{\partial t}+v_{\mu}(k_{x}) \frac{\partial f_{l}^{\mu}}{\partial x} - \frac{1}{\hbar}\frac{\partial\varepsilon^{\mu}_{l}}{\partial x} \frac{\partial f_{l}^{\mu}}{\partial k_{x}}= \sum _{\eta, l'}\mathcal{C}_{\eta}\bigl[f_{l}^{\mu},f_{l'}^{\mu} \bigr] +\sum_{\eta}\sum_{\mu'\neq\mu} \sum_{l'}\mathcal{C}_{\eta}\bigl[f_{l}^{\mu}, f_{l'}^{\mu'}\bigr], \end{aligned}$$
where \(f_{l}^{\mu}= f_{l}^{\mu}(x,k_{x}, t)\) is the electron distribution function, and \(v _{\mu }=\hbar k _{x} /m ^{*} _{\mu }\) is the electron group velocity. The RHS of equation (3) is the collisional operator, which is split into two terms modeling respectively scattering within the same valley (i.e. intra-valley, with \(\mu =\mu'\)) and into different valleys (i.e. inter-valley, with \(\mu \neq \mu'\)). In the low density approximation (non-degenerate case), the collisional term for the η-th scattering rate is:
$$\begin{aligned}& \mathcal{C}_{\eta}\bigl[f_{l}^{\mu},f_{l'}^{\mu'} \bigr] \\& \quad =\frac{L_{x}}{2\pi} \int dk_{x}' \bigl\{ w_{\eta} \bigl(k_{x}', \mu', l', k_{x}, \mu, l\bigr) f_{l'}^{\mu'}\bigl(x, k_{x}', t\bigr) -w_{\eta}\bigl(k_{x}, \mu, l, k_{x}', \mu', l' \bigr)f_{l}^{\mu}(x, k_{x}, t)\bigr\} , \end{aligned}$$
where \(w_{\eta}(k_{x}', \mu', l', k_{x}, \mu, l)\) is the η-th scattering rate. Phonon scattering has been tackled following the bulk Si scattering selection rules [12], whose details are given in [20]. This is, however, a simplification, because major differences in the transport properties can appear when including confined phonons [21] and anisotropic deformation potentials [22].
Finally, another key scattering mechanism in SiNW is Surface Roughness (SR) scattering. This is due to the random fluctuations of the boundaries that nominally form the confining potential in such a low-dimensional system. It depends on quantum confinement and also, very strongly, on the charge density. SR scattering can in principle be intra-valley or inter-valley. However, its dependence on the transferred crystal momentum usually renders inter-valley processes weaker. This scattering mechanism can be treated at very different levels of approximation, from fully atomistic models to semi-phenomenological models (see [23, 24]). In the following we shall use a very simple model introduced in [21], where image-charge effects (due to the mismatch of the dielectric constants between Si and SiO2) and the exchange-correlation energy (due to the electron–electron interaction) are neglected, which is reasonable for silicon thickness greater than 8 nm [25]. Moreover, corner effects [26] have been neglected. In this case the SR scattering rate along the y-direction is
$$\begin{aligned}& w_{\mathrm{sr}}\bigl(k_{x}, \mu, l, k_{x}', \mu,l', E_{y}\bigr) \\& \quad =\frac{4\sqrt{2}e^{2}m_{\mu}^{*}}{ \hbar^{3}L_{x}} \frac{H(a)}{\sqrt{a}} \bigl[ \mathcal{F}_{ll'}^{\mu\mu}(E_{y}) \bigr]^{2} \frac{\lambda_{\mathrm{sr}}\Delta_{\mathrm{sr}}^{2}}{ (k_{x}-k_{x}')^{2}\lambda_{\mathrm{sr}}^{2}+2} \bigl[\delta\bigl(k_{x}'- \sqrt{a}\bigr)+\delta\bigl(k_{x}'+\sqrt{a}\bigr)\bigr], \end{aligned}$$
where \(H(x)\) is the Heaviside function, \(E_{y}(x,y, z) = -\frac{\partial V}{\partial y}\), \(\Delta_{\mathrm{sr}}\) and \(\lambda_{\mathrm{sr}}\) are the rms (root mean square) height and the correlation length of the fluctuations at the Si–SiO2 interface
$$\begin{aligned}& a=k_{x}^{2}+ \frac{2m_{\mu}^{*}}{\hbar^{2}} \bigl( \varepsilon^{\mu}_{l}-\varepsilon_{l'}^{\mu} \bigr) , \qquad \mathcal{F}_{ll'}^{\mu\mu}(E_{y})= \int\bigl(\chi^{\mu}_{l}\bigr)^{\star}(y,z)E_{y}(x,y,z) \chi^{\mu}_{l'}(y,z)\,dy\,dz \end{aligned}$$
and \(\chi^{\mu } _{l}\), \(\varepsilon^{\mu } _{l}\) are given by solving equation (2). All parameters are listed in Table 1.
4 Extended hydrodynamic model
One of the most popular approaches is to solve the MBTE in a stochastic sense by Monte Carlo (MC) methods [21, 27, 28, 29] or by using deterministic numerical solvers [27, 30]. However, the extensive computations required by both methods, as well as the noisy results obtained with MC simulations, make them impractical for device design on a regular basis.
Another alternative is to derive from the MBTE hydrodynamic models, which are a good engineering-oriented approach. This can be achieved by obtaining a set of balance equations by means of the so-called moment method. The idea is to investigate only some moments of interest of the distribution function. In this way a hierarchy of balance equations is obtained, which can be truncated at some order, provided the (unknown) higher-order moments as well as the production terms (i.e., the moments of the collisional operator) are determined. In the last two decades, the Maximum Entropy Principle (MEP) has been successfully employed to close this hierarchy of balance equations. Important results have also been obtained for the description of charge/thermal transport in devices made both of elemental and compound semiconductors, in cases where charge confinement is present and the carrier flow is two- or one-dimensional (see [31] for a review). By multiplying both sides of the MBTE equation (3) by the weight functions
$$\overrightarrow{\psi}=(1,v_{\mu}, \mathcal{E}_{\mu}, \mathcal{E}_{\mu}v_{\mu}),\qquad v_{\mu} = \frac{\hbar k_{x}}{ m_{\mu}^{*}},\qquad \mathcal{E}_{\mu} =\frac{\hbar^{2}k^{2}_{x}}{ 2m^{*}_{\mu}} $$
and integrating with respect to \(k _{x}\), balance equations are obtained in the moment unknowns \((\rho^{\mu } _{l},V _{l} ^{\mu },W _{l} ^{\mu },S _{l} ^{\mu })\), from which one can evaluate
$$\begin{aligned}& \rho=\sum_{\mu,l}\rho_{l}^{\mu} \quad \mbox{total linear density}, \end{aligned}$$
$$\begin{aligned}& V=\frac{\sum_{\mu,l}\rho_{l}^{\mu}V_{l}^{\mu}}{\rho}\quad \mbox{mean velocity}, \end{aligned}$$
$$\begin{aligned}& W=\frac{\sum_{\mu,l}\rho_{l}^{\mu}W_{l}^{\mu}}{\rho}\quad \mbox{mean energy}, \end{aligned}$$
$$\begin{aligned}& S=\frac{\sum_{\mu,l}\rho_{l}^{\mu}S_{l}^{\mu}}{\rho}\quad \mbox{mean energy flux}. \end{aligned}$$
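In code, these density-weighted averages are simple aggregations; the per-subband arrays below are placeholder stand-ins rather than solver output (a sketch):

```python
import numpy as np

# Placeholder per-(valley, subband) moments, not simulation results
rho_l = np.array([3.0e6, 1.2e6, 0.5e6, 0.2e6])   # linear densities (cm^-1)
V_l   = np.array([1.0e6, 0.9e6, 0.8e6, 0.6e6])   # mean velocities (cm/s)
W_l   = np.array([0.040, 0.041, 0.043, 0.046])   # mean energies (eV)
S_l   = np.array([4.1e4, 3.8e4, 3.5e4, 2.9e4])   # energy fluxes (eV cm/s)

rho = rho_l.sum()                 # total linear density
V = (rho_l * V_l).sum() / rho     # mean velocity
W = (rho_l * W_l).sum() / rho     # mean energy
S = (rho_l * S_l).sum() / rho     # mean energy flux
print(rho, V, W, S)
```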
By exploiting the MEP, constitutive relations for the higher-order moments and the production terms can be obtained (see [32] for the details). In this way a physics-based hydrodynamic model is obtained, consistent with thermodynamic principles, valid in a larger neighborhood of local thermal equilibrium, and free of any tunable parameters.
5 Results and discussion
The main goal of this paper is to check whether the above-mentioned extended hydrodynamic model is able to describe the quasi-equilibrium regime. Taking advantage of examples present in the literature, we have considered a Gate-All-Around (GAA) SiNW transistor with square cross section. This is a silicon nanowire with a gate wrapped around it, so that we have a three-contact device with source, drain, and gate. The device length is \(L _{x} = 120\) nm, the transversal dimensions are \(L _{y}=L _{z} \leq 10\) nm, and the oxide thickness \(t_{\mathrm{ox}}\) is 1 nm. The device is undoped, at room temperature, and its cross sections are shown in Fig. 1.
An important parameter characterizing the quasi-equilibrium regime, useful for benchmarking different technology options and device architectures, is the low-field mobility [33, 34]. It is defined as the ratio between the average electron velocity, evaluated in the stationary regime, and a low driving electric field E, i.e.,
$$ \mu^{\mathrm{low}}=\frac{\sum_{A}\rho^{A}\mu^{A}}{\sum_{A}\rho^{A}}, \qquad \mu^{A}= \frac{\sum_{l}V_{l}^{A}}{E}, \qquad \rho^{A}=\sum_{l} \rho_{l}^{A}, $$
where \(\mu^{A}\) are the mobilities in the respective valleys, evaluated as functions of the gate voltage \(V _{G}\). The subband densities \(\rho_{l}^{A}\) and velocities \(V_{l}^{A}\) are determined by solving the above hydrodynamic model with the following steps:
(i) Equilibrium solution
First of all, let us consider the thermal equilibrium regime where no voltage is applied to the contacts, i.e., \(V _{S}=V _{D}=V _{G} = 0\) and no current flows. Hence, the electron distribution function is the Maxwellian:
$$\begin{aligned}& f_{l}^{\mu(e q)}(k_{x})=N_{0} \exp \biggl(- \frac {\frac{\hbar^{2}k_{x}^{2}}{2m_{\mu}^{*}} +\varepsilon_{lx}^{\mu}+\varepsilon_{\mu}^{0}-\nu}{ k_{B}T}\biggr), \end{aligned}$$
where ν is the Fermi level, \(\varepsilon_{\mu}^{0}\) the valley energy minimum, and T the electron temperature, which we shall assume to be the same in each subband and equal to the lattice temperature \(T _{L}\). The condition of zero net current requires that the Fermi level must be constant throughout the sample, and it can be determined by imposing that the total electron number equals the total donor number in the wire. Then, the linear electron density at equilibrium is:
$$\begin{aligned}& \rho_{l}^{\mu(eq)}(x)= \frac{N_{D}L_{y}L_{z}\sqrt{m_{\mu}^{*}}}{ \mathcal{Z}^{(eq)}} \exp \biggl[\frac{-\varepsilon_{lx}^{\mu(eq)}-\varepsilon_{\mu}^{0}}{ k_{B}T}\biggr], \end{aligned}$$
$$\begin{aligned}& \mathcal{Z}^{(eq)}=\sum_{\mu,l} \sqrt{m_{\mu}^{*}}\exp\biggl[\frac {-\varepsilon_{lx}^{\mu(eq)}-\varepsilon_{\mu}^{0}}{ k_{B}T}\biggr], \end{aligned}$$
where the subband energies \(\varepsilon_{lx}\) are obtained by solving the SP system (2).
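In code, the equilibrium occupancies implied by Eqs. (13)-(14) reduce to Boltzmann weights; the subband bottoms used below are placeholders, not outputs of the SP solver:

```python
import numpy as np

kBT = 0.02585                                    # k_B * T at 300 K (eV)
# Placeholder subband bottoms eps_lx + eps_valley (eV) and masses (m_e units)
eps = np.array([0.000, 0.030, 0.030, 0.060, 0.117])
m   = np.array([0.27, 0.27, 0.27, 0.27, 0.94])   # Delta_4 ... Delta_2

w = np.sqrt(m) * np.exp(-eps / kBT)              # weights, Eqs. (13)-(14)
print((w / w.sum()).round(3))                    # fractional subband occupation
```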
(ii) Quasi-equilibrium solution
Now we consider the quasi-equilibrium regime, where a very small axial electric field, frozen along the channel (\(E= 1000\) V/cm), is applied, and we turn on the gate. The system is still in local thermal equilibrium and the distribution function is the Maxwellian, but some charge flows in the wire. The linear density can be written as
$$\begin{aligned}& \rho_{l}^{\mu}= \frac{N_{D}L_{y}L_{z}\sqrt{m_{\mu}^{*}}}{ \mathcal{Z}^{(eq)}} \exp\biggl[ \frac{-\varepsilon_{lx}^{\mu}-\varepsilon_{\mu}^{0}}{ k_{B}T}\biggr], \end{aligned}$$
where the only difference between equations (13) and (15) is in the energy subbands \(\varepsilon^{\mu } _{lx}\), which now are obtained solving the SP system (2) with \(V _{S} = 0.012\) V, \(V _{D} = 0\) V, and \(V _{G}\) variable. Once the solution has been obtained, the energies \(\varepsilon^{\mu } _{lx}\) and wave functions \(\chi^{\mu } _{lx}\) for each subband are fixed and exported into the hydrodynamic model.
(iii) Low-field mobility determination
Since the wire is undoped, with a frozen electric field along its x-axis, we can drop the spatial dependence in the hydrodynamic model, which reduces to a system of Ordinary Differential Equations. The energies \(\varepsilon^{\mu } _{lx}\) and wave functions \(\chi^{ \mu } _{lx}\) for each subband are imported from the previous steps (and kept fixed), as is the linear density (15), which is used as initial condition. The other initial conditions are
$$\begin{aligned}& V_{l}^{\mu}(0)=0,\qquad W_{l}^{\mu}(0)= \frac{1}{2}k_{B}T_{L}, \qquad S_{l}^{\mu}(0)=0. \end{aligned}$$
The hydrodynamic system has been solved using a standard Runge–Kutta algorithm. The simulation stops when the stationary regime has been reached, obtaining the subband densities and velocities; finally the low-field mobility (11) is evaluated.
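Once stationary subband densities and velocities are available, Eq. (11) reduces to the aggregation sketched below (the numbers are illustrative stand-ins for solver output):

```python
import numpy as np

E = 1000.0                                   # frozen axial field (V/cm)
# Stand-in stationary moments per valley (key) and subband (array entry)
rho = {"Delta4": np.array([3.0e6, 1.0e6]),   # linear densities (cm^-1)
       "Delta2": np.array([0.4e6])}
V   = {"Delta4": np.array([5.2e5, 4.8e5]),   # subband velocities (cm/s)
       "Delta2": np.array([2.1e5])}

mu_A  = {A: V[A].sum() / E for A in rho}     # per-valley mobility, Eq. (11)
rho_A = {A: rho[A].sum() for A in rho}
mu_low = sum(rho_A[A] * mu_A[A] for A in rho) / sum(rho_A.values())
print(f"low-field mobility ~ {mu_low:.0f} cm^2/(V s)")
```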
As a case study, we have fixed \(L _{y}=L _{z} = 8\) nm and run the code, changing \(V _{G}\). The numerical experiments indicate that it is sufficient to take into account only the first four subbands, since the others are very scarcely populated. For the solution of step (ii), the Schrödinger–Poisson block has been solved with a maximum of 25 iterations, with a CPU time of a few minutes. The subband energies \(\varepsilon^{\mu } _{lx}\) and wave functions \(\chi ^{\mu } _{l}(y,z)\) for the \(\Delta_{4}\) valley and the first four subbands are shown in Figs. 2–6 for \(V _{G} = 0.6\) V. We notice from Fig. 2 that, for \(\mu = 1\), the subband energies \(\varepsilon^{\mu } _{lx}\) coincide for \(l= 2\) and \(l= 3\) (see the dotted green and circled blue curves in Fig. 2), and the corresponding wave functions \(\chi ^{\mu } _{l} (y,z)\) show a symmetry (see Figs. 4 and 5). This behaviour is due to the square cross section, in accordance with the infinitely deep quantum wire case [35].
[Figure 2: Subband energies \(\varepsilon ^{\mu } _{lx}\) versus the longitudinal dimension x, for \(V _{G} = 0.6\) V]
[Figure 3: Subband wave function \(\chi ^{ \mu } _{l}(y,z)\) for \(\mu = 1\) (\(\Delta_{4}\) valley), subband \(l= 1\), in the cross section \(x= 60\) nm, for \(V _{G} = 0.6\) V]
[Figures 4–6: corresponding wave functions for subbands \(l= 2, 3, 4\)]
Regarding step (iii), the stationary regime of the hydrodynamic system has been reached within a few ps, and the CPU effort varies with the voltage \(V _{G}\), with a maximum of one hour.
The electron density (2)4 in the cross section \(x= 60\) nm, perpendicular to the transport direction, is shown in the Figs. 7, 8, 9 for \(V _{G} = 0.16,0.6,1\) V respectively. For small gate voltage, the volume charge is peaked in the center of the wire as shown in Fig. 7. As the gate voltage increases, the electron density is peaked close to the oxide interface (see Figs. 8 and 9). This phenomenon can be seen also in Fig. 10 where we plot the electron density (2)4 and total potential \(V _{\mathrm{tot}}\) in the cross section \(y= 0\) nm and \(x= 60\) nm, for \(V _{G} = 0.6\) V. In particular one can observe the effect of the wave function penetration in the oxide and the formation of a surface inversion layer, similar to a usual MOSFET channel.
[Figure 7: Electron density (2)4 in the cross section perpendicular to the transport direction (\(x= 60\) nm), for \(V _{G} = 0.16\) V]
[Figure 8: Electron density (2)4 in the same cross section, for \(V _{G} = 0.6\) V]
[Figure 9: Electron density (2)4 in the same cross section, for \(V _{G} = 1\) V]
[Figure 10: Electron density (2)4 and total potential \(V_{\mathrm{tot}}\) in the cross sections \(y= 0\) nm and \(x= 60\) nm, for \(V _{G} = 0.6\) V]
In Fig. 11 we show the low-field mobility as a function of the effective field, obtained by including/excluding the SR scattering mechanism. From this figure it is clear that SR is the key scattering mechanism, as it yields a very strong dependence of the low-field electron mobility on the effective field. The obtained results are very similar to those obtained by means of Monte Carlo simulations [21].
[Figure 11: Low-field mobility versus the effective field, obtained with/without the surface roughness scattering mechanism, for an 8 × 8 nm2 SiNW]
Finally, in Fig. 12 we show the low-field mobility as a function of the wire cross section (\(L _{y}=L _{z}\)), for some values of the effective field. We observe that the mobility decreases with shrinking wire cross section, in qualitative accordance with the results obtained by MC simulations by Ramayya et al. [21].
[Figure 12: Low-field mobility versus wire width and thickness (\(L _{y}=L _{z}\)), obtained with the surface roughness scattering mechanism, for some values of the effective field]
The presented results have been obtained using MATLAB running on an AMD Phenom II X6 1090T at 3.2 GHz with 8 GB RAM.
6 Conclusions
We have presented a theoretical study of the low-field electron mobility in a Gate-All-Around silicon nanowire transistor with rectangular cross section, based on a hydrodynamic model coupled to the Schrödinger–Poisson equations. The hydrodynamic model has been formulated by taking the moments of the multisubband Boltzmann equation and closing the resulting hierarchy of balance equations with the Maximum Entropy Principle. The most relevant scattering mechanisms, such as scattering of electrons with acoustic and non-polar optical phonons and surface roughness, have been included. The results show a good qualitative agreement with data available in the literature, confirming that this hydrodynamic model is valid in the quasi-equilibrium regime. The study of off-equilibrium transport phenomena as well as of thermoelectric effects for such structures, also using circular cross sections of the wire, will be the subject of future research.
We acknowledge the support of the project “Modellistica, simulazione e ottimizzazione del trasporto di cariche in strutture a bassa dimensionalità”, Università degli Studi di Catania—Piano della Ricerca 2016/2018 Linea di intervento 2.
Availability of data and materials
Authors’ contributions
All authors have jointly worked to the manuscript with an equal contribution. All authors read and approved the final manuscript.
This research has been supported by Università degli Studi di Catania.
Competing interests
The authors declare that they have no competing interests.
1. Singh N, Agarwal A, Bera LK, Liow TY, Yang R, Rustagi SC, Tung CH, Kumar R, Lo GQ, Balasubramanian N, Kwong D-L. High-performance fully depleted silicon nanowire (diameter ≤ 5 nm) gate-all-around CMOS devices. IEEE Electron Device Lett. 2006;27(5):383–6.
2. Guerfi Y, Larrieu G. Vertical silicon nanowire field effect transistors with nanoscale Gate-All-Around. Nanoscale Res Lett. 2016;11:210.
3. Mongillo M, Spathis P, Katsaros G, Gentile P, Franceschi SD. Multifunctional devices and logic gates with undoped silicon nanowires. Nano Lett. 2012;12(6):3074–9.
4. Pennelli G, Macucci M. Optimization of the thermoelectric properties of nanostructured silicon. J Appl Phys. 2013;114:214507.
5. Pennelli G. Review of nanostructured devices for thermoelectric applications. Beilstein J Nanotechnol. 2014;5:1268–84.
6. Li Q, Koo S-M, Edelstein MD, Suehle JS, Richter CA. Silicon nanowire electromechanical switches for logic device application. Nanotechnology. 2007;18(31):315202.
7. Cao A, Sudhölter EJR, de Smet LCPM. Silicon nanowire based devices for gas-phase sensing. Sensors. 2014;14:245–71.
8. Nehari K, Cavassilas N, Autran JL, Bescond M, Munteanu D, Lannoo M. Influence of band structure on electron ballistic transport in silicon nanowire MOSFETs: an atomistic study. Solid-State Electron. 2006;50:716–21.
9. Zheng Y, Rivas C, Lake R, Alam K, Boykin TB, Klimeck G. Electronic properties of silicon nanowires. IEEE Trans Electron Devices. 2005;52(6):1097–103.
10. Gnani E, Reggiani S, Gnudi A, Parruccini P, Colle R, Rudan M, Baccarani G. Band-structure effects in ultrascaled silicon nanowires. IEEE Trans Electron Devices. 2007;54(9):2243–54.
11. Neophytou N, Paul A, Lundstrom MS, Klimeck G. Bandstructure effects in silicon nanowire electron transport. IEEE Trans Electron Devices. 2008;55(6):1286–97.
12. Neophytou N, Kosina H. Atomistic simulations of low-field mobility in Si nanowires: influence of confinement and orientation. Phys Rev B. 2011;84:085313.
13. Shin M, Jeong WJ, Lee J. Density functional theory based simulations of silicon nanowire field effect transistors. J Appl Phys. 2016;119:154505.
14. Wang J, Lundstrom M. Does source-to-drain tunneling limit the ultimate scaling of MOSFETs? IEDM Tech Dig. 2002. p. 707–10.
15. Wang J, Rahman A, Ghosh A, Klimeck G. On the validity of the parabolic effective-mass approximation for the I–V calculation of silicon nanowire transistors. IEEE Trans Electron Devices. 2005;52(7):1589–95.
16. Neophytou N, Paul A, Lundstrom MS, Klimeck G. Simulations of nanowire transistors: atomistic vs. effective mass models. J Comput Electron. 2008;7:363–6.
17. Trellakis A, Galik T, Pacelli A, Ravaioli U. Iteration scheme for the solution of the two-dimensional Schrödinger–Poisson equations in quantum structures. J Appl Phys. 1997;81:7880–4.
18. Coco A, Russo G. Finite-difference ghost-point multigrid methods on Cartesian grids for elliptic problems in arbitrary domains. J Comput Phys. 2013;241:464–501.
19. Jin S, Tang T-W, Fischetti MV. Simulation of silicon nanowire transistors using Boltzmann transport equation under relaxation time approximation. IEEE Trans Electron Devices. 2008;55(3):727–36.
20. Castiglione T, Muscato O. Non-parabolic band hydrodynamic model for silicon quantum wires. J Comput Theor Transp. 2017;46(3):186–201.
21. Ramayya EB, Vasileska D, Goodnick SM, Knezevic I. Electron transport in silicon nanowires: the role of acoustic phonon confinement and surface roughness scattering. J Appl Phys. 2008;104:063711.
22. Murphy-Armando F, Fagas G, Greer JC. Deformation potentials and electron-phonon coupling in silicon nanowires. Nano Lett. 2010;10:869–73.
23. Wang J, Polizzi E, Ghosh A, Datta S, Lundstrom M. Theoretical investigation of surface roughness scattering in silicon nanowire transistors. Appl Phys Lett. 2005;87:043101.
24. Fischetti MV, Narayanan S. An empirical pseudopotential approach to surface and line-edge roughness scattering in nanostructures: application to Si thin films and nanowires and to graphene nanoribbons. J Appl Phys. 2011;110:083713.
25. Jin S, Fischetti MV, Tang T-W. Modeling of surface-roughness scattering in ultrathin-body SOI MOSFETs. IEEE Trans Electron Devices. 2007;54(9):2191–203.
26. Ruiz FJG, Godoy A, Gamiz F, Sampedro C, Donetti L. A comprehensive study of the corner effects in Pi-Gate MOSFETs including quantum effects. IEEE Trans Electron Devices. 2007;54(12):3369–77.
27. Lenzi M, Palestri P, Gnani E, Reggiani S, Gnudi A, Esseni D, Selmi L, Baccarani G. Investigation of the transport properties of silicon nanowires using deterministic and Monte Carlo approaches to the solution of the Boltzmann transport equation. IEEE Trans Electron Devices. 2008;55(8):2086–96.
28. Ramayya EB, Knezevic I. Self-consistent Poisson–Schrödinger–Monte Carlo solver: electron mobility in silicon nanowires. J Comput Electron. 2010;9:206–10.
29. Ryu H. A multi-subband Monte Carlo study on dominance of scattering mechanisms over carrier transport in sub-10-nm Si nanowire FETs. Nanoscale Res Lett. 2016;11:36.
30. Ossig G, Schürrer F. Simulation of non-equilibrium electron transport in silicon quantum wires. J Comput Electron. 2008;7:367–70.
31. Mascali G, Romano V. Exploitation of the maximum entropy principle in mathematical modeling of charge transport in semiconductors. Entropy. 2017;19(1):36.
32. Muscato O, Castiglione T. A hydrodynamic model for silicon nanowires based on the maximum entropy principle. Entropy. 2016;18:368.
33. Silvestri L, Reggiani S, Gnani E, Gnudi A, Baccarani G. A low-field mobility model for bulk and ultrathin-body SOI p-MOSFETs with different surface and channel orientations. IEEE Trans Electron Devices. 2010;57(12):3287–94.
34. Jin S, Fischetti MV, Tang T-W. Modeling of electron mobility in gated silicon nanowires at room temperature: surface roughness scattering, dielectric screening, and band nonparabolicity. J Appl Phys. 2007;102(12):083715.
35. Harrison P. Quantum wells, wires and dots. Chichester: Wiley; 2005.
Copyright information
© The Author(s) 2018
Authors and Affiliations
1. Department of Mathematics and Computer Science, University of Catania, Catania, Italy
2. School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK
|
65704e8ef0734a40 | Batsheva de Rothschild Seminar
ISF - Israel Science Foundation workshop on
Light-matter Interaction:
Focus on Novel Observable nonHermitian Phenomena
Kibbutz Ein-Gedi
April 21st-26th, 2013
The Israel Academy of Sciences & Humanities - The Batsheva de Rothschild Fund
Israel Science Foundation
Technion institute of advanced studies in theoretical chemistry
Nimrod Moiseyev (Technion)
Moti Segev (Technion)
Yaron Silberberg (Weizmann)
Doron Cohen (Ben-Gurion)
Gila Gutman: reservation, accommodation and transportation correspondence
Motivation for using non-Hermitian formalism
Michael Berry, H H Wills Physics Laboratory, University of Bristol, UK
NH: PT’s big brother
Hermitian (H) and PT-symmetric Hamiltonians are overlapping subsets of the class of all (mostly nonhermitian, NH) operators governing physical evolution. Recent claims that PT is physically more fundamental than H are critically examined. The interesting behaviour of nonhermitian PT-symmetric operators is associated with degeneracies, and is not characteristic of PT but rather is a property of the wider class of all NH operators, understood in light and atom optics for many years.
Avraham Nitzan, Tel-Aviv University
Non-Hermitian quantum mechanics in molecular transport problems
Molecular transport problems, by definition, involve open quantum systems, where descriptions by nonhermitian Hamiltonians evolve naturally from the need to describe such processes within a finite computational framework. The derivation of the golden rule from steady-state quantum mechanics provides the simplest example. The use of absorbing boundary conditions in scattering computations is another conceptually simple example that serves to illustrate the possibility to choose between phenomenological absorbing boundary conditions and exact calculation of the system's self-energy. More general situations that lead to stochastic dynamics will also be discussed, and the difference between population and phase relaxation will be emphasized. Finally, modifications of the system Hamiltonian that reflect measurement processes will be illustrated.
Lenz Cederbaum, Heidelberg University
ICD and Dynamic Interference in Free Electron Lasers by non-Hermitian Quantum Mechanics
Non-Hermitian degeneracy (Exceptional points)
Raam Uzdin , The Hebrew University of Jerusalem
Time dependent non-unitary systems and non-Hermitian resources
After briefly exploring a few effects unique to non-Hermitian time-dependent Hamiltonians, we will show that the energy difference of the instantaneous Hamiltonian does not completely capture the Hamiltonian's capability to change states. This is related to the fact that non-Hermitian degeneracies have another energy scale which does not appear in the Hermitian case. We will present an alternative formalism and then quantify the minimal resources needed for "magic" non-unitary operations such as "faster than Hermitian" motion and perfect state discrimination of non-orthogonal states.
Eva-Maria Graefe, Imperial College
Signatures of three coalescing eigenfunctions
Parameter-dependent non-Hermitian quantum systems typically not only possess eigenvalue degeneracies, but also degeneracies of the corresponding eigenfunctions at exceptional points. Here we present a characterisation of the behaviours of symmetric Hamiltonians with three coalescing eigenfunctions, using perturbation theory for non-Hermitian operators. Two main types of parameter perturbations need to be distinguished, which lead to characteristic eigenvalue and eigenvector patterns under cyclic variation. A physical system is introduced for which both behaviours might be experimentally accessible.
Uwe Guenther ,Helmholtz center Dresden
Nonlinear PT-symmetric plaquettes
Nonlinear coupled gain-loss oscillator oligomers (plaquettes) of 4-node and 5-node type in a 2D plane are studied. Their specific PT-symmetry properties are investigated, as well as the occurrence of exceptional points (up to third order) and their various nonlinear dynamical regimes.
collaboration with: Kai Li, Panayotis Kevrekidis and Boris Malomed
paper: J Phys A 45 (2012) 444021
Realization of exceptional points
Achim Richter , Technische Universität Darmstadt, Germany
Exceptional Points in Microwave Billiards with and without Time Reversal Symmetry *
After a brief introduction into generic features of microwave resonators as quantum billiards and open scattering systems, it is discussed how eigenvalues, eigenfunctions and eigenvectors of a dissipative time-reversal invariant and non-invariant system, respectively, close to and at an exceptional point (EP) are determined experimentally [1, 2]. The full EP Hamiltonian can be extracted from the measured scattering matrix. While the EP is encircled, the eigenvectors gather geometric phases and, in addition, amplitudes different from unity. Finally, the presence of parity-time (PT) symmetry for the non-Hermitian two-state Hamiltonian in the vicinity of the EP is shown.
* Work supported within the SFB634 by the Deutsche Forschungsgemeinschaft
[1] C. Dembowski, H.-D. Gräf, H.L. Harney, A. Heine, W.D. Heiss, H. Rehfeld, and A. Richter, Phys. Rev. Lett. 86, 787 (2001)
[2] B. Dietz, H.L. Harney, O.N. Kirillov, M. Miski-Oglu, A. Richter, and F. Schäfer, Phys. Rev. Lett. 106, 150403 (2011)
[3] S. Bittner, B. Dietz, U. Günther, H.L. Harney, M. Miski-Oglu, A. Richter, and F. Schäfer, Phys. Rev. Lett. 108, 024101 (2012)
Demetrios Christodoulides, University of Central Florida
Non-Hermitian and PT-symmetric Optical Systems
Holger Cartarius, Stuttgart University
Exceptional points and PT symmetry in Bose-Einstein condensates
Work done together with: D. Dast, R. Eichler, D. Haag, W. D. Heiss, M. Kreibich, J. Main, G. Wunner
In Bose-Einstein condensates exceptional points appear in various cases. They are, e.g., connected with stability thresholds of the s-wave scattering length below which a collapse of the condensate sets in. Recently it has been shown that exceptional points also appear for Bose-Einstein condensates in an external PT-symmetric potential. To study the effects of PT symmetry in Bose-Einstein condensates we follow a proposal by Klaiman et al. of a Bose-Einstein condensate in a double-well setup. Particles removed from one well and coherently injected into the other can be described by an imaginary gain-loss parameter rendering the external potential complex PT-symmetric. It has been shown that PT-symmetric wave functions exist and thus the PT symmetry is not destroyed by the nonlinearity of the Gross-Pitaevskii equation. In agreement with their counterparts in linear quantum mechanics they merge in a second-order exceptional point (EP2). A crucial difference is the bifurcation of PT-broken states from one of the PT-symmetric eigenstates for gain-loss parameters far below the EP2. The new bifurcation point turns out to be a third-order exceptional point (EP3). We investigate the exceptional points appearing in Bose-Einstein condensates and describe their physical consequences, i.e. PT symmetry breaking and an important influence on the dynamics of the condensate's wave function. Suggestions for experimental realizations are presented.
Stefan Rotter ,Vienna University, Institute for Theoretical Physics
Pump-Controlled Exceptional Points and Random Laser Emission
I will show that the above-threshold behavior of a laser can be strongly affected by exceptional points which are induced by pumping the laser non-uniformly [1]. In the vicinity of these points the laser may turn off even when the overall pump power deposited in the system is increased. In extension of this work, we could recently demonstrate that suitably optimized pump profiles allow us to control the angular emission pattern of a random laser such as to achieve highly directional emission or any other desired pattern [2].
[1] Liertzer, Ge, Cerjan, Stone, Tureci, and Rotter, Phys. Rev. Lett. 108, 173901 (2012).
[2] Hisch, Liertzer, Pogany, Mintert, Rotter (to be submitted)
Dynamical control by external fields
Daniel Strasser ,The Hebrew University of Jerusalem
Multiple detachment of SF6- molecular ions by shaped intense laser pulses
Since the development of intense femtosecond laser technologies, significant effort has been made towards understanding the basic interaction of such intense laser pulses with matter. In particular, the study of multiple ionization of neutral atoms and high-order harmonic generation led to the development of a simplified semi-classical understanding of these highly non-perturbative mechanisms, commonly known as the "three step model". Furthermore, this basic understanding enabled the technological breakthroughs that allow generating record-breaking attosecond pulses, and performing time-resolved experiments on the attosecond timescale. In my presentation I will discuss our recent experimental work, using fast-beam photofragment spectroscopy to explore the interaction of shaped intense laser pulses with the SF6- negatively charged molecular ion.
Peter Schmelcher, Centre for Optical Quantum Technologies, Luruper Chaussee 149, 22761 Hamburg, and
Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Ultralong-range molecules in external fields
We explore the possibility to shape and control the properties and behaviour of ultralong-range molecules with external electric, or electric and magnetic, fields. The interaction of the neutral ground-state atom with the highly excited Rydberg electron is described via a Fermi pseudopotential approximation, including s-wave and p-wave contributions. Using a non-relativistic pseudopotential for the binding of the electron to the ionic core, which properly describes the quantum defects of the scattering of the electron off the core, a basis is obtained within which the combined molecular Rydberg-perturber Hamiltonian in the presence of the external fields is diagonalized. As a result the adiabatic Born-Oppenheimer potential energy surfaces, which determine the molecular configurations and dynamics, are obtained. Varying the electric field strength, the oscillating energy surfaces, possessing many well-separated and pronounced local potential wells, can be shifted with respect to each other. In particular, it is possible to avoid the intersection of p-wave dominated states with s-wave dominated ones, which is ubiquitous in the absence of fields, close to the equilibrium vibrational ground states. This leads to an enhanced stability of the ultralong-range molecular states. The corresponding vibrational dynamics is analyzed. Finally, we present a preparation scheme for high-l molecular electronic states via a two-photon excitation process.
In a second step, we show the existence of ultra-long-range giant dipole molecules in crossed electric and magnetic fields, formed by a neutral alkali ground-state atom that is bound to the decentered electronic wave function of a giant dipole atom. Giant dipole atoms in crossed fields are of peculiar shape: their highly excited electronic wave function is, due to the combined action of electric and magnetic fields, strongly decentered and localized completely off the ionic core. The neutral ground-state atom is bound to this isolated Rydberg cloud and, opposite to the standard e.g. trilobite molecules, does not possess any radiative decay channel. The resulting molecules are truly giant, with internuclear distances up to several $\mu m$. We analyze the resulting three-dimensional adiabatic potential energy surfaces depending on the degree of electronic excitation: disturbed torus-like multi-well structures are observed for excited states which can, due to avoided crossings, show an increasing topological complexity. Binding energies and the vibrational motion in the energetically lowest surfaces are analyzed by means of perturbation theory and exact diagonalization techniques.
Finally, we demonstrate the existence of intersection manifolds of excited electronic states that potentially lead to an 'ultrafast' vibrational decay of the ground-state atom dynamics.
Non-Hermitian optics
A. Douglas Stone, Dept. of Applied Physics, Yale University
For Light-Matter Interactions: Focus on Novel Observable Non-Hermitian Phenomena
Lasers and Anti-lasers: Non-Hermitian to the Max
Amplifiers and attenuators in optics are prototype non-hermitian systems described by non-energy conserving
electromagnetic wave equations. A laser at threshold is the extreme case where the amplification goes to
infinity, the system starts to self-oscillate, and the non-linearity of the gain medium is required to stabilize the
steady-state. Recently we have developed, and will review here, Steady-state Ab initio Laser Theory (SALT),
which calculates directly all of the lasing properties in the non-linear multimode steady-state, treating the non-hermiticity exactly, for complex laser cavities and pumping schemes. SALT emphasizes that a laser is a
certain kind of non-hermitian scattering system, and the laser threshold corresponds to a pole of the S-matrix
on the real axis. Applying time-reversal to the threshold laser equation maps it to an S-matrix with a zero on
the real axis, describing a coherent perfect absorber (CPA), a lossy cavity which perfectly absorbs the time-reverse of the lasing mode in steady state. Finally, a cavity with parity-time symmetry, and sufficient
gain/loss, can function simultaneously as a CPA and laser, with a pole and zero coincident on the real axis.
More generally, cavities with non-uniform gain or loss are described by non-hermitian equations, which have
exceptional points that determine the motion of poles and zeros, and lead to novel, observable optical
phenomena involving lasing and perfect absorption.
Tsampikos Kottos, Wesleyan University
Taming wave propagation via Parity-Time symmetry
Using integrated photonics and electronics as playing fields, we show how one can construct new circuitry designs that allow for asymmetric wave transport by making use of novel properties emerging from PT-symmetries.
Jan Wiersig, University of Magdeburg
Non-Hermitian phenomena in passive optical microcavities
Optical microcavities are inherently open systems due to leakage of light through the cavity’s boundary.
In the past such systems have been successfully described by non-Hermitian effective Hamiltonians.
In this talk, two interesting phenomena based on the non-Hermiticity are discussed. The first one can be
observed when two microdisk cavities are coupled to form a photonic molecule. Experimental and numerical
data are presented which demonstrate that near an avoided resonance crossing the quality factor of one mode
can significantly increase above the non-resonant values of both modes.
The second phenomenon shows up in deformed or otherwise slightly perturbed microdisks which do not
possess any mirror symmetry. A theoretical analysis and numerical simulations show that the long-lived
modes in such cavities come in highly non-orthogonal pairs of modes. Within each pair a propagation
direction (clockwise or counterclockwise) dominates. The physical origin of this ‘chirality’ is asymmetric
backscattering of clockwise and counterclockwise propagating waves. This phenomenon is linked to the
presence of non-Hermitian degeneracies. Finally, such asymmetric cavities are coupled to form optical
waveguides with unusual dispersion properties.
Non-Hermitian optics (cont.)
Hui Cao, Yale University
Time-Reversed Lasing and Interferometric Control of Absorption
Optical absorption is usually considered deleterious, something to avoid if at all possible. We show that perfect absorption not only leads to complete coupling of light into plasmonic nanostructures, but also
produces subwavelength focusing by suppressing diffraction. Based on the time-reversed process of lasing
action, we propose and demonstrate an interferometric control of coherent light absorption.
Mikael Rechtsman, Technion
Photonic Topological Insulators
Non-Hermitian optics (cont.)
Barbara Dietz, TU Darmstadt
Bound States in Bent Waveguides: Analytical and Experimental Approach
Bound states in quantum wires or open electromagnetic waveguides with curves, bends or bulges have
received much interest since their existence is a purely wave-dynamical phenomenon which has no classical
analogue. We present microwave experiments showing that sharply bent waveguides may have arbitrarily many bound
states depending on the angle of the bend: new bound states emerge at certain critical angles. Several
waveguides with bending angles close to these critical ones were investigated. In particular, we studied the
features of the transition from bound to unbound states caused by the variation of the bending angle.
Furthermore, the effect of the finite length of the waveguide was investigated. With the help of an effective
potential approach we computed the bound states and the critical bending angles [1]. The analytical results
were confirmed by numerical calculations as well as experimental measurements of the spectra and electric
field intensity distributions of electromagnetic waveguides.
The work was supported by the DFG within SFB 634.
[1] S. Bittner, B. Dietz, M. Miski-Oglu, A. Richter, C. Ripp, E. Sadurní, and W. P. Schleich, Phys. Rev. E, in press.
Konstantinos G. Makris, Department of Electrical Engineering, Princeton University
Wave propagation in PT-symmetric optical potentials
Parity-time (PT) symmetry breaking was recently observed experimentally for the first time in any physical system. In this context of PT-symmetric optics, we examine the characteristics of multimode PT-symmetric potentials. The existence of many spatial supermodes leads to multiple spectral phase transitions, and vortex optical currents in the transverse Poynting vector. Recent results regarding diffraction dynamics close to the exceptional points of PT lattices, negative refraction effects, as well as scattering in PT optical
cavities will also be presented.
Li Ge, Princeton University
Antisymmetric PT-photonic structures with balanced positive and negative index materials
In this talk I will discuss a new class of synthetic optical materials in which the refractive index satisfies
n(-x)=-n*(x). We term such systems antisymmetric parity-time (APT) structures. Unlike PT-symmetric
systems which require balanced gain and loss, i.e. n(-x)=n*(x), APT systems consist of balanced positive and
negative index materials. Despite the seemingly PT-symmetric optical potential V(x)=n(x)^2\omega^2/c^2,
APT systems are not invariant under combined PT operations due to the discontinuity of the spatial derivative
of the wavefunction. We show that APT systems can display intriguing properties such as spontaneous phase
transition of the scattering matrix, bidirectional invisibility, and a continuous lasing spectrum.
Wednesday 24/4/2013
Strong Laser Physics
Nirit Dudovich, Weizmann Institute of Science
When does an electron exit a tunneling barrier?
When induced by an intense laser field, electron tunneling from atoms and molecules initiates a broad range
of processes that evolve on the attosecond time-scale [1,2]. As the liberated electron is driven by the laser field, it
can return to the parent ion and recombine to the initial (ground) state, releasing its energy in an attosecond
burst of light. This process, known as High Harmonic Generation (HHG), provides an excellent spatio-temporal filter for the electron motion. The angstrom-scale spatial resolution is determined by the size of the
atomic ground state to which the electron must recombine. The attosecond temporal resolution arises from the
mapping between the photon energy (harmonic order) and the return time of the corresponding electron.
In the talk I will describe how adding a weak perturbation allows us to probe both the ionization times and the recollision times in simple atomic systems [3]. Our results, which deviate from the simple classical model, are
in good agreement with the quantum path analysis. Next, I will describe how a similar approach enables us to
measure the instantaneous tunneling probability within the optical cycle. Finally, I will discuss the probing of
molecular systems where more than one ionization channel participates in the process [4,5]. As an example I will
show how multiple channel ionization is probed in aligned CO2 molecules. I will describe how the high
sensitivity of the measurement allows us to probe subtle differences between two ionization channels [3]. This
experiment provides an additional, important step towards the ability to resolve multielectron phenomena -- a
long term goal of attosecond studies.
1. L. V. Keldysh, Ionization in the field of a strong electromagnetic wave, Sov. Phys. JETP 20, 1307–1314 (1965).
2. P. B. Corkum and F. Krausz, Attosecond science, Nature Physics 3, 381 (2007).
3. D. Shafir et al., Resolving the time when an electron exits a tunneling barrier, Nature 485, 343 (2012).
4. B. K. McFarland, J. P. Farrell, P. H. Bucksbaum and M. Gühr, High Harmonic Generation from Multiple Orbitals in N2, Science 322, 1232–1235 (2008).
5. O. Smirnova et al., High harmonic interferometry of multi-electron dynamics in molecules, Nature 460, 972–977 (2009).
Oren Cohen, Technion
Generation of high-order harmonics with controlled polarization: from linear through
elliptic to circular polarization
We demonstrate, theoretically and experimentally, that the polarization of high-order harmonics driven by counter-rotating elliptically-polarized bichromatic pulses is fully controllable: from linear through elliptic to
circular polarization. We also observe new selection rules.
Barak Dayan, Weizmann Institute of Science
Strong Laser Physics (cont.)
Avner Fleischer, Technion
Where does a photo-electron appear in the continuum?
In tunnel ionization of atoms by strong laser fields it is known that the electron appears in the continuum at
the outer turning point of the tunnel barrier. It is reasonable to assume that a reduction in the number of
photons which participate in the ionization will shift the appearance location of the ionized electron towards
the origin. Here we verify this assumption and suggest a measurement that could reveal where an electron
appears in the continuum. As a result of interferences between the electronic wavelets which are released into
the continuum, the photoionization rate of atoms is modified by the presence of a weak dc field in an
oscillatory manner. By measuring the phase of the oscillations, the average appearance position of the photo-ionized electron in the continuum can be retrieved.
Vitali Averbukh, Imperial College
New ideas for attosecond time-resolved spectroscopies of electron hole dynamics
In this talk I will present two of our recent ideas for new attosecond time-resolved measurements of electron
hole dynamics:
* Single-photon laser enabled Auger spectroscopy
* High-harmonic generation spectroscopy of Auger-type transitions
Unlike attosecond streaking, the proposed spectroscopies do not rely on photo- or secondary electron emission
and are applicable to ultrafast electronic processes involving bound-bound transitions, such as electron
correlation-driven charge migration. We simulate the new attosecond spectroscopies using both model and
ab initio methods. Specific applications include hole migration in glycine, atomic Auger and Coster-Kronig
decays as well as quasi-exponential dynamics of molecular orbital breakdown in trans-butadiene.
Alexandra Landsman, Princeton
Strong Laser Physics (cont.)
Zohar Amitay, Technion
Coherence Generation and Control in the Free-to-Bound Binary Reaction of Hot Atom-Atom Photoassociation
A long-standing yet unrealized dream since the early days of coherent control is coherent control of general
photo-induced bimolecular chemical reactions. Realizing this dream will create a new type of photochemistry
that coherently photo-induces new chemical reactions and selectively controls the yields and branching ratios
in existing and new photo-reactions.
As significant steps along this direction, we present here experimental and theoretical control results for the
free-to-bound binary reaction of multiphoton femtosecond photoassociation of thermally hot atoms (hot fs-PA). In this process, a (shaped) femtosecond pulse induces chemical bond formation between the colliding
hot atoms via a free-to-bound multiphoton transition, and generates a bound excited diatomic molecule.
Coherent control of hot fs-PA is, on one hand, an important model for femtosecond control of simple binary
photo-reactions and, on the other hand, an essential prerequisite for femtosecond control of more complicated
binary reactions using extended multi-pulse schemes that employ the fs-PA as a first step.
Our results are presented for fs-PA of hot magnesium atoms into bound magnesium dimer molecules, i.e.,
Mg + Mg → Mg2*, with the thermal ensemble of Mg atoms held at a temperature of 1000 K. The process is
induced by intense (shaped) femtosecond pulses (70-fs transform-limited duration; 840-nm central
wavelength) in the strong-field regime.
The experimental results, accompanied by a comprehensive theoretical model and calculations, include the
first-time demonstration and observation of the following: (i) The formation of diatomic molecules with
vibrational and rotational coherence in the process of photoassociation. This corresponds to the generation of
the photo-associated molecules in a coherent superposition of rovibrational states. Generating vibrational
coherence is essential in order to utilize hot fs-PA as a basis for chemical reaction coherent control, since the
vibrational degrees of freedom determine the fate of bond making and breaking. (ii) Inducing the fs-PA
process via multiphoton excitations. This significantly extends the variety of molecular species and reactions
that are candidates for chemical reaction control. (iii) Femtosecond coherent control of the photoassociation
process. In the first part, the multiphoton photoassociation probability (yield) is coherently controlled by
linearly-chirped shaped pulses, and a very strong enhancement is achieved with a proper positive chirp. Then,
PA yield control that involves also intermediate field-free coherent rovibrational dynamics is demonstrated
using shaped pulses with triangular spectral phase patterns (in the shape of V or Λ).
Pananghat Balanarayan, Technion
High-frequency strong laser physics and chemistry: Linear Stark effect for atoms and strong chemical
bond in rare-gas dimer
Current trends in laser technology have reached the regime of studying atoms stabilized against ionization,
going beyond perturbation theory.
In this work, properties of a laser-dressed sulfur atom are examined in this stabilization regime. The electronic
structure of a sulfur atom changes dramatically as it interacts with strong high frequency laser fields.
Degenerate molecular-like states are obtained for the ground state triplet of the laser-dressed sulfur atom for
high-frequency and moderate intensity laser parameters. The degenerate ground state is obtained for a laser
intensity which is smaller by more than one order of magnitude than the intensity required for hydrogen
atoms due to many electron screening effects. An infinitesimally weak static field mixes these degenerate
states to give rise to asymmetric states with large permanent dipole moments.
Hence, a strong linear Stark effect rather than the usual quadratic one is obtained.
The van der Waals complex of the helium dimer has a very small binding energy and a bond distance of 52
Angstrom. The free field potential has been found to support a single vibrational bound state, that has been
detected in diffraction grating experiments. With a high-intensity, high-frequency laser, stable helium dimer molecules with a binding energy of 12.5 eV, a bond much stronger than that of conventional molecular hydrogen, are produced. The bond distance of the dimer produced by the strong laser field is equal to
2.01 Angstroms. All these effects are seen in the high frequency high intensity regime of the laser within the
zeroth-order Kramers-Henneberger states of the laser-dressed atoms and molecules.
Ido Gilary, Technion
Time asymmetric state exchange mechanism
Dynamical control in optical lattices
Immanuel Bloch, Max Planck Institute for Quantum Optics
From Rydberg Crystals to Bound Magnons - Probing the Non-Equilibrium Dynamics of Ultracold
Atoms in Optical Lattices
Ultracold atoms in optical lattices form an ideal testbed to probe the non-equilibrium dynamics of quantum many-body systems. In particular, recent high-resolution imaging and control techniques allow one to probe dynamically evolving non-local correlations in an unprecedented way. As an example, I will focus in my talk on the dynamical excitation of spatially ordered Rydberg structures that are formed through laser excitation from ground-state Mott insulating atoms. In addition, I will show how single-spin and spin-pair impurities can be used to directly reveal polaron dynamics in a strongly interacting superfluid or the bound state of two
magnons in a quantum ferromagnet. New atom interferometric schemes to directly probe the Green's function
of a many-body system through the impurity dynamics will be discussed.
Hossein Sadeghpour, ITAMP, Harvard
Anomalous heating noise in ion traps: what is it and can it be mitigated?
Dynamical control by external fields (cont.)
Uri Peskin, Technion
Coherently controlled conductance in single molecule junctions
Molecular junctions, in which a single molecule is coupled to two macroscopic electrodes, provide a unique
scenario to study charge and energy transport through molecular systems in both equilibrium and nonequilibrium conditions. In the talk, we shall discuss new possibilities for utilizing the unique transport
properties of molecular junctions for electronic and energy-conversion devices. Having a molecule as the
‘bottle neck’, the characteristic length and time scales imply that coherent (phase conserving) transport
dominates the transport properties, and suggests that these systems can be coherently controlled. In particular
we shall focus on a molecular coherent ‘electron pump’ which converts radiation field into directed electronic
current. The principle of operation will be analyzed theoretically under conditions ranging from a sudden
pulse to cw excitation, and the principles of coherent control by the radiation field in the presence of decoherence by the leads will be outlined. Finally, we shall propose an experimental design for measuring field-induced dynamics in molecular junctions (with sub-picosecond resolution) using the steady-state current.
Johannes Feist, Universidad Autónoma de Madrid, Spain
Condensation of ultralight particles: towards a quantum degenerate gas of plexcitons
Condensation of bosons, where a single quantum state is macroscopically populated, is a fascinating
phenomenon spanning diverse areas of physics. It lies at the heart of superfluidity and superconductivity, as
well as the Bose-Einstein condensation of ultracold dilute atoms, which was achieved close to twenty years
ago. An important goal is to find systems in which condensation takes place at higher temperatures, even at or
above room temperature. Bosonic quasi-particles in solids are excellent candidates due to their light effective
masses, and signatures of condensation have been observed for semiconductor excitons and exciton-polaritons
at temperatures of a few K, as well as for magnons and cavity photons at room temperature.
I will present and discuss experimental signatures of quantum condensation of plexcitons at room
temperature. Plexcitons are bosonic quasiparticles formed by strong coupling between organic molecule
excitons and surface plasmon polaritons (quasi-bound modes confined on a subwavelength scale at metal-dielectric interfaces). Our system consists of a periodic array of metallic nanorods covered by a polymer layer
doped with an organic dye. By increasing the plexciton density through optical pumping, we observe
signatures of thermalization and condensation, such as the emergence of Bogoliubov-Goldstone modes,
despite the nonequilibrium character of this driven and dissipative system.
Ed Narevicius, Weizmann Institute of Science
Chemistry of the Quantum Kind
There has been a long-standing quest to observe chemical reactions at low temperatures where reaction rates
and pathways are governed by quantum mechanical effects. So far this field of Quantum Chemistry has been
dominated by theory. The difficulty has been to realize in the laboratory low enough collisional velocities
between neutral reactants, so that the quantum wave nature could be observed. We will discuss our merged
neutral supersonic beams method that enabled the observation of clear quantum effects in low temperature
reactions. We observed orbiting resonances in the Penning ionization reaction of argon and molecular
hydrogen with metastable helium, leading to a sharp increase in the absolute reaction rate in the energy range corresponding to a few kelvin down to 10 mK. Our method is widely applicable to many canonical
chemical reactions, and will enable experimental studies of Quantum Chemistry.
Quantum chaos and complexity in decay, emission, and
relaxation processes
Patrick Sebbah, Langevin Institute
Emission control of a random laser: turning a random laser into a tunable single-mode laser by active
pump shaping
We present an innovative mirrorless optofluidic random laser where the optical cavity has been replaced by a
random scattering structure. We achieve emission control at any desired wavelength by iteratively shaping the
optical pump profile. This method is proposed to explore pump-induced exceptional points in lasers.
Ulrich Kuhl, Marburg University
Experimental realization of resonance-assisted tunneling in open systems
In quantum mechanical billiards with a mixed phase space, direct tunneling from regular islands to the chaotic sea is well known [1,2]. For quantum mechanical maps it was shown that the tunneling rates are determined by another effect, so-called resonance-assisted tunneling [3], which is an indirect process via another stable island. To verify this theory, a cosine-shaped microwave resonator with absorbers was designed, where the absorbers are located in such a way that the chaotic sea is replaced by the continuum. The experimental results are in good agreement with the theoretical predictions.
Ofir Alon, Haifa University
Many-body decay by tunneling and wave chaos in BECs
Novel phenomena in control of light-matter interactions
David Tannor, Weizmann Institute of Science
Phase Space Approach to Quantum Mechanical Calculations for Large Systems: Application to
Attosecond Electron Dynamics
We present a method for solving both the time-independent and time-dependent Schrödinger equations based
on the von Neumann (vN) lattice of phase space Gaussians. By incorporating periodic boundary conditions
into the vN lattice we solve a longstanding problem of convergence of the vN method. This opens the door to
tailoring quantum calculations to the underlying classical phase space structure while retaining the accuracy of
the Fourier grid basis. In the classical limit the method reaches the remarkable efficiency of 1 basis function
per 1 eigenstate. The method can be combined with a wavelet-like scaling of the basis functions, which is
particularly useful for Coulombic potentials. We illustrate the method by calculating the vibrational dynamics
of polyatomic systems as well as simulating attosecond electron dynamics in the presence of combined strong
XUV and NIR laser fields.
1. A. Shimshovitz and D. J. Tannor, Phase Space Approach to Solving the Time-independent Schrödinger Equation, Phys. Rev. Lett. 109, 070402 (2012).
2. N. Takemoto, A. Shimshovitz and D. J. Tannor, Phase Space Approach to Solving the Time-dependent Schrödinger Equation: Application to the Simulation and Control of Attosecond Electron Dynamics in the Presence of a Strong Laser Field, J. Chem. Phys. 137, 011102 (2012) (Communication).
3. A. Shimshovitz and D. J. Tannor, Phase Space Wavelets for Solving Coulomb Problems, J. Chem. Phys. 137, 101103 (2012) (Communication).
Sandy Ruhman, The Hebrew University of Jerusalem
Time asymmetric dynamics in biological systems
Sunday, March 28, 2010
Same old 'SS Edmund Fitzgerald' tragedy. Nothing new!
This news from the Chicago Sun-Times this morning, entitled "New wave hits 'Edmund Fitzgerald'", naturally caught my attention:
The legend lives on from the Chippewa on down, but singer Gordon Lightfoot says he plans to change the lyrics to his song, the "Wreck of the Edmund Fitzgerald," after researchers concluded that a gigantic, 50-foot rogue wave -- not human error -- was responsible for sinking the ship.
The Edmund Fitzgerald left Superior, Wis., on the evening of Nov. 9, 1975, bound for Zug Island, near Detroit. The next day, it encountered a fierce storm and sank. Twenty-nine lives were lost -- the greatest disaster in the history of the Great Lakes. The U.S. Coast Guard concluded that the boat sank because the crew left the cargo hatches open, allowing the holds to fill with water.
In the show "Dive Detectives," a new series for History Television, a diving team deployed wave-generating technology to simulate the conditions faced by the Edmund Fitzgerald. The tests demonstrate how the force of the freak wave, crashing down on the midsection of the boat -- already low in the water because of its heavy cargo --- might have caused it to split in two.
Lightfoot said the conclusion is "definitive." Instead of singing "at 7 p.m. a main hatchway caved in," he'll sing "at 7 p.m. it grew dark, it was then, He said, 'Fellas, it's been good to know ya.' " Scripps Howard News Service
I am a little underwhelmed by this news story. I have never thought the tragedy of the Fitzgerald in 1975 was due to human error, since I have always regarded the official report, whatever was in it, as just perfunctory, duty-bound speculation not worthy of much attention. Now this "new" wave-generating technology simulation is simply another speculation. We don't know what happened, and no one can recreate the conditions the Fitz encountered. Remember that other ships went through the same conditions that night safely. So we did not know what happened to the SS Edmund Fitzgerald that night, and now, over 35 years later, we still don't know. Mr. Lightfoot is certainly entitled to change his lyrics as he wishes. The fact is that there is nothing new in this new simulation/speculation effort. And there is just nothing we can do about it either! By the way, there was a fabulous hindcast study by NOAA Weather Service scientists not long ago, showing that the time and location where the Fitz was lost was the worst spot at the worst time, as far as the storm waves for that night can be hindcast. That should be sufficient evidence that the Fitz's tragedy was caused by waves more than anything else. Gordon Lightfoot's new lyric "at 7 p.m. it grew dark" is really an understatement. I really could not see that this new effort as reported, while commendable, added anything new to the knowledge base!
Reading II on Palm Sunday of the Lord’s Passion
Christ Jesus, though he was in the form of God,
did not regard equality with God
something to be grasped.
Rather, he emptied himself,
taking the form of a slave,
coming in human likeness;
and found human in appearance,
he humbled himself,
becoming obedient to the point of death,
even death on a cross.
Because of this, God greatly exalted him
and bestowed on him the name
which is above every name,
that at the name of Jesus
every knee should bend,
of those in heaven and on earth and under the earth,
and every tongue confess that
Jesus Christ is Lord,
to the glory of God the Father.
(Phil. 2:6-11)
Wednesday, March 24, 2010
The Louis Majesty case
When freaque waves are encountered, we don't usually know what happened. News media reporters mostly put together comments and eyewitness accounts, but clearly no one is able to sort out the important key facts. The freaque waves world had been quiet for some time, but in early March an encounter by a cruise ship in the Mediterranean made worldwide news. According to Google, at least 1,300 news items have been written and published all around the globe.
An example of a typical account can be represented by this AP report on March 3:
ATHENS, Greece (AP) - Greek and Cypriot officials say 26-foot waves have crashed into a cruise ship with nearly 2,000 people on board off France, smashing glass windshields and killing two passengers.
Another six people suffered light injuries, a Greek coast guard statement says.
It says the accident occurred near the French Mediterranean port of Marseilles on Wednesday as the Cypriot-owned Louis Majesty was sailing from Barcelona to Genoa in Italy with 1,350 passengers and 580 crew.
The victims were only identified as a German and an Italian man.
Louis Cruise Lines spokesman Michael Maratheftis said the ship was hit by three "abnormally high" waves up to 26 feet (8 meters) high that broke glass windshields in the forward section. It is heading back to Barcelona.
That’s still about all we know at this time, even after all these days since the happening. Note that two days after the encounter, on March 5, this Wired Science article attempted to put more analysis and science into it but did not really succeed! They merely put in plenty of jargon and extra known facts, and even talked to an oceanographer, which is commendable but does not really clear things up. Mainly the article tried to imply that the wave encountered by the cruise ship, Louis Majesty, was not very large and does not fit the “official definition” of a freaque wave. Here’s my comment for the “experts”: when a wave causes damage and casualties, it is a freaque wave, no matter what the “official definition” is. (Freaque waves may have some general indications or guidelines, but no universally accepted “official” definition yet! And armchair analysis can cause more confusion!)
The article from World News Australia, also on March 5, which attempted the same kind of analysis and reporting as the Wired report, has this statement: “Experts say the waves are almost always generated by storm-related winds . . . “ Now it is true that ocean waves are always generated by winds. But if they are trying to imply that freaque waves are also wind generated – they are wrong! Freaque waves can happen during storms, hurricanes, or typhoons, or when there is no wind at all! That’s why it’s freaque!
Time magazine has an article asking the question “How do ‘rogue waves’ work?” that actually makes an accurate statement that not everyone is willing to accept but no one is able to dispute: “Scientists still don't know exactly how rogue waves occur, nor do they know how to predict them.” Among the thousands of articles already written and published on this now worldwide well-known tragic case, it is encouraging to see Prof. Paul Taylor of Oxford contribute an article on CNN England entitled "Giant waves: Tall tales or alarming facts?" on March 5, complete with the famous 19th century wave painting by Hokusai. I guess, as a member of the general public, I particularly appreciate the interpretation of the theoretical phenomenon Prof. Taylor gave in terms that everyone should readily understand, even without knowing what the nonlinear Schrödinger equation is:
The simplest model to reproduce the basic properties of the simulations is the nonlinear Schrödinger equation -- an equation belonging to an area of applied mathematics investigated extensively over the last 40 years.
The basic process is related to the local concentration of energy that occurs when large waves form. Large waves move faster than small ones, causing a group of large waves to contract along the direction of propagation.
Like squeezing a tube of toothpaste, the energy is forced out sideways -- extending the length of the wave crests, and appearing to an observer as "a wall of water."
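(A note for the curious, added here and not part of the original quote: one standard normalized form of the nonlinear Schrödinger equation Prof. Taylor refers to is $i\,\partial\psi/\partial t + \tfrac{1}{2}\,\partial^2\psi/\partial x^2 + |\psi|^2\psi = 0$, where $\psi(x,t)$ is the slowly varying envelope of the wave group; the last, nonlinear term is what lets large wave groups concentrate their energy. Exact scalings vary by author.)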
The Louis Majesty case is a tragedy, with two lives lost, unfortunately. But hopefully this well-known case can help to remind everyone that we don't really know what is going on out there when freaque waves hit. More research and more measurements are urgently needed!
Thomas D. Le
Quantum Mechanics and the Multiverse
Part I - Quantum Mechanics
A Science-Fiction Episode
Quantum Mechanics and Classical Physics
The Nature of Waves
The Ubiquity of Wave Phenomena
Properties of Waves
Wave Characteristics
Wave Properties and Phenomena
The Nature of Light
The Nature of Light
The Corpuscular Theory
The Wave Theory
The Electromagnetic Theory
The Quantum Theory
Crucial Experiments and Principles
The Double Slit Experiment
The Photoelectric Effect
The Heisenberg Uncertainty Principle
Measurements – Probabilities Versus Determinism
The Measurement Problem and Schrödinger’s Cat Paradox
The Multiverse
The Modern Concept of the Multiverse
Hugh Everett III’s Relative States
David Deutsch’s Multiverse
Max Tegmark’s Parallel Universes
Internet Resources
The by-now familiar expressions parallel universes and the multiverse must be as puzzling as they are controversial. Is there any evidence for the existence of universes other than the one in which we live, and whose secrets we are still a long way from understanding?
First off, to most of us there is nothing certain about an intangible universe out there. And this is just as well. If we take quantum physics seriously, however, and we must, given its astounding ability to explain the phenomena and behavior of atomic structures and subatomic particles, then we also see quantum physics as opening up all sorts of questions about the nature of reality.
At the macroscopic level, at which we operate every day, classical physics works well. Gravity is reassuring by keeping us glued to our earth, telling us how much we weigh, and preventing our furniture from floating away. The momentum of a body is determined by its mass and velocity. Water boils at a fixed temperature under a fixed atmospheric pressure. Everything seems to obey physical laws as discovered by classical physics over the past two thousand years.
In came quantum mechanics about a hundred years ago. It introduces the idea of probability where classical physics from Newton up to Einstein sees nothing but determinism. Quantum mechanics deals with microstructures such as atoms and subatomic particles, and finds that little there is fixed or determinate. Electromagnetic waves have particle properties, and photons exhibit wave-like characteristics. Furthermore, it is impossible to measure simultaneously the position and momentum of a particle to an arbitrary degree of accuracy, for increasing the accuracy of one measurement necessarily entails decreasing the accuracy of the other. Heisenberg’s uncertainty principle rules the microscopic matter world while the deterministic laws of classical physics run into trouble there.
Together quantum mechanics and Einstein’s relativity lay the foundation of modern physics. With quantum mechanics opening up untold possibilities, a number of workers went beyond to posit the existence of more than one universe. In 1957 Hugh Everett introduced the relative-states formulation of quantum mechanics, leading to the many-worlds interpretation (MWI). Since then other physicists have followed up with tantalizing hypotheses regarding the possibility, if not probability, of parallel universes. Fred Wolf speculates in Parallel Universes that there are an infinite number of universes inside our heads. David Deutsch, in The Fabric of Reality, offers provocative ideas of other worlds and proposes to use them in quantum computers. Max Tegmark discusses four levels of the multiverse and offers evidence for each.
If reality is everything there is, then what is the nature of reality? The visible as well as the intangible? For if worlds exist in parallel, one in which you and I live, and many in which your doubles and mine do, we must reconsider the nature of reality. So far, the idea of the multiverse is still wallowing in its minority status, and is deemed too radical by most physicists. Why bother when we can’t even grapple with the more practical and theoretical issues raised by quantum mechanics? Yet, the allure of the multiverse speculation remains irresistible.
This paper includes parts of classical mechanics and quantum mechanics that I deem useful for an intelligent discussion of the theories of the multiverse, and aims for a bird’s eye view. It assumes little prior acquaintance with either topic.
It is my fervent hope that the reader will share with me the excitement and fascination that quantum mechanics and the multiverse hypotheses have to offer.
Thomas D. Le
22 July 2004
To Quantum Mechanics and the Multiverse - Part II
Chapter 1
1.1 A Science-Fiction Episode
an infinite series of times, in a dizzily growing, ever spreading network of diverging, converging and parallel times. This web of time—the strands of which approach one another, bifurcate, intersect or ignore each other through the centuries—embraces every possibility. We do not exist in most of them. In some you exist and not I, while in others I do, and you do not, and in yet others both of us exist. In this one, which chance had favored me, you have come to my gate. In another, you, crossing the garden, have found me dead. In yet another, I say these very same words, but am in error, a phantom.
This excerpt from Jorge Luis Borges’s “The Garden of the Forking Paths,” taken from Ray Bradbury’s 1956 “The Silver Locusts: A classical collection of Science Fiction stories taken from the Martian Chronicles,” illustrates the bizarre and highly counterintuitive world of parallel worlds. (Wolf: p. 42)
In parallel universes strange things happen. You and your doubles may have by and large identical lives and pasts, except for totally different behaviors all occurring at the same time. Or they may have totally different lives. Does reality consist of one universe or many? And how many universes are there? Do they exist simultaneously or at different times at different locations? Is it possible to communicate with them? Is there any scientific evidence for parallel universes?
The idea of strange parallel worlds touching our reality has been described by science fiction writers for quite some time, ever since the beginning of the genre. But for the last fifty years or so the realm of parallel universes has leaped from science fiction into serious scientific discussion.
The interesting fact is that the modern (by this I mean non-metaphysical) multiverse idea came to the scientists themselves from experiments in quantum mechanics, during which the particles behaved in a rather intriguing fashion. We will see these experiments in more detail later in Chapter 3.
To pave the way for a better grasp of the concept of the multiverse, we will review certain background information on relevant aspects of physics. Some topics will receive more detailed treatment than others; the degree of elaboration depends in large part on a topic's contribution to an understanding of the multiverse. Throughout this work, the sparing use of formulas serves the purpose of clarifying relationships; they may be skipped without loss of continuity.
1.2 Quantum Mechanics and Classical Physics
At the beginning of the twentieth century physicists were concerned about the absorption and emission of radiant bodies, and the discrepancies between experimental observations and the electromagnetic radiation theory. The German theoretical physicist Max Planck (1858-1947), finding that at higher frequencies radiated energy decreases instead of increasing as predicted by the electromagnetic theory, hypothesized that emitted energy propagates not continuously but in individual packets (quanta), now called photons, the magnitude of which depends on the frequency f and a constant h, later named Planck’s constant in his honor, which is one of the fundamental constants of nature, in the same sense that the speed of light is a fundamental constant. Whereas in everyday life electricity, energy, and matter appear to be continuous, and can be described by deterministic classical (Newtonian) laws of motion, at the microscopic level matter is made up of atoms, which contain nuclei and electrons with their own energy levels. These discrete particles have rest masses, and electromagnetic waves of frequency f are streams of photons with energy E equal to hf. Thus, matter at the macroscopic and microscopic levels is seen to be different, and requires a new theory to explain it: quantum theory.
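A quick numerical illustration of the E = hf relation above; the frequency is an assumed value for mid-spectrum visible light, not a figure from the text:

```python
# Photon energy E = h*f for the quantum (photon) hypothesis described above.
h = 6.62607015e-34      # J*s, Planck's constant
f_green = 5.5e14        # Hz, roughly mid-visible green light (assumed value)
E = h * f_green
print(f"E = {E:.3e} J  (about {E / 1.602e-19:.2f} eV per photon)")
```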
Quantum mechanics differs from Newtonian, or “classical” mechanics in many ways. In Newtonian mechanics the future history of a particle is completely determined by its initial position, its momentum, and the forces that act upon it. Observations in the macroscopic world bear out the predictions of Newtonian mechanics with reasonable accuracy. Classical laws are causal and thus completely deterministic. According to Newton’s laws of motion, the motion of particles is determined exactly once the initial position and velocity of each particle are given. The trajectory of an electron is determined by (1) its position at any instant of time, (2) its velocity at that time, and (3) the value of the (electric and magnetic) force F at all times. The force F of an electrical particle is determined by the electric and magnetic fields, which can be calculated by using Maxwell’s equations (Section 3.4 below). Thus the motion of a particle, in classical physics, is determined for all time. These ideas of classical theory imply that electrons in a light beam of a given intensity gain energy at a continuous rate, which can be derived from the light intensity and from the initial conditions of the electrons. Experiments show, however, that the process of energy transfer is discontinuous and not governed by the deterministic laws of classical physics. Only the probability of the process is determined.
The concept of probability is not new to classical theory. In thermodynamics we measure the temperature, pressure, and volume of a given system. However, near the critical point these quantities no longer obey the equations of state exactly, but fluctuate around a mean value that is predicted by the equation of state, i.e., the relation between the temperature, the pressure, the volume, and the mass of a gas. Thus, the deterministic laws of thermodynamics break down and have to be replaced by probability laws. It was suspected that hidden variables had come into play that we knew little about. Yet all studies of energy transfer at the microscopic levels have failed to point to any existence of such hidden variables. In fact, there are theoretical reasons why hidden variables are unlikely to exist.
In quantum mechanics, the relationship between a particle and its future state is anything but certain. In the photoelectric effect experiment (Section 4.2 below), as well as in a wider range of other experiments, no known laws exist that can predict whether any one quantum of incident light will be absorbed by the metal plate, and if it is absorbed, precisely when and where. If a beam of light contains a large number of quanta, it is possible to predict, from the intensity of the light, the mean number of particles absorbed in any given region. In this sense, quantum laws appear to predict the probability of an event, but not its occurrence.
One of the vexing questions confronting physicists is the continuity of motion. In a naïve concept of motion, and from daily experience, motion and matter appear continuous. An object at rest, fixed in position, is not moving. Presumably we can determine its position with perfect accuracy. When that object moves, we conceive of its motion as a succession of fixed positions, similar to a series of frames in a motion picture. However, this concept does not include an essential property of motion, which is that the object must cover space continuously as time passes. It just shows the results after the motion has taken place. To picture the continuous coverage of space over time, we must reduce time to a very small value, and reduce the indefiniteness of position to a proportionately small value. But we cannot reduce the indefiniteness to zero and still obtain the picture of a moving object. We cannot picture an object at a definite point in space without picturing it as having a fixed position in space. For a picture of motion to be captured we must allow for a small blur in our view of position, just like the blurred picture of a moving car suggests motion better than a sharp picture.
The above simple idea of motion is similar to the one suggested by quantum theory. In quantum theory, momentum, and hence velocity, can have an exact meaning only in the context of a wave-like structure in space. The French physicist Louis de Broglie (1892-1987) proposed in 1924 that matter possesses wave as well as particle characteristics, and the existence of the de Broglie waves was experimentally established in 1927. That matter has wave characteristics is a revolutionary concept. We can easily conceive of sound or light as a wave. We can also easily conceive of a car or a ball as a particle. But a matter wave is a different story. How can a car or a ball be a wave? In the next paragraph we find the de Broglie wavelength to be λ = h / mv, i.e., the ratio of Planck’s constant h to the product of the particle’s mass m and its speed v. For example, the de Broglie wavelength of a 0.20 kg ball traveling with a speed of 15 m/s is 2.2 x 10^-34 m. This incredibly small wavelength totally escapes detection at the macroscopic level of our daily experience. Only when we get down to the particle level do we see the effect of these minuscule quantities. Only at this level does it make sense to speak of matter waves.
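The ball example can be checked directly from λ = h / mv; a minimal sketch using the same numbers as the paragraph above:

```python
# de Broglie wavelength of the 0.20 kg ball moving at 15 m/s (from the text).
h = 6.62607015e-34          # J*s, Planck's constant
m, v = 0.20, 15.0           # kg and m/s, the values used in the paragraph
wavelength = h / (m * v)
print(f"lambda = {wavelength:.1e} m")   # ~2.2e-34 m, as quoted above
```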
De Broglie defines momentum p of a particle as the product of its mass m and velocity v, or p = mv. Since the wavelength of a photon of light is λ = h / p, where h is Planck’s constant (Section 3.5 below), a relation which de Broglie took to be a general case, the momentum of a photon of light of wavelength λ is p = h / λ, and the de Broglie wavelength is λ = h / mv, which implies that the greater the momentum is, the shorter is the wavelength. We expect the wave propagation of a body to be a wave group or wave packet, as thus described by de Broglie, which consists of an infinite group of superposed waves of slightly varying wavelengths and momenta, with phases and amplitudes such that they interfere constructively (See Section 2.2.2 below on interference). Quantum theory visualizes that the average position of such a wave packet moves from one point in space to the next at a fairly definite velocity. The motion of a wave packet is thus analogous to the motion of a particle, which covers a range of positions at any instant with the average position changing uniformly with time. We cannot picture a particle as having a definite position and a definite momentum at the same time. Such a particle cannot possibly exist, according to quantum theory.
We can picture an object at rest at a fixed position at a given time, which at a short finite time later might be somewhere else regardless of velocity, but the uncertainty principle (Section 4.3 below) also tells us that such an object has a highly indefinite momentum. The more precisely we define the wave packet, the more rapidly it spreads, and the less able we are to describe continuously its motion. What this means is that we can arrive at a continuous position of the motion only if the position is indefinite, and at a picture of a particle in a definite position only if we do not try to picture it in continuous motion.
This leads us to the probability concept. Quantum theory entails the rejection of complete determinism in favor of statistical trend. The complete determinism of classical theory springs from the fact that, given the initial position and velocity of each particle in the universe, their subsequent behavior is determined by Newton’s laws of motion regardless of time. In quantum theory, Newton’s laws cannot apply to each individual electron because its position and momentum cannot be determined simultaneously with perfect accuracy. Suppose we want to aim an electron at a certain spot. First we have to find where that electron is now, then give it the necessary momentum that causes it to move to the spot. The uncertainty principle says this cannot be done. Therefore the concept of complete determinism is not applicable to a quantum theoretical description of the electron.
But quantum theory does follow statistical laws. In a series of observations, given an initial position determined as accurately as the theory permits, we measure the position of a particle after a time lapse of Δt. The positions obtained fluctuate with each measurement, but remain clustered about a mean value determined by momentum. If an electron is aimed at a point by controlling its momentum, we can obtain a replicable pattern of hits near that point. In order to change the position of the center of this pattern, we must change the momentum of the system. Even if the momentum is precisely determined, it is impossible to predict where the electron will strike. Hence, in quantum theory, as in ordinary life, only the probability of an event is determined, but its outcome cannot be.
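To put a number on the indefiniteness just described, here is a small sketch of the minimum momentum spread required by Δx·Δp ≥ ħ/2; the 1-angstrom confinement is an assumed, illustrative value:

```python
# Minimum momentum (and velocity) spread for an electron confined to 1 angstrom,
# from Delta_x * Delta_p >= hbar / 2.
hbar = 1.054571817e-34      # J*s, reduced Planck constant
m_e = 9.1093837015e-31      # kg, electron mass
dx = 1e-10                  # m, assumed confinement (about one atomic diameter)
dp = hbar / (2 * dx)
print(f"Delta_p >= {dp:.2e} kg*m/s -> Delta_v >= {dp / m_e:.2e} m/s")  # ~5.8e5 m/s
```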
We will see later that interpreting quantum theory spawns hypotheses about the multiverse.
Chapter 2
The Nature of Waves
2.1 The Ubiquity of Wave Phenomena
We devote some time to waves, as they are relevant to a discussion of the multiverse. Sound waves, electromagnetic waves, and water waves share a common set of properties and behaviors, which pose tantalizing questions about the universe. Waves impinge on our senses in our everyday activities to such an extent that it is impossible to conceive of the universe without them. Radio, television, satellite communication, the Internet, music, art, science as we know them simply do not exist in the absence of waves. Without electromagnetic waves, we cannot see, feel heat, communicate via radio or TV, have light, or cook. Without sound waves we cannot hear, and certain animals, such as bats, cannot navigate. If you think about it, even the universe cannot exist. Even subatomic particles behave like waves under certain circumstances, as we shall see later.
2.2 Properties of Waves
A great deal of the physical phenomena may be described as waves. When you throw a rock in a pond, it makes ripples that propagate around the point of impact in successive concentric waves of peaks and valleys that spread outward. If you spread iron filings around a bar magnet, they tend to congregate on the two ends of the magnet, called poles, and fan out from the poles in wavelike patterns. Magnetic field lines exit at the north pole of the magnet and enter at the south pole. The Earth itself, like many planets, is a huge magnet. The same magnetic field behavior occurs with an electric field. When electricity flows through a metal coil, it creates a magnetic field similar to one observed in a natural magnet. When an electric charge accelerates, it radiates an electromagnetic wave. Electromagnetic (EM) waves are waves of fields, unlike waves of matter such as waves on water or on a rope. Sound too has wave properties. But while electromagnetic waves can travel in a vacuum, sound needs a medium, such as air, water, or even a solid, to propagate.
2.2.1 Wave Characteristics
This section describes characteristics common to waves, regardless of their origin or nature.
Wave characteristics. One of the frequently mentioned characteristics of waves is wavelength (represented by λ), which is the distance in meters from the crest of one wave to the crest of the adjacent one.
Amplitude refers to half the vertical distance between the crest and the trough of a wave. The energy transmitted by a traveling wave per unit of time depends on the amplitude, frequency and mass of the particles of the medium at the source of the wave disturbance. The rate of transfer of this energy, which is part potential and part kinetic, is proportional to the square of the wave amplitude and to the square of the wave frequency. Surface waves on water decrease in amplitude as they propagate from the source in concentric circles. Since the energy contained in each advancing crest remains unchanged, as the crest expands, the energy per unit length of crest must decrease, and the wave amplitude must also diminish. The dissipation of the energy of a wave system as it travels away from its source resulting in the decrease in amplitude is called damping.
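A short sketch of the energy bookkeeping just described, assuming (illustratively) that each circular crest carries a fixed 1 J of energy: the energy per unit crest length then falls off as 1/r, so the amplitude falls off as 1/√r.

```python
import math

# Circular ripple spreading: fixed crest energy, growing crest length 2*pi*r.
E_crest = 1.0                       # J, assumed energy carried by one crest
for r in (1.0, 4.0, 9.0):           # m, distances from the source
    e_per_length = E_crest / (2 * math.pi * r)   # J/m, falls as 1/r
    rel_amplitude = 1 / math.sqrt(r)             # since energy ~ amplitude^2
    print(f"r = {r:3.0f} m: {e_per_length:.4f} J/m, relative amplitude {rel_amplitude:.2f}")
```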
A wave whose particles’ displacement is perpendicular to the direction of the wave’s movement is called a transverse wave. All electromagnetic waves are transverse waves. A wave, such as a sound wave, whose particles move in the same direction as the wave’s path is known as a longitudinal wave. Water waves have both transverse and longitudinal characteristics.
Frequency (f) is the number of cycles (or crests, troughs, oscillations) per second, expressed in Hertz (Hz). The relation between frequency and wavelength is captured in the equation: f = v / λ, or in the case of electromagnetic waves, f = c / λ, where v is the wave velocity, and c is the speed of light in a vacuum. Frequency thus varies in inverse proportion to wavelength. Conversely, wavelength varies in inverse proportion to frequency, as indicated by: λ = v / f. Derived from the relation above, the equation v = f λ, called the wave equation, holds true for all periodic waves through all media. As can be seen, the shorter the wavelength is, the higher is its frequency. Also the wavelength λ for a wave disturbance of a given frequency f is a function of its speed v in the medium through which it propagates. As the wave enters the second medium, if its speed decreases, its wavelength decreases proportionately for a given frequency. If the speed increases, the wavelength increases proportionately. Likewise, for a given wavelength, if the frequency increases, the velocity also increases proportionately.
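A one-line application of the wave equation f = c / λ, using assumed round wavelengths for red and violet light; the results match the visible-light band quoted in the next paragraph:

```python
# f = c / lambda for electromagnetic waves, as derived above.
c = 2.998e8                                                # m/s, speed of light in vacuum
for name, lam in [("red", 700e-9), ("violet", 400e-9)]:    # m, assumed wavelengths
    print(f"{name}: f = {c / lam:.2e} Hz")
# red ~4.3e14 Hz and violet ~7.5e14 Hz, matching the visible band listed below
```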
In the electromagnetic spectrum, moving upward from the low frequency end, AM radio ranges from 0.5 x 10^6 Hz to 2 x 10^7 Hz; FM radio and television waves spread from 4 x 10^7 Hz to 2 x 10^8 Hz, and microwaves from 10^9 Hz to 3 x 10^11 Hz. Invisible infrared (heat) waves range from 3 x 10^11 Hz to 4.3 x 10^14 Hz. Within the visible light spectrum, the color red (4.29 x 10^14 Hz) has lower frequency than violet’s 7.5 x 10^14 Hz. Ultraviolet waves (as those that come from the sun, special lamps and extremely hot bodies) cover 7.5 x 10^14 Hz to 10^16 Hz, and X rays from 10^16 Hz to 3 x 10^20 Hz. Gamma (γ) rays range from 10^18 Hz to at least 3 x 10^23 Hz. Recall that wavelengths are in inverse proportion to frequency. Thus AM radio waves, with the lowest frequencies, have the longest wavelengths, ranging from 6 x 10^4 to 1.5 x 10^3 cm. At the opposite (high-frequency) end of the spectrum, gamma rays, which are of very high frequencies, have very short wavelengths ranging from 3 x 10^-8 to 10^-13 cm.
Invisible yet penetrating X rays are produced when a beam of fast-moving electrons from a negative electrode strikes a positively charged metal surface. The electrons abruptly stop, and the consequent negative acceleration results in the radiation of very high frequency electromagnetic waves known as X rays. More energetic than X rays, gamma rays are produced when neutrons and protons rearrange themselves in the nucleus making it radioactive, or when a particle collides with its antiparticle and they annihilate. Since gamma rays penetrate and kill living cells, they are used for irradiating food. Irradiated food is exposed to γ rays from cobalt-60 for 20 to 30 minutes.
2.2.2 Wave Properties and Phenomena. Waves possess properties that manifest themselves in phenomena such as: rectilinear propagation, reflection, refraction, diffraction, interference, superposition, and polarization.
Rectilinear (straight line) propagation occurs in ripple waves that propagate in a line perpendicular to the wave front. Wave fronts can be straight, circular or spherical. Sound, which can be heard around the corner of a building, does not propagate in rectilinear fashion, but light does because light is stopped by the building.
Reflection is the familiar phenomenon in which the wave is turned back as it encounters a barrier across its path of propagation. The mirror, radar, and the reflecting telescope are devices that take advantage of electromagnetic wave reflection.
Refraction. When a wave passes obliquely from one medium to another, as from air to water or glass, its speed may increase or decrease; its frequency remains the same, so in accordance with the wave equation the change in speed makes the wavelength longer or shorter. This phenomenon is known as refraction. In the case of light, optical refraction results in the bending (change of direction of movement) of light rays as they pass obliquely from one medium to another of different optical density, a quantity that varies inversely with the speed of light through the medium. Because water has a higher optical density than air, light slows down (to about 225,000 km/s) when it enters water at an angle, called the angle of incidence, formed by the incident ray and the normal (perpendicular to the interface between the two media). The bent part of the ray in the second medium is the refracted ray. The angle formed by the refracted ray and the normal is called the angle of refraction.
A measure of refraction of a material is the index of refraction or refractive index, which is the ratio of the speed of light in a vacuum to the speed of light in the material. The refractive index is thus in inverse proportion to the speed of light in the material: a lower light velocity inside the material results in a higher refractive index, and the light bends more. The speed of light in space is only slightly faster than it is in air, so air has an index of refraction of only 1.00029. Water has an index of refraction of 1.333. Diamond’s refractive index of 2.4195, among the highest of any transparent material, makes it the most suitable material for the play of light when it is cut in facets angled precisely to refract light from one facet to another inside it.
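The bending itself is governed by Snell’s law, n1 sin θ1 = n2 sin θ2, a standard relation consistent with, though not stated in, the discussion above. A short sketch using the indices just quoted:

import math

def refraction_angle_deg(n1, n2, incidence_deg):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2)
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Light passing from air (n = 1.00029) into water (n = 1.333) at 45 degrees
# bends toward the normal, to about 32 degrees:
print(refraction_angle_deg(1.00029, 1.333, 45.0))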
As light enters a denser medium its speed slows down compared to the speed of light in a vacuum or in the medium of origin, and the velocity of an electromagnetic wave inside a material varies with the wave’s frequency. In the case of sunlight (or starlight) reaching Earth’s atmosphere from the “vacuum” of space, its color spectrum is refracted, or dispersed, in accordance with the frequency of each color. Since a light’s wavelength is in inverse proportion to its frequency, blue light has shorter wavelengths than red, and the velocity of blue light in glass is less than the velocity of red light. When white light is incident on a glass prism, its speed slows down (to approximately 200,000 km/s for ordinary glass); the blue light, with its smaller velocity, has a larger index of refraction and therefore bends more than red. We can say that the index of refraction of a material increases with the frequency of the light. Because different colors have different wavelengths, they have different refractive indices and are thus spread out (dispersed) by the prism to form the visible spectrum. Sunlight produces the solar spectrum, which consists of the same array of colors that make up white light. Refraction is responsible for a number of common optical effects, including rainbows and mirages.
Superposition. It is a common occurrence that two or more wave disturbances move independently through the same medium at the same time without mixing. This behavior makes radio and television possible, since broadcasts at different frequencies can be received separately by different antennas. The same phenomenon accounts for the fact that during a concert our ears can distinguish the sound of a piano from that of a violin while both are playing simultaneously. If two waves have different amplitudes and frequencies, the displacement of a particle produced by one wave may at any instant be superimposed on the displacement produced by the other, and the resultant displacement is the algebraic sum of the separate displacements. This superposition phenomenon is explained by the principle of superposition, stated thus:
When two or more waves travel simultaneously through the same medium, (1) each wave moves independently of the others, (2) the resultant displacement of any particle at a given instant is the vector sum of the displacements that each individual wave would give it.
This principle holds true for small displacements of light, other electromagnetic waves and sound waves, but not for shock waves produced by violent explosions.
Diffraction. When a periodic wave in a ripple tank, a water tank equipped with wave generators to study wave behavior, hits a straight barrier with a small aperture, placed perpendicular to the wave’s path, the wave slips through the aperture and creates a circular ripple pattern on the other side of the barrier, centered on the gap. This phenomenon, called diffraction, results from a discontinuity in the path of the advancing wave. If the width of the aperture corresponds to the magnitude of the wavelength, the diffraction pattern is clearly visible. As the wavelength decreases, the spreading of the wave at the edges of the aperture also diminishes. Diffraction does not occur if the wavelength is a tiny fraction of the width of the opening.
Light waves behave similarly when they encounter an obstruction (such as a slit opening, a fine wire, or a pinhole) with dimensions comparable to their wavelengths. Light spreads into spectral colors beyond the obstruction due to interference.
Interference. When two or more waves superpose while moving through the same medium, they produce the interference effect. Sound and light are particularly affected by interference. Consider two waves of the same frequency traveling the same medium simultaneously. Every particle of the medium is affected by both waves. If a particle’s displacement caused by one wave at a given instant occurs in the same direction as that caused by the other wave, the resultant displacement at that instant is the vector sum of the individual displacements, and is greater than the displacement of each taken separately. This effect is called constructive interference. If the displacement caused by one wave is in the direction opposite from that caused by the other, the resultant displacement is still the (algebraic) sum of the individual displacements, but smaller than the larger of the two. The displacement at that instant is in the direction of the larger displacement. This effect is called destructive interference. If two such displacements are equal, there is complete destructive interference, and the resultant displacement is zero. The particle is not displaced but is in its equilibrium position at that instant. Extremely high precision measurement techniques are made possible by using interference. The interferometer uses interference to make precise measurements of distance in terms of known wavelengths of light, or measurements of wavelengths in terms of known distances.
To illustrate interference, two probes side by side oscillate in unison in a ripple tank to create an interference pattern. Each probe is the center of a disturbance that produces a set of concentric waves, which propagate outward, overlap, and interfere with one another. When a crest in one set coincides with a crest of the other set, the resulting crest is twice as high as either. Similarly, when two troughs coincide they produce a deeper trough than either. And if a trough of one set coincides with the crest of the other, they cancel each other out, and the water surface is neither raised nor lowered.
Polarization. This phenomenon occurs only with transverse waves, such as light and other electromagnetic waves. When a wave vibrates in one plane, e.g., a vertical plane or a horizontal plane, it is called plane-polarized. A vertical slit placed in the path of a wave will stop it if it oscillates horizontally; a horizontal slit will stop it if it oscillates vertically. We know that an EM wave oscillates in two fields, the electric field E and the magnetic field B perpendicular to E; a single EM wave is therefore plane-polarized. The direction of polarization of a polarized EM wave is taken as the direction of the electric field vector. Light can be either polarized or unpolarized. Unpolarized light vibrates in many planes at once; an incandescent light bulb and the sun emit unpolarized light. Certain crystals such as Iceland spar (calcite) are doubly-refracting because they refract light into two rays. Plane-polarized light can be produced from unpolarized light using a crystal such as tourmaline, which in effect acts as a properly oriented slit through which only one ray of light can pass. Today we use a Polaroid sheet (invented by Edwin Land in 1929), which acts as a series of slits to allow one orientation of polarization to pass through. This direction is the axis of polarization.
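The intensity transmitted through an ideal polarizer follows Malus’s law, I = I0 cos^2 θ, a standard result that the text does not derive; the following is only an illustrative sketch:

import math

def malus_intensity(i0, angle_deg):
    # Malus's law: transmitted intensity through an ideal polarizer whose
    # axis makes angle theta with the light's plane of polarization
    return i0 * math.cos(math.radians(angle_deg)) ** 2

print(malus_intensity(1.0, 0.0))   # 1.0: axes aligned, all light passes
print(malus_intensity(1.0, 90.0))  # ~0.0: crossed polarizers block the light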
The scattering of sunlight, which leaves the scattered light partially polarized, explains the color of the sky. Scattering depends on the wavelength λ: for air molecules far smaller than the wavelength, shorter wavelengths are scattered much more strongly. When sunlight hits the atmosphere, the shorter wavelengths at the blue end of the spectrum are scattered far more effectively by microscopic air molecules and dust particles in the upper atmosphere than the longer-wavelength red light, giving the sky its blue color. At sunset the sunlight, being low on the horizon, travels the maximum length of atmosphere, along which most of the blue light is scattered away in other directions, leaving behind the red and orange colors. The blue light missing from the sunset has thus become the blue sky somewhere else.
Chapter 3
The Nature of Light
3.1 The Nature of Light
As seen in the previous chapter, light’s characteristics can be accounted for in different ways. Two competing theories emerged: the corpuscular theory advanced by Isaac Newton, and the wave theory proposed by Christiaan Huygens. The former can explain reflection and rectilinear propagation adequately, but breaks down when it comes to refraction; the latter does equally well for reflection and refraction, but proves inadequate for explaining rectilinear propagation.
There is more to light than propagation, reflection, refraction, diffraction, dispersion, or polarization. Light carries energy, as is well known in the case of sunlight, in the form of heat. Light consists of oscillating electric and magnetic fields and can produce photoelectron emission from substances it impinges on.
This chapter reviews some of the theories that were advocated to account for all light phenomena and light’s effects. We will see that the study of light has helped to lay the foundation for modern physics.
3.2 The Corpuscular Theory
To Isaac Newton light consists of streams of particles, which he called “corpuscles,” that emanate from the source. The particle theory of light can easily explain the straight-line propagation of light by using the analogy of a ball thrown at an extremely high speed. Such a ball, which at high speed conceivably moves in a straight line, represents a particle of light. While sound, a wave phenomenon, can be heard around the corners of a building, light cannot be seen from behind an obstruction; this is the best evidence that light travels in a straight line. The ball analogy can also explain reflection: steel balls thrown against a smooth steel surface rebound similarly to the way light is reflected.
Refraction presents a tougher challenge to Newton’s particle theory. We can replicate the phenomenon by rolling a steel ball from the upper surface of a box, set at an angle to the normal of the table surface on which the box rests, down an incline that connects them. As the ball reaches the incline, the accelerating force pulls it down across the lower surface. Think of the upper surface as the medium of air, and the lower surface as the more optically dense medium of water. By varying the angle of the incline while keeping a constant rolling speed and a constant angle of the upper surface to the normal, we can illustrate the refractive characteristics of different transmitting media. The ball thus represents the particles of light being refracted, or redirected, as they enter water from the air. To Newton, water attracts light in much the same way that gravity attracts the rolling ball. And the experiment implies that when light enters a medium of greater optical density such as water or glass, its speed increases, in the same way that a ball rolling down an incline accelerates due to gravity. Since light from air actually slows down when it is refracted after entering water at an oblique angle, the corpuscular theory cannot handle refraction adequately.
3.3 The Wave Theory
When a stone is dropped in a pool of water, the ripples produced travel outward in concentric waves long after the stone is at rest at the bottom. To explain this lingering effect of the disturbance, which is the basic aspect of wave behavior, the Dutch physicist Christiaan Huygens (1629-1695) devised a geometric method of finding new wave fronts, and formulated Huygens’ principle, according to which each point on a wave front may be regarded as a new source of disturbance. From each such point, fresh wavelets, called Huygens’ wavelets, develop simultaneously and spread farther out to form the next wave front, and so on. Spherical wave fronts propagate from spherical wavelets, and planar wave fronts from planar wavelets. The wave theory successfully accounts for wave phenomena such as reflection and refraction of light. It requires that the speed of light be lower in an optically denser medium, such as water, than in the air; this is where the wave theory stands while Newton’s particle theory fails. However, the wave theory has difficulty with rectilinear propagation: it cannot explain why light is stopped by an obstruction, prompting Newton to reject it.
In Newton’s day, knowledge of interference and diffraction was absent. Not until 1801 was interference of light discovered. These two phenomena tend to support a wave character, and cannot be explained by the particle theory.
3.4 The Electromagnetic Theory
Electromagnetic waves were predicted by the Scottish physicist James Clerk Maxwell (1831-1879), who hypothesized in 1864 that since a changing magnetic field produces an electric field, a changing electric field should produce a magnetic field. He worked out the mathematical formulas and showed that electric and magnetic fields acting together could produce electromagnetic waves that travel with the speed of light. He then proposed that visible light was an electromagnetic wave. He determined that the energy of the magnetic field is equal to the energy of the electric field, which is perpendicular to the magnetic field, and that both fields are perpendicular to the direction of the electromagnetic wave’s motion. Maxwell’s theory of electromagnetic waves is extremely successful in explaining all known optical phenomena. With the success of the wave theory of light, many physicists at the time believed that all the laws of physics had been discovered.
In 1887 the German physicist Heinrich Hertz (1857-1894), experimenting with an electric circuit that generated an alternating current in the lab, found that energy could be transferred from this circuit to a similar circuit several meters away. He was able to show that the energy transfer was effected at roughly the speed of light, and that this energy exhibited standard wave phenomena such as reflection, refraction, interference, diffraction, and polarization. Hertz also showed that light transmission and electrically generated waves are of the same nature. However, many of their properties are different because of differences in frequency.
Electric charges exist in a number of natural objects such as amber or glass. The charge of amber is negative (−), and the charge of glass is positive (+). We also know that like charges repel, and opposite charges (charges with opposite signs) attract. Charged rods may also attract small objects that have a zero net charge; the mechanism responsible for this attraction is called polarization. Electric charges exert forces on one another: around a charge, every point in space has a corresponding force that would act on a charge placed there, and the magnitude of that force is proportional to the charge placed at the point. The force is indicated by a vector, i.e., a magnitude with a direction. The force per unit charge is called the electric field. An electromagnetic wave consists of two parts: the electric field, whose magnitude is denoted by E, and, perpendicular to it, the magnetic field, whose magnitude is denoted by B.
The electromagnetic wave theory of light treats light as a wave train having wave fronts perpendicular to the paths of the light rays. Electromagnetic waves carry both energy and momentum. It is this energy that warms up a material body that absorbs electromagnetic radiation. Momentum is also transferred to the body, and provides what is called the pressure of light. This pressure keeps stars from collapsing: radiation produced in the interior of a star exerts an outward pressure that counteracts the gravitational attraction between its inner and outer layers. The momentum p of the radiation can be written as the product of its equivalent mass and the speed of light, p = mc.
3.5 The Quantum Theory
Around 1900 the German theoretical physicist Max Planck (1858-1947) was trying to explain the characteristics of the electromagnetic radiation emitted from the surface of hot bodies. Classical electromagnetic theory predicts that the emitted intensity should increase continuously as the radiation frequency increases. At very high frequencies, however, experiments show that the spectral distribution of radiated energy approaches zero, rather than growing without limit as the electromagnetic theory predicts. Clearly, something was wrong.
Planck hypothesized that light is radiated and absorbed as individual bursts, or quanta, and that the energy emitted by radiation is an integral multiple of a fundamental quantity hf. These quanta are now known as photons. The amount of energy of a photon is directly proportional to the frequency of the radiation, as indicated by the quantum energy equation E = hf, where E is the energy of a photon in joules (a joule being the work done by a force of one newton acting through a distance of one meter), f is the frequency in hertz, and h is a constant expressed in joule-seconds. The quantity h, a universal constant called Planck’s constant (just as the speed of light in a vacuum is a constant), has the value h = 6.626 x 10^-34 J·s.
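A minimal sketch of the quantum energy equation, using the value of h quoted above (the electron-volt conversion is a standard constant we supply for convenience):

H = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19  # one electron volt in joules (standard value)

def photon_energy_joules(frequency_hz):
    # E = h * f
    return H * frequency_hz

# A violet photon at 7.5 x 10^14 Hz carries about 5 x 10^-19 J (~3.1 eV):
e = photon_energy_joules(7.5e14)
print(e, e / EV)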
Planck published his quantum hypothesis in 1901, but it received little notice until Albert Einstein, seizing upon the idea, gave a definitive explanation of the photoelectric effect in 1905 (Section 4.2 below). The explanation was later supported by further experimental work by, among others, the American physicists Robert Andrews Millikan (1868-1953) and Arthur H. Compton, and for it Einstein received the Nobel prize in physics in 1921.
According to the quantum theory the transfer of energy between light radiation and matter takes place in discrete units called quanta, whose magnitude depends on the radiation frequency. About 1924 the French physicist Louis de Broglie suggested the dual wave-particle nature of light, and postulated that in every mechanical system, waves are associated with particles. From this came the concept of matter waves, applied in wave mechanics to the investigation of the structure of matter at atomic and subatomic dimensions.
As seen above, the photon has energy equal to the product of Planck’s constant and its radiation frequency, E = hf. In Einstein’s mass-energy equivalence equation, E = mc^2, m is the mass of the particle in motion; when the particle is at rest, its rest mass m0 is smaller than m. If we equate the photon’s quantum energy with its energy of motion, we get hf = mc^2. This does not mean that the photon has a rest mass; the photon is never at rest but always moves with the speed of light c. Photons do, however, have momentum, which they transfer to any surface they strike. Knowing that momentum p = mc, and solving the previous equation for mc, we get mc = hf / c. Since the speed of light c = f λ, we have mc = h / λ, or λ = h / mc. Just as a photon has its wavelength expressed in terms of its momentum mc, the wavelength of any particle with a velocity v can be expressed in terms of its momentum mv as λ = h / mv. There is considerable evidence of the wave nature of subatomic particles. Accelerated electrons behave like X rays, and the electron microscope is based on electron waves.
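A sketch of the de Broglie relation λ = h / mv, using the standard electron mass (a value not quoted in the text):

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.11e-31   # electron rest mass, kg (standard value)

def de_broglie_wavelength(mass_kg, speed_m_s):
    # lambda = h / (m v)
    return H / (mass_kg * speed_m_s)

# An electron moving at 1% of the speed of light has a wavelength of
# about 2.4 x 10^-10 m -- the X-ray scale, as noted above:
print(de_broglie_wavelength(M_E, 3.0e6))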
The Compton effect, demonstrated in 1923, provides conclusive evidence that electromagnetic radiation can exhibit the properties of particles. When X radiation passes through a solid material, such as graphite, part of it is scattered in all directions. The American physicist Arthur H. Compton (1892-1962) found that the frequency of the scattered light is slightly lower than the frequency of the incident light, indicating a loss of energy. To understand the behavior of the system, we assume that the incident light is a photon with a certain amount of energy and momentum striking a free electron of mass m, initially at rest. After the photon collides with the electron, it scatters at an angle θ with respect to the direction of incidence, and the electron recoils at another angle Φ. According to the laws of conservation of energy and momentum, the energy of the incident photon is equal to the energy of the scattered photon plus the final kinetic energy of the electron, or hf = hf’ + K. The frequency of the scattered photon f’ is less than the frequency of the incident photon f, and the resulting loss of photon energy becomes the kinetic energy of the electron. Since the frequency of the scattered photon is decreased, its wavelength, λ’ = c / f’, increases. The x and y components of momentum are also conserved. Thus the Compton effect has put the particle nature of light on a firm experimental foundation.
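Working through these conservation laws yields the standard Compton shift formula, λ’ − λ = (h / mc)(1 − cos θ); the text does not derive it, so the following is only an illustrative sketch with standard constants supplied:

import math

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron rest mass, kg
C = 2.998e8      # speed of light, m/s

def compton_shift_m(theta_deg):
    # lambda' - lambda = (h / (m c)) * (1 - cos theta)
    return (H / (M_E * C)) * (1 - math.cos(math.radians(theta_deg)))

# At a scattering angle of 90 degrees the shift equals the Compton
# wavelength of the electron, about 2.43 x 10^-12 m:
print(compton_shift_m(90.0))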
All this discussion must have given us a confusing picture of the nature of matter. Quantum theory has so far shown that at the microscopic level matter is both wave and particle. And throughout this work we will see this duality amply illustrated by both theory and experiment. The wave-particle duality and the notion of probability in quantum mechanics to be introduced later offer evidence that the universe is more complex than anticipated as well as fertile ground for hypothesizing the existence of more than one universe.
Chapter 4
Crucial Experiments and Principles
This chapter discusses certain experiments and principles that are crucial to an understanding of light phenomena and the world of subatomic particles. Such experiments and the theories proposed to account for them lead physicists to pursue the concept of the multiverse.
4.1 The Double Slit Experiment
The dual wave-particle nature of light is one of the most intriguing areas of physics, and the double slit experiment illustrates the seminal character of the study, whose interpretations introduce the even stranger idea of the multiverse.
Light is a Wave Phenomenon. In 1801 the English physicist Thomas Young (1773-1829) proved the wave nature of light with a double slit experiment. In its simple form, the experiment consists in shooting a beam of light through a filter to transmit only one color with a definite frequency. This monochromatic light beam, such as red laser light, passes through a screen with a long narrow vertical slit. Behind the first screen the light illuminates a second screen in which there are two parallel narrow vertical slits very close together, on the order of one-fifth of a millimeter apart, and equidistant from the single slit of the first screen. Finally the light emerges from these two slits to shine on a third, distant screen, about three meters away. This arrangement ensures that the two slits in the second screen act as monochromatic, coherent (i.e., same-phase) sources of light, which is necessary to produce interference.
If light were composed of particles or “corpuscles” as Newton thought, they would pass through the two slits to illuminate the distant screen behind the second screen in a localized pattern. If light is a wave, each slit is a source of new waves (according to Huygens’ principle), similar to water waves passing through two small apertures.
Basically, in this experiment the light from the light source, such as sunlight, passes through the slits of the two intervening screens to strike the third. The first screen has one slit, the second has two slits a fraction of a millimeter apart, and the third or viewing screen is some distance (e.g., 3 m) away. The light propagates in all directions beyond the slits of the first and second screens. Each of the slits serves as a new source of light. If the viewing screen is infinitely far away, as in the case of light coming from a star, or if a lens is placed after the slit to focus parallel rays onto the screen, the diffraction pattern is called Fraunhofer diffraction, named after the German optician and physicist Joseph von Fraunhofer (1787-1826). If the viewing screen is close and no lenses are used, as in a laboratory, the diffraction pattern is called Fresnel diffraction, named after the French physicist Augustin Fresnel (1788-1827), who presented to the French Academy in 1819 a paper on the theory of light that explained interference and diffraction effects.
In a single slit experiment, if the single slit is wide enough, it is the source of more than one Huygens’ wavelet, according to Huygens’ principle. Each wavelet originating from the slit propagates at a different angle. Thus the wave front incident on the slit of width W is divided up into a series of narrow strips, each of which produces a wavelet (or ray) (Fig. 1).
Fig. 1. Single slit experiment. The brightest fringe is in the center of the diffraction pattern.
First, consider the case of light rays propagating straight through the slit, i.e., where the angle θ made by incident light to the normal is θ = 0˚, and the path difference between rays is zero. Wavelets in the center of the slit travel about the same distance with the same phase, arrive at the same mid-point on the screen, and augment (constructively interfere with) one another, making the center of the diffraction pattern highest in intensity and brightest of all.
Fig. 2. Wavelets radiate from each half of the slit.
Wavelets from each half of the slit radiate at the angle θ, in equidistant pairs on either side of the slit’s center (Fig. 2). As the angle θ increases, the path difference between the rays in each pair grows until it reaches half a wavelength, (W/2) sin θ = λ / 2; the pairs then interfere destructively at zero intensity, giving the first dark fringe. Thus the first minimum (dark fringe) occurs at the angle defined by W sin θ = λ. We can generalize this formula to find the angles θ at which minima are found, as
W sin θ = m λ, m = ±1, ±2, ±3,…
where m is the order of the minimum, mλ being the path difference between rays from the two edges of the slit. The sign ± for the values of m emphasizes the symmetry of the diffraction pattern around its center. At the center, where the path difference is zero, the angle θ is zero and there is no minimum. The first minimum is found when the path difference between rays from the two edges of the slit equals one whole wavelength, the second minimum when it equals two wavelengths, and so on. Thus the spread of the diffraction pattern is proportional to the wavelength and inversely proportional to the width of the slit. The particle theory would predict a narrower spread as the slit becomes narrower, which is contrary to observation.
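A sketch of the minima condition W sin θ = mλ (the slit width and wavelength are illustrative values of our choosing):

import math

def single_slit_minima_deg(slit_width_m, wavelength_m, orders=(1, 2, 3)):
    # Dark-fringe angles from W sin(theta) = m * lambda
    return [math.degrees(math.asin(m * wavelength_m / slit_width_m))
            for m in orders]

# Red light (700 nm) through a 3-micrometer slit:
print(single_slit_minima_deg(3e-6, 700e-9))  # about 13.5, 27.8, 44.4 degrees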
In a double slit experiment, as described by Thomas Young (Fig. 3), the first screen has one slit through which only a small beam of light passes to prevent too large a smear on the third and distant viewing screen. The second screen has two closely spaced slits separated by a distance d.
Fig. 3. Young’s double slit experiment
After leaving the first screen the beams emerging from the two slits of the second screen propagate in all forward directions, by Huygens’ principle (Fig. 4), and interfere with one another in the same way as ripples from two sources of water waves interfere with one another.
Fig. 4. Huygens’ principle and path difference
On the third screen, the light is also spread out over a large area and forms a pattern of bright and dark fringes, which result from constructive and destructive interference. The central bright fringe lies directly opposite the midpoint between the two slits; light waves reaching it from the two slits travel equal path lengths and arrive in phase, causing constructive interference. The next bright fringe occurs where the difference in path length of the two light waves is equal to one wavelength λ of light. From this we can formulate the following generalization as a condition for bright fringes in a double slit experiment (Figs. 4 and 5):
d sin θ = m λ, m = 0, ±1, ±2, ±3,…
Fig. 5. The double slit interference pattern. Dark fringes are represented by dark blocks.
Note the similarity between this relation and the one that obtains for the diffraction pattern in the single slit experiment above. The bright fringe closest to the midpoint of the two slits occurs at the angle θ when the path difference d sin θ = λ or sin θ = λ / d. The above condition says that there is a bright fringe when m = 0, ±1, ±2, ±3,… The central bright fringe occurs at m = 0. The first maximum above the central fringe occurs at m = +1.
The condition for the dark fringes is d sin θ = (m − ½) λ, with m = ±1, ±2, ±3,… Thus the first dark fringe (minimum) above the central point is found at d sin θ = λ / 2, where m = +1; likewise m = −1 gives the first minimum below the central point. The next minimum is at d sin θ = 3λ / 2, where m = +2.
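A sketch of both fringe conditions, using the dimensions of Young’s arrangement described above (0.2 mm slit separation, 3 m screen distance; the red-light wavelength is our illustrative value):

import math

D = 0.2e-3        # slit separation d, m (one-fifth of a millimeter)
L = 3.0           # distance to the viewing screen, m
WAVELEN = 700e-9  # red light, m (illustrative)

def bright_fringe_angle_deg(m):
    # d sin(theta) = m * lambda
    return math.degrees(math.asin(m * WAVELEN / D))

def dark_fringe_angle_deg(m):
    # d sin(theta) = (m - 1/2) * lambda, m = +-1, +-2, ...
    return math.degrees(math.asin((m - 0.5) * WAVELEN / D))

# Adjacent bright fringes land about 1 cm apart on the screen:
theta = math.radians(bright_fringe_angle_deg(1))
print(L * math.tan(theta))  # ~0.0105 m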
These experiments show that light is a wave phenomenon.
Light is a Quantum Phenomenon. The quantum mechanical explanation of the double slit experiment also makes use of wave properties such as wavelength, frequency, and amplitude. Consider that the experiment is conducted for light, where photons are involved, or for matter waves, where electrons are involved. From the previous discussion, which showed the wave nature of light, we obtained the diffraction pattern of light through the slits.
Now if we reduce the intensity of the light to the point of allowing only a single photon to pass through the slits at a time, we will see the photons landing at random on the distant screen. As more photons pass through the slits, an interference pattern emerges that is similar to the one obtained with intense light. The same behavior is observed with matter particles such as electrons. It is as if each photon or electron interferes with itself.
For matter particles such as electrons, quantum mechanics relates the wavelength to momentum by the formula λ = h / px. Since the momentum of an electron is px = mv, the de Broglie wavelength formula becomes λ = h / mv. That electrons behave in a wavelike manner is uncontroversial. This fact forms the basis for the electron microscope, which takes advantage of the higher resolving power of electron beams, made possible by their short wavelength, i.e., their great frequency and energy.
In quantum mechanics the amplitude of a matter wave is called the wave function ψ (the Greek letter psi; see more in Section 4.3 below), defined as a function of time and position in a field called a matter field or a matter wave. The wave function is the amplitude of a matter wave at any point in space and time, just as the electric field E represents the amplitude of an electromagnetic field. As the intensity I of any wave, including a light wave, is proportional to the square of its amplitude, or the square of its electric field, E^2, the intensity of a light beam, from the particle point of view, is proportional to the number of photons N that pass through an area. From this it follows that the number of photons is proportional to the square of the electric field. If the light beam is very weak, only a few photons pass through; in an intense light beam a large number of photons may be found. This allows for a probability interpretation: the square of the electric field E^2 is a measure of the probability that a photon is at that location. The higher E^2 is, the greater the probability of finding the photon there.
Similarly, the matter wave lends itself to the probabilistic view. The wave function ψ varies in magnitude from point to point in space and time. If ψ describes electrons, then ψ^2 at a given point is proportional to the number of electrons expected to be found at that point. In other words, while the behavior of a single electron cannot be predicted, ψ^2 is interpreted as a measure of the probability that the electron is found at a given time and position.
Let us consider the double slit situation again. In the case of light, as described by Young above, the interference pattern appears on the screen behind the slits, and may be seen or recorded on film. In the case of electrons, the interference pattern may be seen on a fluorescent screen.
If we reduce the flow of electrons (or photons) to allow only one particle to pass through the slits at a time, we will first observe the random, unpredictable impacts of the electrons on the screen. As the experiment continues for a long time, an interference pattern begins to emerge as a large number of electrons hit the screen, just as predicted by the wave theory. Though there is no way to predict where a given electron will strike, it is possible to predict the probability, which, as seen above, is represented by ψ^2. Where ψ is zero the probability vanishes, giving a dark fringe; where ψ is maximal the probability peaks, giving a bright fringe. Since the electrons pass through the slits one at a time, it is clear that the interference pattern is not caused by the interaction of one electron with another. Rather, it seems that the single electron passes through the two slits at the same time, something a particle cannot do. But if we think of an electron as a wave, it can certainly cross the slits simultaneously. Thus, while the electron may travel as a wave, it hits the screen as a particle!
If we block the first slit, forcing the electron to pass through the second one, we see no interference pattern. If we instead block the second slit to force the electron through the first slit, no interference occurs either. In each case we see only a bright area on the screen behind the open slit. This confirms the observation that interference appears only when each electron can pass through both slits at the same time. In short, we can treat an electron as a wave, in which case ψ represents the amplitude, or we can treat it as a particle, in which case ψ^2 is the probability of finding the electron at a given point.
This illustrates the wave-particle dual nature of matter particles.
David Deutsch’s Four-Slit Experiment
David Deutsch started out with the standard Young double slit experiment described above, using a red laser for precision. The two slits are one-fifth of a millimeter apart in the first screen, and the second screen is three meters away. He recorded the interference pattern on this screen (Fig. 6). If light traveled only in straight lines, the pattern on the screen would be a pair of parallel bright bands one-fifth of a millimeter apart with sharp edges, although they would be hard to see as separate bands. Instead the pattern consists of alternating bright bands and dark bands with no sharp edges, indicating that light bends.
Fig. 6. Two-Slit Interference Pattern
If now a second pair of identical parallel slits is cut in the first screen, the interference pattern behaves very differently. The bright bands are fewer in number with four slits (Fig. 7-a) than with two slits (Fig. 7-b). In this pattern there are points, e.g., point X in Fig. 7, that are dark on the four-slit pattern but were bright on the two-slit pattern. What came through the second pair of slits that prevented the light from reaching those points?
Fig. 7. Comparison of Two-Slit and Four-Slit Interference Patterns
Deutsch observes that the four-slit pattern appears only when all four slits are open, and the two-slit pattern appears when two slits are open. If three slits are open, then a three-slit pattern appears, which is different from the other two. This means whatever causes the interference is in the light beam. The two-slit pattern results when two out of four slits are covered by something opaque, but not by something transparent. Essentially the interfering entity is obstructed by opaque objects, even by fog, but not by transparent objects, though they may be otherwise as impenetrable as diamond. In short, it behaves like light. Therefore, the interference of photons from each slit striking the screen is caused by photons from the other slits.
Now consider the case in which the light source is moved so far away that only one photon per day falls on the screen. Does the interference become less pronounced when photons become sparser? Do we still observe the point X being dark when four slits are open, and light when two slits are open? The answer is a definite yes. It is as if the one photon that strikes the screen interferes with itself!
Could it be that the photon splits up into fragments when entering the apparatus, which then change course and recombine afterwards before arriving at the screen? To find out, we install a detector at each of the four slits, then fire one photon. At most only one detector registers the passing of the photon. Since at no time did we observe two detectors going off at once, we can say that the photon never splits up.
Going over our observations once again, we note that (1) when one photon passes through the apparatus, it passes through one slit; (2) something interferes with the photon and deflects it depending on what other slits are open; (3) the interfering entities pass through the other slits; (4) the interfering entities behave exactly like photons; and finally, (5) the interfering entities cannot be seen. Let us call these entities photons because they are in effect photons.
It seems that photons come in two varieties, the tangible (visible) photons and the shadow (invisible) photons, which are detectable only through their interference effects on the tangible ones. This line of thought is pursued further in Chapter 5, The Multiverse.
4.2 The Photoelectric Effect
We will see how the electromagnetic wave theory of light fails to explain a phenomenon called the photoelectric effect, produced when high-frequency light strikes a metal surface and causes the emission of photoelectrons from it.
In 1902 the German physicist Philipp Lenard (1862-1947) published the results of studies conducted with light to show the photoelectric effect (Fig. 8). In these experiments a high-frequency ultraviolet beam of light strikes a polished zinc plate E, called the emitter, at one end of an evacuated quartz tube; at the other end a zinc collector plate C is connected externally to the emitter through an ammeter. Some of the electrons (later called photoelectrons) ejected from the emitter E reach the collector C with sufficient energy, forming a current that can be measured by the ammeter. The apparatus is also connected to a variable voltage source that can reverse the polarity of the emitter plate and the collector plate. High-frequency ultraviolet light is necessary to eject electrons from the zinc plate; later experiments showed that all substances exhibit photoemission. Now we reverse the terminals, making the emitter plate E positive and the collector C negative. The electrons emitted from the plate E are then repelled by the negative collector, but if the reverse voltage is small enough, the fastest electrons will still reach the collector C, and there will be a current in the circuit.
Fig. 8. Diagram of the photoelectric experiment
As the reverse potential is increased, fewer electrons reach the collector, and the current drops until finally no more electrons strike the collector; the current stops at the point where the potential reaches a level of a few volts, called the extinction voltage or stopping potential. This stopping potential is the same for all intensities of light of the same frequency, but depends on the frequency of the light. By contrast, the magnitude of the photoelectric current depends on the intensity of the light, not on its frequency.
One puzzling behavior of the energy level of electron emission cannot be accounted for by the electromagnetic wave theory of light. Photoelectron emission starts instantly (less than 10^-9 second after illumination) when light strikes the metal plate. In an experiment with a sodium surface assumed to be one atom thick, and given that each photoelectron emitted has accumulated about one eV (electron volt) of energy, it would take the incident light 2 weeks to build up to that energy level for each electron. If we then added the several electron volts necessary to detach each photoelectron from the sodium surface, it would take nearly 2 months. The electromagnetic theory thus fails to account for the instantaneous emission of electrons.
Another difficulty with the electromagnetic wave theory is that photoelectron energy depends on the frequency of the light used, not on its intensity. An intense light beam yields more photoelectrons, but the kinetic energies of the emitted electrons remain unaffected by light intensity. Below a certain frequency specific to each particular metal, called cutoff or threshold frequency, photoelectron emission ceases completely. When the incident light increases in frequency, the electron energy rises to a maximum level. As the light’s frequency keeps increasing, this maximum energy increases proportionately. A higher frequency of incident light causes a higher maximum photoelectric energy. Thus a faint blue light, with a higher frequency, produces greater photoelectric energy than a strong red light, which produces more electrons but of lower energy because of its lower frequency. The electromagnetic theory cannot account for the effect of the incident light’s frequency on the amount of photoelectric energy produced.
Albert Einstein explained the photoelectric effect as follows. When a stream of photons of light strikes an emitting surface, the photons are absorbed by the emitter. The quantum energy of each absorbed photon, E = hf (Section 3.5 above), is transferred whole to a single electron in the emitter surface. To escape through the surface, the electron must give up an amount of energy φ, called the work function of the substance. If the photon’s energy hf exceeds the work function φ, the electron has enough kinetic energy left to escape from the surface; such an electron is called a photoelectron. Since h is Planck’s constant and the work function of each substance is also a constant, the maximum kinetic energy that the photon imparts to the photoelectron, KEmax, is the difference between hf and φ: KEmax = hf − φ. It follows from this equation that the maximum kinetic energy of a photoelectron increases linearly with the light’s frequency f.
The work function φ, measured in eV (electron volts), varies from substance to substance. Cesium (Cs) has a work function of 1.96; potassium (K), 2.24; sodium (Na), 2.28; calcium (Ca), 2.71; copper (Cu), 4.7; silver (Ag), 4.73; platinum (Pt), 6.35. Clearly, for photoemission to take place, light impinging on the substance has to have a high frequency, such as that of ultraviolet rays, in order to possess enough quantum energy hf to overcome the work function φ. Thus below a certain frequency, called the cutoff or threshold frequency, photoemission ceases regardless of the light’s intensity. This is the frequency at which the quantum energy hf just equals the work function, so that the maximum kinetic energy of the photoelectron is zero. Another fact is that photons have enough energy to eject electrons instantly from the substance’s surface, without the “soaking up” period required by the wave theory.
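A sketch of Einstein’s photoelectric equation using the work functions tabulated above (Planck’s constant in eV·s is a standard conversion we supply):

H_EV = 4.136e-15  # Planck's constant, eV*s (standard value)

def max_kinetic_energy_ev(frequency_hz, work_function_ev):
    # KE_max = h f - phi; a negative result means the light is below
    # the threshold frequency and no photoemission occurs
    return H_EV * frequency_hz - work_function_ev

def threshold_frequency_hz(work_function_ev):
    # Cutoff frequency f0 at which h f0 = phi, so KE_max = 0
    return work_function_ev / H_EV

# Cesium (phi = 1.96 eV) under violet light at 7.5 x 10^14 Hz:
print(max_kinetic_energy_ev(7.5e14, 1.96))  # about 1.14 eV
print(threshold_frequency_hz(1.96))         # about 4.7 x 10^14 Hz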
The assumption that the transfer of energy is a discontinuous or indivisible process that takes place in bursts of size ΔE = hf is consistent with all the experiments dealing with this type of phenomenon. When this assumption is combined with the experimental observation that a quantum is an indivisible unit of energy, the photoelectric effect cannot be regarded as a gradual transfer process in which energy is exchanged in a continuous fashion. The photoelectric effect definitively proves the particle nature of light.
The transfer of a quantum is one of the basic processes of nature, which cannot be explained in terms of other processes, and may thus be called an elementary process, in the same way as an electron is called an elementary particle.
In sum, we have seen that light behaves like waves under certain circumstances and like particles under others. We now know that radiant energy is transported as photons guided by a wave field along their path. This wave-particle dual character of light is now recognized in modern physics.
X Ray Diffraction. The photoelectric effect describes the transfer of energy from photons of light to electrons of matter. We now examine the reverse process, in which all or part of the kinetic energy of moving electrons is converted into photons.
In an experiment conducted in 1895, Wilhelm Roentgen found that an unknown and highly penetrating radiation is produced when fast electrons impinge on matter. This radiation, soon to be named X rays, propagates in straight lines through magnetic and electric fields and through opaque materials, causes phosphorescent substances to glow, and exposes photographic plates. The faster the electrons move, the more penetrating the resulting X rays; and the greater the number of electrons, the greater the intensity of the X rays. Shortly after their discovery, X rays were suspected to be electromagnetic waves, since electromagnetic theory predicts that an accelerated electric charge radiates electromagnetic waves, and a rapidly traveling electron suddenly brought to rest is certainly accelerated. This kind of radiation is given the German name Bremsstrahlung (“braking radiation”). In the early experiments X rays exhibited no refraction, because at very short wavelengths (far below the ultraviolet range) the refractive index of materials decreases to nearly unity (implying straight-line propagation), and no diffraction, because ordinary apertures are far wider than such short wavelengths.
Later experiments with polarization (a wave phenomenon) conducted by the English physicist Charles Glover Barkla (1877-1944) in 1906 definitively established the wave nature of X rays. Barkla designed an experiment in which a beam of unpolarized X rays impinges on a block of carbon, which scatters them. Assuming that X rays are electromagnetic waves, the carbon electrons are vibrated by the electric vectors of the X rays and reradiate. Since the electric vector E (Section 3.4 above) of an electromagnetic wave is perpendicular to its direction of motion, the polarization plane is perpendicular to this direction, and the scattered X ray is plane-polarized. Picture this experiment graphically. Imagine three directions, y being vertical, and x and z being horizontal, originating from one point where the first block of carbon is placed. The initial X ray beam coming from the −z direction toward the carbon scatterer has an electric vector that lies in the xy plane only; therefore the target carbon electrons are induced to vibrate in the xy plane. To demonstrate polarization, a scattered X ray that moves in the +x direction from the point of origin can have an electric vector in the y direction only. Now place a second block of carbon to the right of the first and in the path of this polarized X ray. The electrons of this carbon block are restricted to vibrate in the y direction, and therefore reradiate X rays that propagate in the xz plane only, and not in the y direction. All this polarization is in accord with electromagnetic theory, thus demonstrating that X rays are electromagnetic waves.
To further reinforce the finding, experimentation with X ray diffraction was conducted in 1912 by the German physicist Max von Laue (1879-1960) using crystals, because the spacings of crystal atoms are about the same order of magnitude as the hypothesized X ray wavelengths. A monochromatic beam of X rays that strikes a crystal is scattered in all directions within it. Owing to the regular arrangement of the crystal atoms, in some directions the scattered waves constructively interfere with one another while in other directions they destructively interfere. During these studies, the wavelengths of X rays were found to be 1.3 x 10^-11 to 4.8 x 10^-11 m, overlapping gamma rays at the shorter-wavelength end and ultraviolet rays at the longer-wavelength end. The success of the experiment further confirmed the nature of X rays as electromagnetic waves.
From the photoelectric effect and X ray diffraction, the wave-particle duality of light is amply demonstrated. In the photoelectric effect the quantum theory predicts correctly that the maximum photoelectron energy depends on the frequency of the incident light and not, as the wave theory suggests, on its intensity. It also explains why even very faint light can induce immediate emission of photoelectrons, whereas the wave theory would require a “soaking up” period. The quantum theory allows for a threshold frequency below which no emission takes place, a fact for which the wave theory offers no explanation.
On the other hand, the wave theory is strikingly successful in explaining wave phenomena of reflection, refraction, diffraction, interference, polarization and superposition among all kinds of electromagnetic waves ranging from visible light to X rays. Quantum theory offers little explanation in these areas.
The photoelectric effect and the double slit experiment show that both light and matter share the same wave-particle dual character. We conclude that the wave-particle duality of all matter is just the way nature is.
4.3 The Heisenberg Uncertainty Principle
The quantum theory allows for the wave-particle duality nature of light. Light energy is transferred in discrete quantities called quanta. Extending this idea to atoms, the Danish physicist Niels Bohr (1885-1962) proposed a model of a hydrogen atom in which the atom is made of a nucleus with an electron circling it in an orbit whose size and shape are governed by quantum theory. In this model an electron can move to a lower orbit only if it loses the amount of energy equal to the energy difference between the initial and the final orbit. Conversely an electron can move into a higher orbit only if it gains sufficient energy. Otherwise the electron must stay in its orbit. The Bohr model is successful in explaining the spectral lines of elements with low atomic numbers, but breaks down in the case of helium atoms and atoms with higher atomic numbers.
In 1927, the German physicist Werner Heisenberg (1901-1976), in his study of the electron, assumed that a particle has essential characteristics that include a definite position in space and a well-defined velocity at a given instant. He tried to determine the position and velocity of an electron experimentally, and came to the conclusion that it is not possible to measure both its position and its velocity precisely at the same time. Some uncertainty remains in the measured value of the position or the velocity or both, regardless of the measuring apparatus.
One way of fixing the position of a particle is by using Cartesian coordinates x, y, and z, where the x and y axes lie in a horizontal plane and the z axis is vertical. Consider a single slit experiment conducted in an attempt to find the exact position of an electron. We pass a single electron through a narrow slit of width w in a screen. The electron beam moves parallel to the z axis, with the long side of the slit parallel to the x axis, to illuminate a second screen located some distance L away from the first screen. The distance L is vastly greater than the width w of the slit. Inside the slit, every point lies between coordinates x1 and x2, such that x2 − x1 = w. When an electron passes through the slit, we know that its x coordinate must be between x1 and x2, but we cannot determine the exact position of the electron in the slit. There is therefore an uncertainty Δx, defined by Δx = x2 − x1. This uncertainty is the width w of the slit itself. We can narrow the slit to an arbitrarily minuscule size, but the uncertainty still remains.
Now consider the velocity and momentum of the electron as it emerges from the slit. It is diffracted in an arbitrary and unpredictable direction, and acquires an indeterminate velocity and momentum in the x direction. (See Section 2.2.2 above for a discussion of diffraction.). Thus the process of determining the x coordinate necessarily entails an indeterminate velocity and momentum. In a diffraction pattern the central portion of the pattern is a broad brightly illuminated fringe with center N flanked by dark fringes, which in turn are flanked by bright fringes and so on, with bright and dark fringes alternating as they spread outward. The center of the first dark fringe adjacent to the central bright fringe is called D1.
Let us focus on momentum. Most electrons fall inside the broad central fringe. The electron that is diffracted directly to the center of the central fringe will have the momentum pz equal to the ratio of the Planck’s constant h and the particle’s de Broglie wavelength λ,
pz = h / λ.
In terms of velocity and mass, momentum is
pz = mv
where m is the particle’s mass and v its velocity. But there is a small probability that an electron will be diffracted at a large angle to near D1, the center of the adjacent dark fringe, and thus acquire a large extra component of momentum pD in the x direction. For this electron, the electron’s momentum p would be the vector sum
p = pz + pD.
The quantity pD is the uncertainty in the x component of momentum. Knowing that the distance y1 = ND1 from the center of the central fringe N to the center of the adjacent dark fringe D1 is equal to ND1 = λL / w (w being the width of the slit), that the initial momentum of the electron is pz = h / λ, and that the ratio of the two momentum vectors pD / pz is equal to the ratio ND1 / L, we can derive
pD w = h.
pD is the uncertainty in the x component of momentum, also denoted Δpx; w is the uncertainty in the x coordinate, Δx. Hence this last equation, known as Heisenberg’s uncertainty principle, becomes:
Δpx Δx ≈ h
The symbol ≈ means “approximately.” Thus the product of the uncertainties in momentum and position approaches Planck’s constant, a very small fundamental constant equal to 6.626 x 10^-34 J·s, which sets the limit of uncertainty. From this it is clear that uncertainty affects only the microscopic world of atoms and particles; in the macroscopic everyday world we are not noticeably affected by it.
To summarize, the Heisenberg’s uncertainty principle states that it is impossible to determine simultaneously the position and the momentum of a body.
Analogous uncertainties also exist for the y and z coordinates, Δy and Δz, and their corresponding y and z components of momentum, Δpy and Δpz. We thus have the following uncertainties for the y and z coordinates:
Δpy Δy ≈ h
Δpz Δz ≈ h
Now let us consider velocity. If the velocity of the electron is small compared with the speed of light c and its rest mass is m0, then the uncertainty of momentum on the x coordinate is
Δpx = m0 Δvx
where Δvx is the uncertainty in the x component of its velocity, so that the uncertainty principle may be written as Δvx Δx ≈ h / m0 at small speeds. The equation will not apply if the electron’s speed approaches the speed of light.
From the relation just seen we derive the uncertainty in the x component of the velocity as
Δvx ≈ h / (m0 Δx).
This relation shows that the uncertainty in velocity is inversely proportional to the width of the slit. Suppose we want to pinpoint the position of the electron by narrowing the slit Δx to a very small size. Then the velocity uncertainty Δvx becomes very large, producing a broader diffraction pattern. Thus an increase in the precision with which the electron’s x coordinate is determined entails a greater uncertainty in the x component of velocity, and hence of momentum. Conversely, the uncertainty in velocity can be reduced only by widening the slit, thereby causing greater uncertainty in the x coordinate.
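The trade-off can be made concrete with a few lines of code; the slit widths below are arbitrary illustrative values:

    h = 6.626e-34
    m0 = 9.109e-31   # electron rest mass, kg

    for w in (1e-6, 1e-8, 1e-10):        # progressively narrower slits (illustrative)
        dvx = h / (m0 * w)               # Δvx ≈ h / (m0 Δx) with Δx = w
        print(f"w = {w:.0e} m -> Δvx ≈ {dvx:.2e} m/s")

Each hundredfold narrowing of the slit makes the velocity a hundred times more uncertain.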
The uncertainty principle just seen can also be applied to energy and time. Suppose we measure time by means of a particle moving at a known, measured velocity v: we must determine when the particle has covered the distance x = vt from its accurately known original position. The uncertainty in time is then given by Δt = Δx / v. Since the particle has an uncertainty in position Δx ≈ λ, the uncertainty in the time at which the particle is at a given position is Δt = λ / v. Now the wave packet that represents the particle takes a time Δt to pass any given point. A wave packet of finite duration can be decomposed into components with a range of wavelengths and a range of frequencies Δν (the Greek letter nu) ≈ 1 / Δt. Since the particle’s energy is E = hν, and ν = v / λ for the packet, its range of energy is ΔE = h Δν ≈ hv / λ. Hence the product of ΔE and Δt, the uncertainty principle for energy and time, can be written as:
ΔE Δt ≈ h
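For example, an excited atomic state that lives only a short time cannot have a perfectly sharp energy. The minimal sketch below (assuming a typical excited-state lifetime of 10 ns, a textbook figure not taken from this discussion) estimates the resulting natural width of the emitted spectral line:

    h = 6.626e-34           # J·s
    dt = 1e-8               # assumed excited-state lifetime, 10 ns
    dE = h / dt             # ΔE ≈ h / Δt, in joules
    print(dE / 1.602e-19)   # ≈ 4e-7 eV: the line has a small but nonzero natural width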
Heisenberg arrived at the uncertainty principle by more careful calculations. First, since the uncertainty principle assumes the use of the de Broglie wave group, or wave packet, we examine some related concepts. A wave packet consists of a group of waves of slightly varying wavelengths, with phases and amplitudes such that they interfere constructively over a small region of space and destructively outside it, where the amplitude quickly falls to zero. A short pulse of electromagnetic waves is an example of a wave packet.
The concepts of angular frequency ω and of wave number k used to describe waves and wave groups are pertinent here. When a particle is in uniform circular motion, going around a circle ν (the Greek letter nu, not the Roman letter v) times per second (i.e., with the frequency of a de Broglie wave), it sweeps out 2πν radians per second. The circumference of a circle is 2π times the radius, so one full revolution of 360˚ corresponds to 2π radians, and one radian is 360˚ / 2π = 57.3˚. If the particle has a period T during which it makes one complete revolution, the particle’s angular frequency, i.e., its frequency as it orbits its circular trajectory, obeys ωT = 2π, or ω = 2π / T. And since the period is the reciprocal of the frequency, T = 1 / ν, the angular frequency of the particle is ω = 2πν, expressed in radians per second, where ν (nu) is the frequency in revolutions per second.
The wave number k = 2π / λ measures how many radians of phase a wave accumulates per unit distance; it equals the number of radians in a wave train 1 m long. As can be seen, the wave number is, up to the factor 2π, the reciprocal of the wavelength. We represent the spread in wave number of the waves with appreciable amplitude as Δk. For a wave packet the product Δx Δk satisfies Δx Δk ≥ ½. The wave number corresponding to the de Broglie wavelength is
k = 2πpx / h
The uncertainty Δk in the wave number of the de Broglie waves associated with the electron results in an uncertainty Δpx in its momentum given by the formula:
Δpx = h Δk / 2π
Since Δx Δk ≥ ½, Δk ≥ 1 / (2Δx), and the Heisenberg uncertainty principle is expressed by:
Δpx Δx ≥ h / 4π
Heisenberg’s uncertainty principle thus says that the product of the uncertainty Δx in the position of a body at a given instant and the uncertainty Δpx in its momentum in the x direction is equal to or greater than h / 4π, which is the accepted modern value. Since h / 2π is the basic unit of angular momentum, abbreviated ħ and equal to 1.054 × 10⁻³⁴ J·s, the uncertainty principle can be written:
Δpx Δx ≥ h / 4π, or Δpx Δx ≥ ħ / 2.
Wave Function. With matter waves a number of new questions arise. How do matter waves behave if electrons, which are particles, are described by matter waves? What determines the value of a matter wave at a given location? What is the significance of a matter wave having a larger value in one location and a smaller value at another? The Austrian physicist Erwin Schrödinger (1887-1961), the German physicist Max Born (1882-1970), and others provided an answer. One of the fundamental equations of physics, ranking in importance with Newton’s laws of motion and Maxwell’s equations of electromagnetism, is the equation for quantum mechanics formulated by Schrödinger. Schrödinger’s equation determines the wave function, denoted by ψ (the Greek letter psi). Born interpreted the wave function differently for matter waves than for mechanical waves. For an electron in an atom, the wave function ψ of the electron wave represents not a wave traveling through space and transferring energy from place to place, but a standing wave, like that of a plucked string whose two fixed ends, called nodes, do not vibrate. Between the nodes all points of the string oscillate with varying amplitudes, and the middle part (the antinode) has the highest amplitude. In this picture the wave function represents the amplitude of the electron wave, at each point where oscillation may be found, as a function of time and position. Since an amplitude can be negative while a probability cannot, we use the square ψ² of the value of the wave function. The wave function may vary from point to point in time and space. In matter waves, if the wave function ψ describes a collection of electrons, ψ² at any point is proportional to the number of electrons expected to be found at that point. We call it the probability density. The larger ψ² is, the higher the probability of finding electrons there. Thus the wave function indicates either the amplitude of an oscillating wave, or the probability of finding particles in a diffraction pattern.
To extend the concept of probability further, consider a single electron passing through a slit. Imagine having a series of contiguous counters in lieu of a screen to register the diffraction pattern. Or, if you like, you can have a light-sensitive screen that glows every time an electron hits it, though then you will have to keep track of the number of hits. Each time an electron strikes a counter, its count advances by one. Being a particle, an electron can only be in one place at a time. There is no way to predict or control where and when the electron hits the counters. If we repeat the experiment a large number of times, a diffraction pattern emerges in which the counter at the central fringe receives the most electrons while some counters receive none at all. The fringes farther from the central fringe register a decreasing number of strikes. The number of electrons registered by a counter is proportional to ψ² at that point at a given time. Hence we interpret ψ² (more precisely |ψ|², the squared modulus of ψ) as the probability of finding an electron at a given position and time.
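This counter experiment is easy to mimic numerically. The sketch below is a toy model rather than a solution of the Schrödinger equation: it takes the single-slit intensity (sin β / β)², with β = πw sin θ / λ and assumed values for w and λ, as the probability density |ψ|², and samples electron hits one at a time. Each hit is random, but the accumulated histogram reproduces the fringe pattern:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, w = 5e-11, 1e-9                     # assumed de Broglie wavelength and slit width, m
    theta = np.linspace(-0.15, 0.15, 2001)
    beta = np.pi * w * np.sin(theta) / lam
    intensity = np.sinc(beta / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x), so this is (sin beta / beta)^2
    prob = intensity / intensity.sum()       # treat |ψ|² as a discrete probability distribution

    hits = rng.choice(theta, size=100_000, p=prob)  # one random landing angle per electron
    counts, _ = np.histogram(hits, bins=50)
    print(counts)   # peaks in the central bins, falling off through the side fringes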
Recall the two-slit experiment. We now try to discover the particle-like behavior and the wave-like behavior of an electron (or a photon), i.e., to what extent the particle goes through one slit at a time and to what extent it goes through both slits together. The electrons (having the same initial momentum and the same wave function) are sent one by one through the slits to the detecting screen. We illuminate the region of the slits with a bright light whose wavelength is no greater than the distance between the slits, to ensure that at least one quantum is scattered by each passing electron. With the microscope as the observing apparatus, the light quanta deliver a momentum that is uncertain. Bohm (pp. 118ff.) reasons that this uncertainty tends to destroy the interference pattern. That is, if we make the measurement precise enough to define through which slit the electron passes, the interference pattern disappears. Conversely, if we restore the interference pattern by using a longer wavelength, then the measurement is not precise enough to determine which slit each electron has passed through. In other words, we can observe the particle nature of the electron (by determining which slit it has gone through), or the interference pattern (a wave-like phenomenon), but we cannot observe both precisely at the same time.
Before the observation the wave function covered both slits simultaneously to cause the interference pattern. After the observation the electron is near one of the slits as a wave packet. The process of observation of the electron’s position caused the wave function to collapse from a broad front to a narrow region. The wave function before the observation can only give the probability of collapse to a given region.
The wave function plays such a central role in quantum mechanics that we will see it figure prominently in the remainder of this work.
4.4 Measurements – Probability Versus Determinism
The following is an exposition of the standard quantum mechanics (QM) interpretation known as the Copenhagen interpretation. Though widely accepted, it is not without competition, for QM has opened a Pandora’s box that, more than eighty years after the theory’s inception, still fuels debate.
Concerning the uncertainty principle the question is whether the uncertainty is caused by the limitations of our measurements and measuring instruments, or whether it comes from the very structure of matter itself. Additionally, the dichotomy between classical physics and quantum physics gives rise to issues regarding probability and determinism.
In classical physics, when the position and velocity of a particle are known, its behavior is determined for all time, past and future, by Newton’s laws of motion. In this classical view of complete determinism, the idea of forces as causes of events becomes unnecessary. As Bohm (p. 151) says, “we can no more say that the future is caused by the past than we can say that the past is caused by the future. Instead, we say that the motion of all particles in spacetime is prescribed by a set of rules, i.e., the differential equations of motion, which involve only these spacetime motions alone. The spacetime order of events is, therefore, determined for all time…”
Quantum theory rejects the classical view. Newton’s laws of motion cannot apply to an electron because its momentum and position cannot be determined accurately at the same time, according to the uncertainty principle. Thus the concept of complete determinism fails for the electron. However, although there are no deterministic laws in quantum theory, there are statistical laws. If an electron whose momentum is appropriately controlled is aimed at a given point, it hits close to that point, and over many repetitions of the experiment the hits form a pattern. In other words, there is a statistical tendency for the electron to strike near that point. But it is not possible to predict precisely where any individual electron will strike.
Quantum properties of matter are only potentialities, as Bohm (pp. 132ff.) puts it well. Take an electron. It has the potentialities of demonstrating either wave-like properties or particle-like properties depending on the material system with which it interacts. Consider an electron with a wave packet of definite momentum, and thus of definite wavelength. Such an electron behaves as a wave when it interacts with an appropriate measuring device such as a metal crystal, and its particle-like properties become less pronounced. When this electron interacts with a position-measuring device, such as a microscope, its particle properties are manifested at the expense of wave-like properties. Hence, it is the nature of matter that its properties are potentialities capable of continual and random transformation between wave-like and particle-like properties.
There are no such things at the quantum level as “intrinsic” properties. A particle (e.g., an electron or photon) is neither wave nor particle before its interaction with a material system (whether it be an observer, a measuring device, or the universe). Its properties are not completely deterministically defined before the interaction, as classical physics maintains, but instead exist only statistically, i.e., the properties are incompletely defined potentialities. Whichever set of properties eventually develops depends on the particle itself and the material system with which it interacts.
It has been the assumption of science (in the classical view) that the world can be analyzed into distinct parts with intrinsic properties that work together according to causal laws to form the whole. However, quantum theory holds that no intrinsic properties can be defined except in interaction with other parts (material systems), and that different interactions result in different properties being manifested. Bohm (pp. 138ff.) concludes that it is necessary to reject the classical assumption that the world can be correctly analyzed into parts in favor of the assumption that the entire universe is a single indivisible unit.
This latter assumption raises a serious question. If all “parts” of the world are to be described (Bohm: pp. 584ff.), a paradox arises: As an inseparable part of the universe, the observer has to be somehow distinct from the object in the universe he wishes to describe, which is an impossibility in the view of the universe as a single indivisible unit. However, if we assume that the final stages of the observation are classically describable, the paradox vanishes. The observer can be distinct from the measuring apparatus and the object under measurement if he ignores the effects of quantum links between him and the rest of the world as infinitesimally small. In this way, we can regard the relation between the observer and the observing apparatus as one between two distinct systems interacting only according to the laws of classical physics. Furthermore, any other observer can interact with the measuring apparatus without altering its properties. Any stage of observation eventually becomes classically describable, and may be regarded as a point of separation between the observer and what he observes. For example, when an observer obtains information for a photograph, the system investigated may consist of the objects being photographed, the camera and the light. The observer obtains information from the photographic plate, which is separate and distinct from the observer. Or we can consider the camera and the plate as part of the object. Or pushing back a little further, we can say that the observer observes the image on the retina of his eye, in which case the rest of the world, including the retina, the camera, the light and the plate, is the system under investigation. However, we cannot push the distinction between observer and observed too far, since we still know so little about the human brain. It is more appropriate to consider the brain as a functional unit without going further into the minutiae of its functioning.
As it turns out, some scientists were disturbed by this probabilistic quantum mechanical view of nature. As Einstein said, “God does not play dice with the universe.” In a 1935 article which appeared in Physical Review, Einstein and two young collaborators, Boris Podolsky and Nathan Rosen (EPR), advanced a paradox on the basis of a thought experiment, which was supposed to cast serious doubt on the widely accepted interpretation of the quantum theory (Bohm: pp. 611ff.). They described a complete physical theory as requiring that “every element of physical reality must have a counterpart in a complete physical theory.” EPR argued that “If, without in any way disturbing the system, we can predict with certainty the value of a physical quantity, then there exists an element of reality corresponding to this physical quantity.”
EPR tried to show that the wave function (the heart of quantum mechanics) could not possibly contain a complete description of all elements of reality in a system. Their hypothetical experiment involved a molecule containing two atoms in such a state that their total spin is zero, and that the spin of one atom is pointed in a direction exactly opposite to that of the other. (See Section 4.5 below for more information on spin.) They then supposed that there was a process that separated the two atoms without changing their angular momentum, so that the atoms no longer interacted, but remained correlated with each other. Because of the correlations, it would be possible to measure indirectly the precise value of the spin angular momentum of either of the two atoms by measuring this value in the other particle, without disturbing the first. Thus all three components of this spin must correspond to elements of reality by an EPR assumption, even before any measurement takes place. We assume in quantum mechanics that the wave function contains all relevant physical information about a system. However, the wave function can specify only one component of the spin of any atom without disturbing it. Therefore, the wave function provides an incomplete description of the elements of physical reality.
Bohm (pp. 620ff.) counters that quantum theory has a different assumption about physical reality at the microscopic level. Unlike classical physics, which assumes that the world can be analyzed into distinct and separate parts (elements of physical reality) that can be precisely measured, i.e., they correspond to precise mathematical quantities, quantum theory regards reality as an indivisible unit whose properties exist as potentialities at the microscopic (atomic and subatomic) level. There is no one-to-one mapping of properties and mathematical equations as is assumed in classical physics. Instead at the quantum level, the wave function provides for a complete mathematical description of any system only in the sense of statistical correspondence. To understand how the wave function works, we hypothesize that the properties of a system exist only as incompletely defined potentialities, which are then realized when they interact with a classical system, such as a measurement apparatus. Take the example of position and momentum of an electron. According to the uncertainty principle, these complementary properties of a particle cannot be simultaneously specified to any arbitrary degree of precision. Yet either one has the potential of being more clearly defined when it comes in contact with an appropriate measuring apparatus, but only at the expense of the definition of the other. Neither of the properties may be called intrinsic properties of the electron since the realization of either potentiality depends as much on the electron as on the system with which it interacts. Thus EPR has not shown that a paradox exists, nor that quantum mechanics is an incomplete description of reality.
At this point one may ask, “Which theory is a good description of the nature of matter, classical physics or quantum theory?” The answer is both. Bohm (pp. 624-8) clearly explains that quantum theory cannot work without the classical paradigm. Each has its strengths and areas in which it shines. Classical mechanics, which includes the theory of relativity, is very successful in dealing with large-scale (macroscopic) systems, such as molecules, bacteria, people, the earth, the stars, the universe, and one of the four forces of nature that governs them, gravity. Quantum mechanics is peerless in describing small-scale elements, such as atoms, protons, electrons, photons, quarks, a host of other elementary particles and the remaining three forces of nature, the electromagnetic force, the weak nuclear force (combined as electroweak force) and the strong nuclear force, which operate at this level. Quantum theory presupposes the correctness of classical physics since among the potentialities of variables, the final stage of a measurement is a classical event, and the wave function collapses into one potentiality that can now be classically described with precision.
This state of affairs is messy. How can nature, physical reality, be so untidy that scientists need general relativity theory in classical mechanics, and quantum mechanics in order to describe it? To be fair, physicists have been trying for the last fifty years to find one theory that can simply and elegantly account for both large-scale and small-scale objects, and the unification of the forces of nature, the Theory of Everything (TOE), as it is jocularly called. Einstein spent the last thirty years of his life in a vain attempt to formulate a unified theory. Now many scientists are furiously working on the most promising of them all, Superstring Theory, which is evolving into the M-Theory. But this is another fascinating story.
4.5 The Measurement Problem and Schrödinger’s Cat Paradox
The measurement problem is further highlighted by the Schrödinger’s cat paradox. Recall Heisenberg’s uncertainty principle, whereby it is not possible to define simultaneously both the position and the momentum of a particle with an arbitrary degree of accuracy, since the definition of one necessarily entails conditions that make the definition of the other impossible. Furthermore, since everything at the quantum level, which is fully described by its wave function, exists only as continually interacting potentialities, one of which becomes realized only upon measurement, it is natural to ask: what is a measurement? Does an interaction qualify as a measurement only when a human observes it? And how does a quantum system select among its many potential states?
According to the Copenhagen interpretation of quantum theory formulated by the Danish physicist Niels Bohr (1885-1962), measurement is what forces a quantum system to adopt a definite state. On how the quantum system knows to select one state among many, Bohr did not elaborate. Not content with this position, a number of scientists have searched for a different interpretation of quantum mechanics, and their work has led some to theories of the multiverse.
For now let us examine a paradox posed by quantum theory in the form of a Gedanken (thought) experiment dreamed up by the Austrian physicist Erwin Schrödinger (1887-1961) in 1935. In the standard interpretation of quantum mechanics, a quantum object exists in a superposition of states, which then collapses to one state when it is measured. In the famous Schrödinger experiment, of which several versions exist, a live cat is put in a closed box along with some radioactive material that has a 50-50 chance of decaying within a given time interval. If a Geiger counter detects a decay, it trips a hammer that breaks a flask containing poison gas, killing the cat. If there is no decay, the cat lives. Since radioactive material may cause concern even in a thought experiment, another version calls for a device that releases a photon through a filter and records its passage. If a photon passes through the filter, its detection triggers the breakage of the vial of poisonous gas, and the cat is killed. Note that this is only a thought experiment, so no real cat will die.
Here is the interesting question: what happened inside the box after the photon was released but before any human observer lifted the lid? Simple. Either the photon was detected or it was not, either the poison gas was released or it was not, and either the cat was killed or it was not. But this assumes that the passage of a photon is enough to constitute a measurement. This leads to the question, what is a measurement? In one view, a measurement is an interaction between physical systems that correlates the value of a quantity in one physical system with the value of a quantity in the other physical system. Thus, when the Geiger counter (measuring apparatus) interacts with the radioactive material (object of measurement) and detects a decayed atom, it picks up the charged particle. This process results in amplification, which entangles the microscopic system being measured with the macroscopic state of the measuring instrument, and triggers an irreversible quantum change inside the counter (an electron cascade) that causes an audible click. It is at the point when the two physical systems interact with each other that the wave function collapses into one quantum state. The half-dead and half-alive quantum states cease to interact with each other, and decohere. Decoherence, the suppression of interference, is what effectively brings about the collapse of the wave function. As a consequence, only one observable outcome exists.
What if it takes an observer to trigger the measurement? Let us take a simplified view. In that case, the cat must be in some indeterminate state, neither dead nor alive, but with a potentiality of resolving into one or the other state, until an observer opens the box. The dead and alive cat-states are in superposition (existing simultaneously) inside the cat, constantly interacting with each other. In other words, there is constant, random interference of the different quantum states. But what does a half-dead half-alive cat mean? It is the linear combination of states that exists before a conscious observer makes the observation. Even if we consider the light that impinges on the observer’s eye, his retina, optic nerves, and brain as part of the measuring apparatus, consciousness remains a problem, given that consciousness is not a quantum object. Since an observer is also part of the universe and part of the measuring apparatus, and undergoes quantum mechanical changes as a result of the interaction, we are confronted with a thorny issue. Einstein was said to be perturbed by the Copenhagen interpretation. Abraham Pais, in the opening of his biography of Einstein, Subtle Is the Lord, as cited by Julian Brown (p. 106), reported that while walking home with Einstein from the Institute for Advanced Study at Princeton, Einstein suddenly asked Pais if he really believed the moon exists only if he looks at it. Quantum mechanics is beginning to encroach on philosophy and metaphysics.
Now back to the collapse theory. A cat, like everything else in the universe, is made of quantum components: protons, neutrons, electrons. How then does the cat get from its quantum half-dead half-alive cat-state to the classical dead or alive cat-state? By a process variously called decoherence or reduction of the wave packet. Gell-Mann (p. 147) defines decoherence as the mechanism that makes interference terms sum to zero (suppression of interference) and permits the assignment of probabilities. It is interaction of the object with the rest of the universe that triggers decoherence.
Being dead or alive is not a quantum property but a collective attribute of all the cat’s quantum components and their constant, random, independent fluctuations. A complete description of a cat-state must include a specification of the quantum state of every component particle within the cat. If one particle flips from one energy state to another, the entire cat changes into a different quantum state. It is possible to conceive of a hypothetical cat with its half-dead array of quantum states constantly changing randomly and independently, and its half-alive array of quantum states doing likewise. This linear superposition will at some point in the measurement break down as a result of constant random interaction. The decoherence of the quantum components of a complex object such as a cat suppresses the interference arising from their ceaseless interactions with internal and external influences, so that no experiment can show what a half-dead half-alive cat looks like. Thanks to decoherence, for practical purposes a cat is either dead or alive; it is a classical object, just as anything large (from bacteria to galaxies) is a classical or macroscopic object.
Gell-Mann (pp. 152-3) does not see the point of the reams of paper that have been wasted on the Schrödinger cat. Even if it takes a conscious observer to determine the outcome, i.e., by opening the box, the cat’s interaction with the universe will lead to decoherence of the alternative outcomes. In this way the quantum cat behaves exactly like the classical cat, revealing itself as either dead or alive.
Before taking a closer look at the measurement problem, we review certain Quantum Mechanics (QM) concepts that will be useful in the discussion to come.
We have seen that the wave function ψ, derived from the deterministic Schrödinger wave equation to express the probability of finding a particle at a given point in space, is only a probability amplitude. That is, we cannot tell exactly where a photon or electron will end up, only where it is likely to do so. The probability density |ψ|², the squared modulus of the wave function, gives the probability of finding the particle in each particular state. Since ψ is a complex number (it involves the imaginary quantity i = √−1), it is not directly measurable. It is not a physical entity but a mathematical structure, in that it has the mathematical attributes of a wave, such as frequency, amplitude, and phase. It obeys the superposition principle and thus mathematically undergoes interference and diffraction. We say that the state of a physical system is completely specified by its wave function.
According to QM, the state of an electron in an atom (e.g., a hydrogen atom) depends on four values called quantum numbers, obtained from Schrödinger’s equation, whose solutions are wave functions with associated energy values. Because the quantum numbers describe the state of a quantum mechanical system, they form part of the specification of its wave function. Recall that Schrödinger’s equation is as essential to QM as Newton’s laws are to classical mechanics and Maxwell’s equations are to electromagnetism. Here are the four quantum numbers.
1. The principal quantum number n. The quantum number n = 1, 2, 3, … determines the orbital energy, which is identical in the Bohr model and in Schrödinger’s equation: E = (−13.6 eV) / n².
2. The orbital angular momentum quantum number l. For each principal quantum number n there is a separate orbital angular momentum quantum number l, which ranges from 0 to n − 1 in integer steps. This quantum number determines the magnitude of the total orbital angular momentum, which is expressed by the relation L = √(l(l + 1)) ħ. Recall that ħ = h / 2π.
3. The orbital magnetic quantum number ml. This quantum number is related to the direction of the electron’s angular momentum. It gives the component of the orbital angular momentum vector along a direction, usually the z-axis, with a set of values derived by:
Lz = ml ħ
where ml = 0, ±1, ±2, ±3, …, ±l. This means the component of L along any direction, e.g., Lz, is always smaller than the magnitude L given in item 2. Furthermore, only one component (x, y, or z) of the orbital angular momentum can be known precisely at a time.
4. The spin magnetic quantum number ms. Electron spin does not refer to the spinning of the electron on its axis (like Earth’s rotation), since an electron can never be localized in that way, and no such rotation has ever been seen or detected by experiment. Instead, the electron’s magnetic moment and its spin angular momentum, denoted s, are intrinsic properties of the electron that do not depend on orbital motion, just as mass, charge, position, momentum, etc. are other properties of the electron. The spin magnetic quantum number ms has two values: +½, called the spin-up state of the electron, and −½, called the spin-down state. The spin angular momentum quantum number s for electrons has only one value, s = ½. For this reason, electrons are referred to as spin-½ particles.
If we combine the four quantum numbers using their allowed values in the order given, n, l, ml, ms, we get a specification of the state of any atomic electron, each combination corresponding to a distinct energy level. Take a hydrogen atom. There are two states corresponding to n = 1 and l = 0 (these quantum numbers represent the ground state, i.e., the lowest energy level of the system), as shown below:
n = 1 l = 0 ml = 0 ms = + ½
n = 1 l = 0 ml = 0 ms = - ½
Similarly, there are six states corresponding to n = 2 and l = 1:
n = 2 l = 1 ml = 1 ms = + ½
n = 2 l = 1 ml = 1 ms = - ½
n = 2 l = 1 ml = 0 ms = + ½
n = 2 l = 1 ml = 0 ms = - ½
n = 2 l = 1 ml = -1 ms = + ½
n = 2 l = 1 ml = -1 ms = - ½
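The bookkeeping behind such lists is mechanical and easy to automate. Here is a minimal sketch (ours, purely illustrative) that enumerates the allowed combinations for a given n and confirms the familiar count of 2n² states per shell:

    def states(n):
        """All allowed (n, l, ml, ms) combinations for principal quantum number n."""
        return [(n, l, ml, ms)
                for l in range(n)               # l = 0, 1, ..., n - 1
                for ml in range(-l, l + 1)      # ml = -l, ..., +l
                for ms in (0.5, -0.5)]          # spin up, spin down

    for n in (1, 2, 3):
        print(n, len(states(n)), 2 * n**2)      # the count always equals 2n²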
We note that the states as described by the quantum numbers just discussed form part of the wave function, which describes them as potentialities, and assigns a certain probability to each.
We now turn to a formal structure of quantum mechanics. In the standard von Neumann-Dirac theory of quantum mechanics (as cited by Jeffrey Barrett in Everett’s Relative-State Formulation of Quantum Mechanics), the following four principles hold:
1. Representation of States: The possible physical states of a system S are represented by the unit-length vectors in a Hilbert space (which for present purposes one may regard as a vector space with an inner product). The physical state at a time is then represented by a single vector in the Hilbert space.
2. Representation of Properties: For each physical property P that one might observe of a system S there is a linear (so-called projection) operator P (on the vectors that represent the possible states of S) that represents the property.
3. Eigenvalue-Eigenstate Link: A system S determinately has physical property P if and only if P operating on S (the vector representing S's state) yields S. We say then that S is in an eigenstate of P with eigenvalue 1. S determinately does not have property P if and only if P operating on S yields 0.
4. Dynamics: (a) If no measurement is made, then a system S evolves continuously according to the linear, deterministic dynamics, which depends only on the energy properties of the system. (b) If a measurement is made, then the system S instantaneously and randomly jumps to a state where it either determinately has or determinately does not have the property being measured. The probability of each possible post-measurement state is determined by the system's initial state. More specifically, the probability of ending up in a particular final state is equal to the norm squared of the projection of the initial state on the final state.
The first principle defines the Hilbert space associated with a physical system S. Every state of the physical system is represented by a vector. The length of a vector is called its norm; a vector of norm 1 is a unit vector. The states of the system can be represented as sequences of numbers (a1, a2, …, an) called ordered n-tuples, and the set of all ordered n-tuples is called n-space, denoted Rn. A set of objects V over a field F (of scalars) on which addition and scalar multiplication are defined (i.e., closed under addition and scalar multiplication), and which satisfies a number of conditions, is called a vector space V. We call the elements of this vector space V vectors. The set V = Rn with standard addition and scalar multiplication is a vector space. Thus, if u and v are elements of V, then the object u + v exists in V and is called the sum of u and v. For each element u in V and each scalar k in F there exists in V the product ku, called the scalar multiple of u by k.
The Hilbert space associated with each physical system is a complex vector space whose unit vectors correspond to the states of the system, represented by the wave function ψ. A Hilbert space is a complex linear vector space (in general infinite-dimensional) equipped with a scalar product. Its linearity means that the superposition principle holds, i.e., we can form complex-number-weighted sums of states, such as ψ1 + ψ2 or, more generally, aψ1 + bψ2. Each ‘dimension’ of the Hilbert space corresponds to one of the different physical states of the quantum system.
A few definitions are needed to understand the second principle. If V and W are two vector spaces, and F is a function that associates a unique vector in W with each vector in V, we say that F maps V into W, and write F:V→W. If F associates the vector w with the vector v, we write w = F(v) and say that w is the image of v under F. The vector space V is called the domain of F, and the vector space W is called the image space of F. For example, the function F below maps R2 into R3:
F (x,y) = (x - y, x + y, 5x)
And the image of a vector v = (x, y) in R2 (the domain of F) is the vector w = (x − y, x + y, 5x) in R3 (the image space of F).
If the function F:V→W is a function from the vector space V into the vector space W, then F is called a linear transformation (or linear operator) if:
(a) F (u + v) = F(u) + F(v) for all vectors u and v in V
(b) F (ku) = kF(u) for all vectors u in V and all scalars k.
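As a quick numerical check (a minimal sketch of our own; the matrix below simply encodes the example F(x, y) = (x − y, x + y, 5x) given above), conditions (a) and (b) can be verified on randomly chosen vectors:

    import numpy as np

    A = np.array([[1, -1],
                  [1,  1],
                  [5,  0]])        # matrix encoding F(x, y) = (x - y, x + y, 5x)
    F = lambda v: A @ v

    rng = np.random.default_rng(1)
    u, v, k = rng.normal(size=2), rng.normal(size=2), 3.7
    print(np.allclose(F(u + v), F(u) + F(v)))   # True: condition (a), additivity
    print(np.allclose(F(k * u), k * F(u)))      # True: condition (b), homogeneity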
The second principle says that physical quantities or properties (also known as observables) in the Hilbert space are represented by linear operators, as just defined. The operators representing observables are Hermitian operators, which have only real eigenvalues.
Now let us define the eigenvector of a linear operator and its eigenvalue. If T:V→V is a linear operator on vector space V, then a nonzero vector x in V is called an eigenvector of T if there exists a scalar λ such that Tx = λx. The scalar λ is called the eigenvalue of T corresponding to the vector x. In the context of quantum mechanics, the Hermitian operator is the physical quantity being observed, and its eigenvalue obtained is the outcome of the measurement.
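As a concrete illustration (ours, not part of the formal axioms): the observable for the z component of a spin-½ particle’s spin is the Hermitian operator (ħ/2)σz, built from the Pauli matrix σz. A short numerical sketch, with ħ set to 1 for convenience:

    import numpy as np

    sigma_z = np.array([[1,  0],
                        [0, -1]])   # Pauli matrix; the observable is (ħ/2) σz
    eigenvalues, eigenvectors = np.linalg.eigh(0.5 * sigma_z)   # ħ = 1 here
    print(eigenvalues)    # [-0.5  0.5]: the only possible measurement outcomes
    print(eigenvectors)   # columns are the corresponding eigenstates (spin down, spin up)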
By the third principle, for a system determinately to have a property, the vector representing the state of the system must lie in the state space of the property, in which case this vector is parallel to an eigenvector of the property. If the vector representing the state of the system is orthogonal (perpendicular) to that space, the system determinately does not have the property. Since most state vectors are neither parallel nor orthogonal, a typical system neither determinately has nor determinately lacks a given property. A related axiom states that the Hilbert space associated with a composite system (observer + object system) is the tensor product of the Hilbert spaces associated with the simple systems (in the standard, non-relativistic theory: the individual particles) of which it is composed.
A tensor product of two vector spaces is a way of creating another vector space. It is analogous to multiplication of integers, and obeys the associative multiplication and additive laws. Likewise a tensor product of two Hilbert spaces is another Hilbert space.
Finally, the last axiom posits two regimes for any system, one when the system is not under measurement and one when it is. When there is no measurement, the deterministic, linear Schrödinger equation takes the state at one time into a unique state at another time (a process called time evolution). The equation’s linearity allows superpositions of states to persist under this evolution. After measurement of an observable on a system, the wave function collapses into the eigenstate corresponding to the eigenvalue observed, with the probability assigned to it in accordance with the collapse postulate.
Another presentation of the measurement formalism attributed to von Neumann is given by Henry Krips in Measurement in Quantum Theory, as follows:
Von Neumann also intervened decisively into the measurement problem. Summarizing earlier work, he argued that a measurement on a quantum system involves two distinct processes that may be thought of as temporally contiguous stages… In the first stage, the measured quantum system S interacts with M, a macroscopic measuring apparatus for the physical quantity Q. This interaction is governed by the linear, deterministic Schrödinger equation, and is represented in the following terms: at time t, when the measurement begins, S, the measured system, is in a state represented by a Hilbert space vector f that, like any vector in the Hilbert space of possible state vectors, is decomposable into a weighted sum - a "linear superposition" - of the set of so-called "eigenvectors" {fi} belonging to Q. In other words, f = ∑ ci fi for some set {ci} of complex numbers. fi, the eigenvector of Q corresponding to possible value qi, is that state of S at t for which, when S is in that state, there is unit probability that Q has value qi. M, the measuring apparatus, is taken to be in a "ready" state g at time t when the measurement begins. According to the laws of QM, this entails that S+M at t is in the "tensor product" state ∑ ci fi g.
By applying the Schrödinger equation to this product state, we deduce that at time t', when the first stage of the measurement terminates, the state of S+M is ∑ ci fi gi, where gi is a state in which M registers the value qi. Such states, represented by a linear combination of products of the form fi gi, have been dubbed "entangled states".
After the first stage of the measurement process, a second non-linear, indeterministic process takes place, the "reduction of the wave packet", that involves S+M "jumping" (the famous "quantum leap") from the entangled state ∑ ci fi gi into the state fi gi for some i. This, in turn (according to the laws of QM) means that S is in state fi and M is in the state gi, where gi, it is assumed, is the state in which M registers the value qi. Let t" denote the time when this second and final stage of the measurement is finished. It follows that at t", when the measurement as a whole terminates, M registers the value qi. Since the reduction of the wave packet is indeterministic, there is no possibility of predicting which value M will register at t". We can conclude only that M will register some value.
The second stage of the measurement, with its radical, non-linear discontinuities, was from its introduction the source of many of the philosophical difficulties that plagued QM, including what von Neumann referred to as its "peculiar dual nature."
The above passage, with its mathematical jargon simplified, describes von Neumann’s view of the measurement process as follows. Measurement consists of two contiguous stages. In the first stage, a quantum system S, whose physical property Q (the observable) is being measured, interacts with the macroscopic measuring apparatus M. This interaction obeys the linear, deterministic Schrödinger equation, which describes the state of the system S at the initial time as a vector decomposable into a weighted sum of vectors representing the possible values of the quantity Q (a linear superposition of eigenvectors in the Hilbert space). Among these, the eigenvector corresponding to a possible value qi of Q represents the state of S for which, when S is in that state, qi is certain to be registered by M. M is in the “ready” state at this time. S and M are now linked as the composite S+M, in a state in which each of the values of Q has some probability of being realized as the measured outcome.
When the first stage of measurement terminates, by the Schrödinger equation the state of S+M is such that each eigenvector fi of Q is paired with a state gi of M, gi being the state in which M registers the value qi. S+M is a composite system tied together by correlation. We call this linear combination of paired states an “entangled state.”
In the second and final stage, a non-linear, indeterministic process occurs which induces S+M to jump (the quantum leap) from its entangled state to a state in which M has registered the outcome of the measurement, namely some value qi of Q. Since the reduction of the wave packet is indeterministic, we cannot predict which value M will register.
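The two stages can be caricatured in a few lines of code. The sketch below is a toy model of the formalism, not a physical simulation: the amplitudes ci are arbitrary assumed values, the Born rule supplies the probabilities |ci|², and the “jump” is a random draw:

    import numpy as np

    rng = np.random.default_rng(42)
    c = np.array([0.6, 0.8j])        # assumed amplitudes: |c1|² = 0.36, |c2|² = 0.64
    probs = np.abs(c) ** 2           # Born rule probabilities

    def measure():
        i = rng.choice(len(c), p=probs)   # the indeterministic "jump" to outcome i
        collapsed = np.zeros_like(c)
        collapsed[i] = 1.0                # post-measurement state is the eigenvector fi
        return i, collapsed

    outcomes = [measure()[0] for _ in range(10_000)]
    print(np.bincount(outcomes) / len(outcomes))   # ≈ [0.36, 0.64], matching |ci|²

No single run can be predicted, yet the statistics of many runs are fixed by the initial state, which is exactly the situation described above.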
Before leaving the topic of measurement, we quote another view of the problem from Guido Bacciagaluppi, in The Role of Decoherence in Quantum Theory.
Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed. Time evolution (the Schrödinger equation) preserves such sums. Thus, if a quantum mechanical system is described by a superposition of two given states, say, spin in x-direction equal + ½ and spin in x-direction equal - ½, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components, one in which the apparatus has coupled to (has registered) x-spin = + ½, and one in which the apparatus has coupled to (has registered) x-spin = - ½. The problem is that while we may accept the idea of microscopic systems being described by such sums, we cannot even begin to imagine what it would mean for the (composite of electron and) apparatus to be so described.
Now, what happens if we include decoherence in the description? Decoherence tells us, among other things, that there are plenty of interactions in which differently localised states of macroscopic systems couple to different states of their environment. In particular, the differently localised states of the macroscopic system could be the states of the pointer of the apparatus registering the different x-spin values of the electron. By the same argument as above, the composite of electron, apparatus and environment will be a sum of a state corresponding to the environment coupling to the apparatus coupling in turn to the value + ½ for the spin, and of a state corresponding to the environment coupling to the apparatus coupling in turn to the value - ½ for the spin. So again we cannot imagine what it would mean for the composite system to be described by such a sum.
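Decoherence lends itself to a minimal quantitative toy model (ours, not Bacciagaluppi’s). Starting from an equal superposition of spin-up and spin-down, each additional environment degree of freedom that couples to the system multiplies the off-diagonal (interference) terms of the density matrix by an overlap factor, assumed here to be 0.9; the interference terms vanish exponentially while the diagonal probabilities survive:

    import numpy as np

    rho = 0.5 * np.array([[1, 1],
                          [1, 1]], dtype=complex)   # pure superposition (up + down)/√2

    overlap = 0.9   # assumed overlap of the two environment states per coupled particle
    for n in (1, 10, 100):
        decohered = rho.copy()
        decohered[0, 1] *= overlap ** n     # off-diagonal (interference) terms decay
        decohered[1, 0] *= overlap ** n     # exponentially with the number of couplings n
        print(n, abs(decohered[0, 1]))      # 0.45, ~0.17, ~1.3e-5

    # The diagonal entries (the 50/50 probabilities) survive: an effectively classical mixture.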
This stage of the measurement has been the source of continuing controversy among physicists and philosophers of physics. The measurement problem remains the central conceptual difficulty of quantum mechanics.
At this point the reader will have seen that quantum theory is still very much under discussion and debate. The theory is at times quite counterintuitive, even bizarre. Yet it is the only theory that stands the test of observation and experimentation at the microscopic level. Fortunately most scientists are not bothered by counterintuitive or bizarre theories, so long as they withstand the assault of counterevidence and empirical challenge. Quantum theory has so far passed muster.
To see how the quantum theory has given rise to the concept of the multiverse, quantum computing, and more, continue the discussion with Hugh Everett III, Max Tegmark, and David Deutsch in Quantum Mechanics, Part II.
e471bc89183de563 | From Wikipedia, the free encyclopedia
Nuclide data
Name, symbol: hydrogen-2, 2H or D
Neutrons: 1
Protons: 1
Natural abundance: 0.0156% (Earth)
Isotope mass: 2.01410178 u
Spin: 1+
Excess energy: 13,135.720 ± 0.001 keV
Binding energy: 2,224.52 ± 0.20 keV
Deuterium (symbol D or 2H, also known as heavy hydrogen) is one of two stable isotopes of hydrogen. The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the far more common hydrogen isotope, protium, has no neutron in the nucleus. It has a natural abundance in Earth's oceans of about one atom in 6,420 of hydrogen. Thus deuterium accounts for approximately 0.0156% (or on a mass basis: 0.0312%) of all the naturally occurring hydrogen in the oceans, while the most common isotope (hydrogen-1 or protium) accounts for more than 99.98%. The abundance of deuterium changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water).
The deuterium isotope's name is formed from the Greek deuteros meaning "second", to denote the two particles composing the nucleus.[1] Deuterium was discovered and named in 1931 by Harold Urey, earning him a Nobel Prize in 1934. This was followed by the discovery of the neutron in 1932, which made the nuclear structure of deuterium obvious. Soon after deuterium's discovery, Urey and others produced samples of "heavy water" in which the deuterium has been highly concentrated with respect to the protium.
Because deuterium is destroyed in the interiors of stars faster than it is produced, and because other natural processes are thought to produce only an insignificant amount of deuterium, it is thought that nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, and that the basic or primordial ratio of hydrogen-1 (protium) to deuterium (about 26 atoms of deuterium per million hydrogen atoms) dates from that time. This is the ratio found in the gas giant planets, such as Jupiter.[2][3][4] However, different astronomical bodies are found to have different ratios of deuterium to hydrogen-1, and this is thought to be the result of natural isotope separation processes that occur from solar heating of ices in comets. Like the water cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. In fact, the discovery of deuterium/protium ratios in a number of comets very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogen atoms) has led to theories that much of Earth's ocean water has a cometary origin.[2][3] The deuterium/protium ratio of the comet 67P/Churyumov-Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth water, a figure that is the highest yet measured in a comet.[4]
Deuterium/protium ratios thus continue to be an active topic of research in both astronomy and climatology.
Differences between deuterium and common hydrogen (protium)
Chemical symbol
[Figure: Deuterium discharge tube]
Deuterium is frequently represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred.[5] A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (1H) (deuterium has a mass of 2.014102 u, compared to the mean hydrogen atomic weight of 1.007947 u, and protium's mass of 1.007825 u) confers non-negligible chemical dissimilarities with protium-containing compounds, whereas the isotope weight ratios within other chemical elements are largely insignificant in this regard.
In quantum mechanics the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For the hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels.
The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the atomic nucleus. For hydrogen, this amount is about 1837/1836, or 1.000545, and for deuterium it is even smaller: 3671/3670, or 1.0002725. The energies of spectroscopic lines for deuterium and light-hydrogen (hydrogen-1) therefore differ by the ratios of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by a factor of 1.000272. In astronomical observation, this corresponds to a blue Doppler shift of 0.000272 times the speed of light, or 81.6 km/s.[6]
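The factor 1.000272 can be checked directly from the two reduced masses. The short sketch below uses standard nucleus-to-electron mass ratios (about 1836.15 for the proton and 3670.48 for the deuteron, textbook values not quoted in this article):

    mp_over_me = 1836.15   # proton-to-electron mass ratio
    md_over_me = 3670.48   # deuteron-to-electron mass ratio

    # The Rydberg constant scales with the reduced mass, mu = M / (M + me) in electron-mass units
    R_H = mp_over_me / (mp_over_me + 1)
    R_D = md_over_me / (md_over_me + 1)

    ratio = R_D / R_H
    print(ratio)                    # ≈ 1.000272, the isotope shift factor quoted above
    print((ratio - 1) * 2.998e5)    # ≈ 81.6 km/s equivalent Doppler velocity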
The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy,[1] and in rotational spectra such as microwave spectroscopy because the reduced mass of the deuterium is markedly higher than that of protium.
Deuterium and Big Bang nucleosynthesis
Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium.
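The seven-to-one figure can be reproduced by a rough back-of-the-envelope estimate. The sketch below uses standard textbook inputs not given in this article (a neutron-proton mass difference of 1.293 MeV, a weak freeze-out temperature of roughly 0.75 MeV, a free-neutron mean lifetime of about 880 s, and roughly 200 s elapsing before deuterium can survive):

    import math

    delta_m = 1.293    # neutron-proton mass difference, MeV
    T_freeze = 0.75    # approximate weak freeze-out temperature, MeV
    tau_n = 880.0      # free-neutron mean lifetime, s
    t_nuc = 200.0      # rough time before deuterium can survive, s

    n_over_p = math.exp(-delta_m / T_freeze)   # Boltzmann factor at freeze-out, about 1/6
    n_over_p *= math.exp(-t_nuc / tau_n)       # free neutrons decay until nucleosynthesis
    print(1 / n_over_p)                        # ≈ 7 protons per neutron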
Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore any deuterium that formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed the formation of any helium-4 until the universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused to helium). However, very shortly thereafter, at twenty minutes after the Big Bang, the universe became too cool for any further nuclear fusion and nucleosynthesis to occur. At this point the elemental abundances were nearly fixed, the only change occurring as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decayed.[7] The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (there are no stable nuclei with mass numbers of five or eight), meant that no significant amount of carbon, or of elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as our Sun.
Deuterium occurs in trace amounts naturally as deuterium gas, written 2H2 or D2, but most natural occurrence in the universe is bonded with a typical 1H atom, a gas called hydrogen deuteride (HD or 1H2H).[8]
The existence of deuterium on Earth, elsewhere in the solar system (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there are no known natural processes other than the Big Bang nucleosynthesis, which might have produced deuterium at anything close to the observed natural abundance of deuterium (deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources). There is thought to be little deuterium in the interior of the Sun and other stars, as at temperatures there nuclear fusion reactions that consume deuterium happen much faster than the proton-proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of deuterium seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.
The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang theory over the Steady State theory of the universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 bya.[9] Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in our galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy.[10] In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space.[11]
The abundance of deuterium in the atmosphere of Jupiter has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms; ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter,[12] and this abundance is thought to be close to the primordial solar-system ratio.[3] This is about 17% of the terrestrial deuterium-to-hydrogen ratio of 156 deuterium atoms per million hydrogen atoms.
Cometary bodies such as Comet Hale-Bopp and Halley's Comet have been measured to contain relatively more deuterium (about 200 atoms D per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The measurement of a deuterium amount of 161 atoms D per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that of Earth's oceans, supports the theory that Earth's surface water may be largely comet-derived.[2][3] Most recently, the deuterium/protium (D/H) ratio of 67P/Churyumov-Gerasimenko, as measured by Rosetta, is about three times that of Earth's water, a notably high value.[4] This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin.
Deuterium has also been observed to be concentrated over the mean solar abundance in the other terrestrial planets, in particular Mars and Venus.
Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally-occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods.
In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.
The world's leading supplier of deuterium was Atomic Energy of Canada Limited, in Canada, until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.
Physical properties
The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the hydrogen analogs; for example, D2O is more viscous than H2O.[13] Chemically, deuterium behaves similarly to ordinary hydrogen, but there are differences in bond energy and length for compounds of heavy hydrogen isotopes which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in hydrogen, and these differences are enough to make significant changes in biological reactions.
Deuterium can replace the normal hydrogen in water molecules to form heavy water (D2O), which is about 10.6% denser than normal water (enough that ice made from it sinks in ordinary water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water (though they grow more slowly).[14] Consumption of heavy water does not pose a health threat to humans; it is estimated that a 70 kg person might drink 4.8 liters of heavy water without serious consequences.[15] Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.
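The 4.8-liter figure can be put against the 25% substitution threshold with a rough estimate. The total-body-water value used below (about 60% of body mass, roughly 42 L for a 70 kg adult) is a standard physiological approximation, not taken from this article, so treat this as an illustrative sketch only:

```python
# Rough sketch: what fraction of body water would 4.8 L of D2O replace?
body_mass_kg = 70.0
body_water_fraction = 0.60  # ~60% of body mass is water (assumed, standard value)
body_water_liters = body_mass_kg * body_water_fraction  # ~42 L

d2o_consumed_liters = 4.8
substitution = d2o_consumed_liters / body_water_liters
print(f"Approximate body-water substitution: {substitution:.0%}")
# ~11%, well below the ~25% level at which cell-division problems appear,
# consistent with the claim that 4.8 L has no serious consequences.
```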
Quantum properties
The triplet deuteron is barely bound at EB = 2.23 MeV, and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of ~60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron-proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.[16]
Nuclear properties (the deuteron)
Deuteron mass and radius
The nucleus of deuterium is called a deuteron. It has a mass of 2.013553212724(78) u.[17] The charge radius of the deuteron is 2.1402(28) fm.[18]
Spin and energy
Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons. (2H, 6Li, 10B, 14N, 180mTa; also, the long-lived radioactive nuclides 40K, 50V, 138La, 176Lu occur naturally.) Most odd-odd nuclei are unstable with respect to beta decay, because the decay products are even-even and therefore more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle, which would require one of the two identical particles with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, primarily because of the increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and the dineutron to be unstable.
The proton and neutron making up deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.
Isospin singlet state of the deuteron
Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has an electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T).
Isospin is an SU(2) symmetry, like ordinary spin, and is completely analogous to it. The proton and neutron form an isospin doublet, with a "down" state (↓) being a neutron and an "up" state (↑) being a proton.
A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is
$\frac{1}{\sqrt{2}}\big(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\big).$
This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is
$|\uparrow\uparrow\rangle, \qquad \frac{1}{\sqrt{2}}\big(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\big), \qquad |\downarrow\downarrow\rangle$
and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons (the diproton), and a nucleus with two neutrons (the dineutron). The latter two nuclei are not stable, and therefore this isospin-triplet state of deuterium is not stable either: it is indeed a highly excited state of deuterium.
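The exchange (anti)symmetry of these states can be verified mechanically; a small sketch in a two-nucleon product basis:

```python
import numpy as np

# One-nucleon isospin basis: "up" = proton, "down" = neutron.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def two_body(a, b):
    """Tensor-product state |a b> of two nucleons."""
    return np.kron(a, b)

# Operator exchanging the two nucleons in the 4-dim product basis.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

singlet = (two_body(up, down) - two_body(down, up)) / np.sqrt(2)
triplet_0 = (two_body(up, down) + two_body(down, up)) / np.sqrt(2)

print(np.allclose(SWAP @ singlet, -singlet))    # True: antisymmetric
print(np.allclose(SWAP @ triplet_0, triplet_0)) # True: symmetric
```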
Approximated wavefunction of the deuteron
The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative).
The deuteron, being an isospin singlet, is antisymmetric under nucleon exchange due to isospin, and therefore must be symmetric under the double exchange of spin and location. Therefore it can be in either of the following two different states:
• Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry.
• Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry.
In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower the orbital angular momentum, the lower the energy. Therefore the lowest possible energy state has s = 1, l = 0.
In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore the lowest possible energy state has s = 0, l = 1.
Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s =1, l = 0 state.
The same considerations lead to the possible states of an isospin triplet having s = 0, l = even or s = 1, l = odd. Thus the state of lowest energy has s = 1, l = 1, higher than that of the isospin singlet.
The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons involves a spin-orbit coupling that mixes different s and l states. That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as s = 1, l = 0 may become a state of s = 1, l = 2. Parity is still constant in time, so these do not mix with odd-l states (such as s = 0, l = 1). Therefore the quantum state of the deuteron is a superposition (a linear combination) of the s = 1, l = 0 state and the s = 1, l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1. This is the total spin of the deuterium nucleus.
To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2.
Magnetic and electric multipoles
In order to find the deuterium magnetic dipole moment µ theoretically, one uses the formula for a nuclear magnetic moment

$\mu = \frac{1}{j+1}\,\langle (l,s),j,m_j=j\,|\,\vec{\mu}\cdot\vec{j}\,|\,(l,s),j,m_j=j\rangle$

with

$\vec{\mu} = g^{(l)}\vec{l} + g^{(s)}\vec{s},$

where g(l) and g(s) are the orbital and spin g-factors of the nucleons.
Since the proton and neutron have different values for g(l) and g(s), one must separate their contributions. Each gets half of the deuteron orbital angular momentum $\vec{l}$ and spin $\vec{s}$. One arrives at
$\mu = \frac{1}{j+1}\,\big\langle (l,s),j,m_j=j\,\big|\,\big(\tfrac{1}{2}\vec{l}\,g^{(l)}_p + \tfrac{1}{2}\vec{s}\,(g^{(s)}_p + g^{(s)}_n)\big)\cdot\vec{j}\,\big|\,(l,s),j,m_j=j\big\rangle$
where subscripts p and n stand for the proton and neutron, and g(l)n = 0.
By using the same angular momentum identities as above and the value g(l)p = µN, we arrive at the following result, in nuclear magneton units:
$\mu = \frac{1}{4(j+1)}\Big[(g^{(s)}_p + g^{(s)}_n)\big(j(j+1) - l(l+1) + s(s+1)\big) + \big(j(j+1) + l(l+1) - s(s+1)\big)\Big]$
For the s = 1, l = 0 state (j = 1), we obtain
$\mu = \frac{1}{2}\big(g^{(s)}_p + g^{(s)}_n\big) = 0.879$
For the s = 1, l = 2 state (j = 1), we obtain
$\mu = -\frac{1}{4}\big(g^{(s)}_p + g^{(s)}_n\big) + \frac{3}{4} = 0.310$
The measured value of the deuteron magnetic dipole moment is 0.857 µN, which is 97.5% of the 0.879 µN value obtained by simply adding the moments of the proton and neutron. This suggests that the state of the deuteron is indeed, to a good approximation, the s = 1, l = 0 state, in which both nucleons spin in the same direction but their magnetic moments subtract because of the neutron's negative moment.
The fact that the experimental value is slightly lower than the result of simply adding the proton and (negative) neutron moments shows that deuterium is actually a linear combination of a dominant s = 1, l = 0 state with a slight admixture of the s = 1, l = 2 state.
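Both model values and the implied d-state admixture can be reproduced numerically. The free-nucleon spin g-factors used below (g(s)p ≈ 5.586, g(s)n ≈ -3.826) are standard values not quoted in the text, and the two-state mixing formula is the simplest possible ansatz:

```python
# Deuteron magnetic moment (in nuclear magnetons) from the formulas above.
g_s_p = 5.5857   # proton spin g-factor (standard value, assumed here)
g_s_n = -3.8261  # neutron spin g-factor (standard value, assumed here)

mu_s = 0.5 * (g_s_p + g_s_n)           # s = 1, l = 0 state -> ~0.880
mu_d = -0.25 * (g_s_p + g_s_n) + 0.75  # s = 1, l = 2 state -> ~0.310
print(f"mu(l=0) = {mu_s:.3f}, mu(l=2) = {mu_d:.3f}")

# Admixture p of the l = 2 state reproducing the measured moment,
# assuming mu = (1 - p) * mu(l=0) + p * mu(l=2):
mu_measured = 0.857
p = (mu_s - mu_measured) / (mu_s - mu_d)
print(f"implied l=2 admixture: {p:.1%}")  # roughly 4%
```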
The electric dipole is zero as usual.
The measured electric quadrupole moment of the deuteron is 0.2859 e·fm2. While the order of magnitude is reasonable, since the deuteron radius is of the order of 1 femtometer (see above) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) but does get a contribution from a term mixing the l = 0 and l = 2 states, because the electric quadrupole operator does not commute with angular momentum.
The latter contribution is dominant in the absence of a pure l = 0 contribution, but cannot be calculated without knowing the exact spatial form of the nucleon wavefunction inside the deuteron.
Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.
[Figure: Ionized deuterium in a fusor reactor giving off its characteristic pinkish-red glow]
[Figure: Emission spectrum of an ultraviolet deuterium arc lamp]
Deuterium has a number of commercial and scientific uses. These include:
Nuclear reactors
Deuterium is used in heavy water moderated fission reactors, usually as liquid D2O, to slow neutrons without high neutron absorption of ordinary hydrogen.[19] This is a common commercial use for larger amounts of deuterium.
In research reactors, liquid D2 is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments.
Experimentally, deuterium is the most common nuclide used in nuclear fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the D–T reaction. There is an even higher-yield D–3He fusion reaction, though the breakeven point of D–3He is higher than that of most other fusion reactions; together with the scarcity of 3He, this makes it implausible as a practical power source until at least D–T and D–D fusion reactions have been performed on a commercial scale. However, commercial nuclear fusion is not yet an accomplished technology.
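The "high energy yield" of the D-T reaction quoted above can be checked from atomic masses via E = mc²; the mass values below are standard tabulated numbers, not taken from this article:

```python
# Q-value of D + T -> He-4 + n from the mass deficit (E = m c^2).
U_TO_MEV = 931.494  # MeV per atomic mass unit

m_d, m_t = 2.014102, 3.016049    # atomic masses in u (standard values)
m_he4, m_n = 4.002602, 1.008665

q_value = (m_d + m_t - m_he4 - m_n) * U_TO_MEV
print(f"D-T Q-value: {q_value:.1f} MeV")  # ~17.6 MeV per reaction
```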
NMR spectroscopy
Deuterium is useful in hydrogen nuclear magnetic resonance spectroscopy (proton NMR). NMR ordinarily requires compounds of interest to be analyzed in solution. Because of deuterium's nuclear spin properties, which differ from those of the light hydrogen usually present in organic molecules, the NMR spectra of hydrogen/protium are readily distinguishable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned to light hydrogen. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl3) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference.
Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example.
In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from ordinary hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; deuterium-carbon bond vibrations are found in locations free of other signals.
Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes 17O and 18O, are of importance in hydrology for tracing the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (so-called meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to mean latitude). The relative enrichment of the heavy isotopes in rainwater (referenced to mean ocean water), when plotted against temperature, falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also groundwater processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways.[20] The concentration ratio of 2H to 1H is usually indicated with a delta as δ2H, and the geographic patterns of these values are plotted in maps termed isoscapes. Stable isotopes are incorporated into plants and animals, and an analysis of the ratios in a migrant bird or insect can suggest a rough guide to their origins.[21][22]
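The δ2H notation mentioned above is a ratio-of-ratios relative to a standard; a minimal sketch using the VSMOW ratio listed later in this article (the sample value is hypothetical, chosen for illustration):

```python
# delta-2H in permil relative to VSMOW, as used in isotope hydrology.
R_VSMOW = 155.76e-6  # D/H ratio of Vienna Standard Mean Ocean Water

def delta_2h_permil(r_sample: float) -> float:
    """delta-2H = (R_sample / R_standard - 1) * 1000, in permil."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# Illustrative sample: precipitation depleted to a D/H ratio of 140 ppm.
print(f"{delta_2h_permil(140e-6):.0f} permil")  # about -101 permil
# The global meteoric water line mentioned above is approximately
# delta-2H = 8 * delta-18O + 10 (Craig's classic relation).
```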
Contrast properties
Neutron scattering techniques particularly profit from availability of deuterated samples: The H and D cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of ordinary hydrogen is its large incoherent neutron cross section, which is nil for D. The substitution of deuterium atoms for hydrogen atoms thus reduces scattering noise.
Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. Since hydrogen (and deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility,[23] fill a niche in many studies of macromolecules in biology and many other areas.
Nuclear weapons
This is discussed below. It is notable that although most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements, such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion that occurs in so-called hydrogen bombs, requires heavy hydrogen (either tritium or deuterium, or both) in order for the process to work.
Deuterated drugs
Main article: deuterated drug
Suggested neurological effects of natural abundance variation
The natural deuterium content of water has been suggested, on the basis of preliminary correlative epidemiology, to influence the incidence of affective disorder-related pathophysiology and major depression, possibly mediated by serotonergic mechanisms.[24]
Suspicion of lighter element isotopes
The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. The prevailing theory at the time, however, was that the isotopes were due to the existence of differing numbers of "nuclear electrons" in different atoms of an element. It was expected that hydrogen, with a measured average atomic mass very close to 1 u, the known mass of the proton, always had a nucleus composed of a single proton (a known particle), and therefore could not contain any nuclear electrons without losing its charge entirely. Thus, hydrogen could have no heavy isotopes.
Deuterium detected
[Figure: Harold Urey]
Deuterium was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen down to 1 mL of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards in Washington, D.C. (now the National Institute of Standards and Technology). The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boil-off technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.[25][26]
Naming of the isotope and Nobel Prize
Urey created the names protium, deuterium, and tritium in an article published in 1934. The name deuterium is based in part on advice from G. N. Lewis, who had proposed the name "deutium". It is derived from the Greek deuteros ("second"), and the nucleus was to be called "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer chose. Some British chemists, like Ernest Rutherford, wanted the isotope to be called "diplogen", from the Greek diploos ("double"), and the nucleus to be called "diplon".[1]
The amount inferred for the normal abundance of this heavy isotope of hydrogen was so small (only about 1 atom in 6,400 hydrogen atoms in ocean water, or 156 deuterium atoms per million hydrogen atoms) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it had not been experimentally suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis had prepared the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory, but when the neutron was reported, making deuterium's existence more explainable, deuterium won Urey the Nobel Prize in Chemistry in 1934. Lewis was embittered by being passed over for this recognition, given to his former student.[1]
"Heavy water" experiments in World War II[edit]
Main article: Heavy water
Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to England, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums.[27][28]
During World War II, Nazi Germany was known to be conducting experiments using heavy water as a moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow Germany to produce plutonium for an atomic bomb. Ultimately this led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.
After World War II ended, the Allies discovered that Germany had not put as much serious effort into the program as had previously been thought. The Germans had been unable to sustain a chain reaction and had completed only a small, partly built experimental reactor (which had been hidden away). By the end of the war, they did not even have a fifth of the amount of heavy water needed to run the reactor, partly due to the Norwegian heavy water sabotage operation. However, even had the Germans succeeded in getting a reactor operational (as the U.S. did with a graphite reactor in late 1942), they would still have been at least several years away from developing an atomic bomb, even with maximal effort. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and the U.S.S.R., for example.
Deuterium in thermonuclear weapons
[Figure: A view of the Sausage device casing of the Ivy Mike hydrogen bomb, with its instrumentation and cryogenic equipment attached. The bomb held a cryogenic Dewar flask with room for as much as 160 kilograms of liquid deuterium and was 20 feet tall; note the seated man at the right of the photo for scale.]
The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952 was the first fully successful "hydrogen bomb" (thermonuclear bomb): the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage. The Ivy Mike bomb was a factory-like building rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask (cryostat) held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). A conventional atomic bomb (the "primary") at one end was used to create the conditions of extreme temperature and pressure needed to set off the thermonuclear reaction.
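The 160 kg figure is consistent with the liquid-deuterium density given in the data section later in this article; a one-line check:

```python
# Mass of liquid deuterium filling the Ivy Mike cryostat volume.
DENSITY_LIQUID_D2 = 162.4  # kg/m^3, from the data section below
volume_m3 = 1.0            # about 1000 liters

print(f"{DENSITY_LIQUID_D2 * volume_m3:.0f} kg")  # ~162 kg, matching ~160 kg
```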
Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons.
Data for elemental deuterium
• Formula: D2 or 2H2
• Density: 0.180 kg/m3 at STP (0 °C, 101.325 kPa).
• Atomic weight: 2.0141017926 u.
• Mean abundance in ocean water (from VSMOW) 155.76 ± 0.1 ppm (a ratio of 1 part per approximately 6420 parts), that is, about 0.015% of the atoms in a sample (by number, not weight)
Data at approximately 18 K for D2 (triple point):
• Density:
• Liquid: 162.4 kg/m3
• Gas: 0.452 kg/m3
• Viscosity: 12.6 µPa·s at 300 K (gas phase)
• Specific heat capacity at constant pressure cp:
• Solid: 2,950 J/(kg·K)
• Gas: 5,200 J/(kg·K)
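Two of the figures above can be cross-checked against each other; the H2 density used for comparison (~0.0899 kg/m³ at STP) is a standard value, not from this list:

```python
# Check: 155.76 ppm corresponds to "1 part per approximately 6420 parts".
abundance_ppm = 155.76
print(f"1 part per {1e6 / abundance_ppm:.0f}")  # ~6420

# Check: D2 at STP should be roughly twice as dense as H2.
H2_DENSITY = 0.0899  # kg/m^3 at STP (standard value, assumed)
print(f"D2/H2 density ratio: {0.180 / H2_DENSITY:.2f}")  # ~2.0
```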
An antideuteron is the antiparticle of the nucleus of deuterium, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN[29] and the Alternating Gradient Synchrotron at Brookhaven National Laboratory.[30] A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but as of 2005 antideuterium has not yet been created. The proposed symbol for antideuterium is D, that is, D with an overbar.[31]
References
1. ^ a b c Dan O'Leary "The deeds to deuterium" Nature Chemistry 4, 236 (2012). doi:10.1038/nchem.1273. "Science: Deuterium v. Diplogen". Time. 19 February 1934.
2. ^ a b Hartogh, Paul; Lis, Dariusz C.; Bockelée-Morvan, Dominique; De Val-Borro, Miguel; Biver, Nicolas; Küppers, Michael; Emprechtinger, Martin; Bergin, Edwin A. et al. (2011). "Ocean-like water in the Jupiter-family comet 103P/Hartley 2". Nature 478 (7368): 218–220. Bibcode:2011Natur.478..218H. doi:10.1038/nature10519. PMID 21976024.
3. ^ a b c Hersant, Franck; Gautier, Daniel; Hure, Jean‐Marc (2001). "A Two‐dimensional Model for the Primordial Nebula Constrained by D/H Measurements in the Solar System: Implications for the Formation of Giant Planets". The Astrophysical Journal 554 (1): 391. Bibcode:2001ApJ...554..391H. doi:10.1086/321355. see fig. 7. for a review of D/H ratios in various astronomical objects
4. ^ a b Altwegg, K.; Balsiger, H.; Bar-Nun, A.; Berthelier, J. J.; et al. (2014). "67P/Churyumov-Gerasimenko, a Jupiter family comet with a high D/H ratio". Science. doi:10.1126/science.1261952. Retrieved 2014-12-12.
5. ^ "§ IR-3.3.2 Provisional Recommendations". Nomenclature of Inorganic Chemistry. Chemical Nomenclature and Structure Representation Division, IUPAC. Retrieved 2007-10-03.
6. ^ Hébrard, G.; Péquignot, D.; Vidal-Madjar, A.; Walsh, J. R.; Ferlet, R. (7 Feb 2000), Detection of deuterium Balmer lines in the Orion Nebula
7. ^ Weiss, Achim. "Equilibrium and change: The physics behind Big Bang Nucleosynthesis". Einstein Online. Retrieved 2007-02-24.
8. ^ IUPAC Commission on Nomenclature of Inorganic Chemistry (2001). "Names for Muonium and Hydrogen Atoms and their Ions" (PDF). Pure and Applied Chemistry 73 (2): 377–380. doi:10.1351/pac200173020377.
9. ^ "Cosmic Detectives". The European Space Agency (ESA). 2 April 2013. Retrieved 2013-04-15.
10. ^ NASA page on FUSE satellite
11. ^ Graph of deuterium with distance in our galactic neighborhood. See also Linsky, J. L.; Draine, B. T.; Moos, H. W.; Jenkins, E. B.; Wood, B. E.; Oliviera, C.; Blair, W. P.; Friedman, S. D.; Knauth, D.; Lehner, N.; Redfield, S.; Shull, J. M.; Sonneborn, G.; Williger, G. M. (2006). "What is the Total Deuterium Abundance in the Local Galactic Disk?". The Astrophysical Journal 647: 1106. doi:10.1086/505556.
12. ^ Lellouch, E; Bézard, B.; Fouchet, T.; Feuchtgruber, H.; Encrenaz, T.; De Graauw, T. (2001). "The deuterium abundance in Jupiter and Saturn from ISO-SWS observations". Astronomy & Astrophysics 670 (2): 610–622. Bibcode:2001A&A...370..610L. doi:10.1051/0004-6361:20010259.
15. ^ Attila Vertes, ed. (2003). "Physiological effect of heavy water". Elements and isotopes: formation, transformation, distribution. Dordrecht: Kluwer. pp. 111–112. ISBN 978-1-4020-1314-0.
16. ^ Neutron-Proton Scattering. (PDF). Retrieved on 2011-11-23.
17. ^ 2002 CODATA recommended value. Retrieved on 2011-11-23.
19. ^ See neutron cross section#Typical cross sections
20. ^ "Oxygen – Isotopes and Hydrology". SAHRA. Retrieved 2007-09-10.
21. ^ West, Jason B. (2009). Isoscapes: Understanding movement, pattern, and process on Earth through isotope mapping. Springer.
22. ^ Hobson, KA; Van Wilgenburg, SL; Wassenaar, LI; Larson, K (2012). "Linking Hydrogen (δ2H) Isotopes in Feathers and Precipitation: Sources of Variance and Consequences for Assignment to Isoscapes.". PLoS ONE 7 (4): e35137. Bibcode:2012PLoSO...735137H. doi:10.1371/journal.pone.0035137.
23. ^ "NMI3 - Deuteration". NMI3. Retrieved 2012-01-23.
24. ^ Strekalova T., Evans M. et al. (2014). "Deuterium content of water increases depression susceptibility: The potential role of a serotonin-related mechanism.". Behavioural Brain Research 277: 237–44. doi:10.1016/j.bbr.2014.07.039. PMID 25092571.
25. ^ Brickwedde, Ferdinand G. (1982). "Harold Urey and the discovery of deuterium". Physics Today 35 (9): 34. Bibcode:1982PhT....35i..34B. doi:10.1063/1.2915259.
26. ^ Urey, Harold; Brickwedde, F.; Murphy, G. (1932). "A Hydrogen Isotope of Mass 2". Physical Review 39: 164. Bibcode:1932PhRv...39..164U. doi:10.1103/PhysRev.39.164.
27. ^ Sherriff, Lucy (1 June 2007). "Royal Society unearths top secret nuclear research". The Register. Situation Publishing Ltd. Retrieved 2007-06-03.
28. ^ "The Battle for Heavy Water Three physicists' heroic exploits". CERN Bulletin. European Organization for Nuclear Research. 1 April 2002. Retrieved 2007-06-03.
29. ^ Massam, T; Muller, Th.; Righini, B.; Schneegans, M.; Zichichi, A. (1965). "Experimental observation of antideuteron production". Il Nuovo Cimento 39: 10–14. Bibcode:1965NCimS..39...10M. doi:10.1007/BF02814251.
30. ^ Dorfan, D. E; Eades, J.; Lederman, L. M.; Lee, W.; Ting, C. C. (June 1965). "Observation of Antideuterons". Phys. Rev. Lett. 14 (24): 1003–1006. Bibcode:1965PhRvL..14.1003D. doi:10.1103/PhysRevLett.14.1003.
31. ^ Chardonnet, P; Orloff, Jean; Salati, Pierre (1997). "The production of anti-matter in our galaxy". Physics Letters B 409: 313. arXiv:astro-ph/9705110. Bibcode:1997PhLB..409..313C. doi:10.1016/S0370-2693(97)00870-8.
External links
Wednesday, February 22, 2017
Questions related to the quantum aspects of twistorialization
The progress in the understanding of the classical aspects of the twistor lift of TGD makes it possible to consider in detail the quantum aspects of the twistorialization of TGD, and for the first time an explicit proposal for the part of the scattering diagrams assignable to fundamental fermions emerges.
1. There are several notions of twistor. The twistor space of M4 is T(M4) = M4 × S2 (see this), having projections both to M4 and to the standard twistor space T1(M4), often identified as CP3. T(M4) = M4 × S2 is necessary for the twistor lift of the space-time dynamics. CP2 gives the factor T(CP2) = SU(3)/U(1) × U(1) to the classical twistor space T(H). The quantal twistor space T(M8) = T1(M4) × T(CP2) is assignable to momenta. The possible way out is M8-H duality relating the momentum space M8 (isomorphic to the tangent space of H) and H by mapping the associative and co-associative space-time surfaces in M8 to surfaces in H serving as base spaces of 6-surfaces in T(H): their construction would reduce to holomorphy, in complete analogy with the original idea of Penrose in the case of massless fields.
2. The standard twistor approach has problems. The twistor Fourier transform reduces to an ordinary Fourier transform only in the signature (2,2) for Minkowski space: in this case the twistor space is the real RP3, which can be complexified to CP3. Otherwise the transform must be defined as a residue integral (in fact, p-adically a multiple residue calculus could provide a nice manner to define integrals and could make sense even at the space-time level, making it possible to define the action).
Also the positive Grassmannian requires (2,2) signature. M8-H duality relies on the existence of the decomposition M2 ⊂ M4 = M2 × E2 ⊂ M8. M2 could even depend on position, but M2(x) should define an integrable distribution. There always exists a preferred M2, call it M20, in which the 8-momentum reduces to a light-like M2 momentum. Hence one can apply the 2-D variant of the twistor approach. Now the signature is (1,1) and the spinor basis can be chosen to be real! The twistor space is RP3, allowing complexification to CP3 if light-like complex momenta are allowed, as classical TGD suggests!
3. A further problem of the standard twistor approach is that in M4 it does not work for massive particles. In TGD all particles are massless in the 8-D sense. In M8 the M4 mass squared corresponds to the transversal momentum squared coming from E4 ⊂ M4 × E4 (from CP2 in H). In particular, the Dirac action cannot contain any mass term since it would break chiral invariance.
Furthermore, the ordinary twistor amplitudes are holomorphic functions of the helicity spinors λi and have no dependence on λ̃i: they carry no information about particle masses! Only the momentum-conserving delta function depends on the masses. These amplitudes would define as such the M4 parts of the twistor amplitudes for particles massive in the TGD sense. The simplest 4-fermion amplitude is unique.
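As background for the statement about holomorphic amplitudes, recall the standard spinor-helicity decomposition (textbook material, not specific to TGD): a momentum factorizes into helicity spinors precisely when it is light-like,

$p_{a\dot{a}} = p_\mu \sigma^\mu_{a\dot{a}} = \lambda_a \tilde{\lambda}_{\dot{a}} \quad\Longleftrightarrow\quad \det(p_{a\dot{a}}) = p_\mu p^\mu = 0,$

so an amplitude of the schematic form

$\mathcal{A} = f(\lambda_1,\dots,\lambda_n)\,\delta^{(4)}\Big(\sum_i \lambda_i \tilde{\lambda}_i\Big)$

depends on the masses only through the momentum-conserving delta function, exactly as stated above.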
The twistor approach gives excellent hopes for the construction of the scattering amplitudes in ZEO. The construction would split into two pieces corresponding to the orbital degrees of freedom in the "world of classical worlds" (WCW) and to the spin degrees of freedom of WCW, that is spinors, which correspond to second quantized induced spinor fields at the space-time surface (actually at string world sheets, either at the fundamental level or for the effective action implied by strong form of holography (SH)).
1. At the WCW level there is a perturbative functional integral over small deformations of the 3-surface to which the space-time surface is associated. The strongest assumption is that this 3-surface corresponds to a maximum of the real part of the action and to a stationary phase for its imaginary part: a minimal surface extremal of Kähler action would be in question. A more general but number theoretically problematic option is that an extremal of the sum of the Kähler action and a volume term is in question.
By the Kähler geometry of WCW the functional integral reduces to a sum over contributions from preferred extremals, with the fermionic scattering amplitude multiplied by the ratio Xi/X, where X = ∑i Xi is the sum of the action exponentials for the maxima. The ratios of exponents are however number theoretically problematic.
Number theoretical universality is satisfied if one assigns to each maximum independent zero energy states: with this assumption ∑i Xi reduces to a single Xi and the dependence on the action exponentials becomes trivial! ZEO allows this. The dependence on the coupling parameters of the action, essential for the discretized coupling constant evolution, enters only via the boundary conditions at the ends of the space-time surface at the boundaries of CD.
Quantum criticality of TGD demands that the sum over the loops associated with the functional integral over WCW vanishes, and strong form of holography (SH) suggests that the integral over 4-surfaces reduces to one over string world sheets and partonic 2-surfaces corresponding to preferred extremals, for which the WCW coordinates parametrizing them belong to the extension of rationals defining the adele. Also the intersections of the real and the various p-adic space-time surfaces belong to this extension.
2. The second piece corresponds to the construction of the twistor amplitude from fundamental 4-fermion amplitudes. The diagrams consist of networks of light-like orbits of partonic 2-surfaces, whose union with the 3-surfaces at the ends of CD is connected and defines a boundary condition for the preferred extremals and, at the same time, the topological scattering diagram.
Fermionic lines correspond to boundaries of string world sheets. Fermion scattering at partonic 2-surfaces at which 3 partonic orbits meet is the analog of a 3-vertex in the sense of Feynman, and the fermions scatter classically. There is no local 4-vertex. This scattering is assumed to be described by the simplest 4-fermion twistor diagram. These can be fused to form more complex diagrams. Fermionic lines run along the partonic orbits defining the topological diagram.
3. Number theoretic universality suggests that the scattering amplitudes have an interpretation as representations of computations. All space-time surfaces giving rise to the same computation would be equivalent, and tree diagrams correspond to the simplest computation. If the action exponentials do not appear in the amplitudes as weights, this could make sense, but it would require a huge symmetry based on two moves. One could glide the 4-vertex at the end of an internal fermion line along the fermion line, so that one would eventually get the analog of a self-energy loop, which should allow snipping away. An argument is developed stating that this symmetry is possible if the preferred M20, for which the 8-D momentum reduces to a light-like M2 momentum with a unique direction, is the same along the entire fermion line, which can wander along the topological graph.
The vanishing of topological loops would correspond to the closedness of the diagrams in what might be called BCFW homology. The boundary operation involves removal of a BCFW bridge and entangled removal of a fermion pair. The latter operation forces loops. There would be no BCFW bridges, and entangled removal should give zero. Indeed, applied to the proposed four-fermion vertex, entangled removal forces it to correspond to forward scattering, for which the proposed twistor amplitude vanishes.
To sum up, the twistorial approach leads to a proposal for an explicit construction of scattering amplitudes for the fundamental fermions. Bosons and fermions as elementary particles are bound states of fundamental fermions assignable to pairs of wormhole contacts carrying fundamental fermions at their throats. Clearly, this description is analogous to a quark-level description of hadrons. Yangian symmetry with multilocal generators is expected to be crucial for the construction of the many-fermion states giving rise to elementary particles. The problems of the standard twistor approach find a nice solution in terms of M8-H duality, 8-D masslessness, and the holomorphy of the twistor amplitudes in λi and their independence of λ̃i.
See the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix".
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Monday, February 13, 2017
A new view about color, color confinement, and twistors
In my humble opinion the twistor approach to the scattering amplitudes is plagued by some mathematical problems. Whether this is only my personal problem is not clear (notice that this posting is a corrected version of an earlier one).
1. As Witten shows, the twistor transform is problematic in signature (1,3) for Minkowski space, since the bi-spinor μ playing the role of momentum is complex. Instead of defining the twistor transform as an ordinary Fourier integral, one must define it as a residue integral. In signature (2,2) for space-time the problem disappears, since the spinors μ can be taken to be real.
2. The twistor Grassmannian approach also works nicely for the (2,2) signature, and one ends up with the notion of positive Grassmannians, which are real Grassmann manifolds. Could it be that something is wrong with the ordinary view about twistorialization, rather than only with my understanding of it?
3. For M4 the twistor space should be the non-compact SU(2,2)/SU(2,1) × U(1) rather than CP3 = SU(4)/SU(3) × U(1), which it is usually taken to be. I do not know whether this is only a matter of short-hand notation or a signal of a deeper problem.
4. Twistorialization does not force SUSY but strongly suggests it. The superspace formalism allows one to treat all helicities at the same time, and this is very elegant. This however forces Majorana spinors in M4 and breaks fermion number conservation in D=4. LHC does not support N=1 SUSY. Could the interpretation of SUSY be somehow wrong? TGD seems to allow broken SUSY but with separate conservation of baryon and lepton numbers.
In the number theoretic vision something rather unexpected emerges, and I will propose that this unexpected finding might allow solving the above problems and more: understanding color, and even color confinement, number theoretically. First of all, a new view about color degrees of freedom emerges at the level of M8.
1. One can always find a decomposition M8 = M20 × E6 so that the complex light-like quaternionic 8-momentum restricts to M20. The preferred octonionic imaginary unit represents the direction of the imaginary part of the quaternionic 8-momentum. The action of G2 on this momentum is trivial. Number theoretic color disappears with this choice. For instance, this could take place for a hadron but not for partons, which have transversal momenta.
2. One can also consider the situation in which one has localized the 8-momenta only to M4 = M20 × E2. The distribution for the choices of E2 ⊂ M20 × E2 = M4 is a wave function in CP2. Octonionic SU(3) partial waves in the space CP2 of the choices of M20 × E2 would correspond to color partial waves in H. The same interpretation is also behind the M8-H correspondence.
3. The transversal quaternionic light-like momenta in E2 ⊂ M20 × E2 give rise to a wave function in the transversal momenta. Intriguingly, the partons in the quark model of hadrons have precisely defined longitudinal momenta, while only the size scale of the transversal momenta can be specified.
The introduction of the twistor sphere of T(CP2) allows describing electroweak charges and brings in the CP2 helicity, identifiable as em charge, giving to the mass squared a contribution proportional to Qem2, so that one could understand the electromagnetic mass splitting geometrically.
The physically motivated assumption is that the string world sheets carrying the data that determine the modes of the induced spinor fields carry vanishing W fields and also a vanishing generalized Kähler form J(M4) + J(CP2). Em charge is the only remaining electroweak degree of freedom. The identification as the helicity assignable to the T(CP2) twistor sphere is natural.
4. In the general case the M2 component of the momentum would be massive, and the mass would be equal to the mass assignable to the E6 degrees of freedom. One can however always find an M20 × E6 decomposition in which the M2 momentum is light-like. The naive expectation is that the twistorialization in terms of M2 works only if the M2 momentum is light-like, possibly in the complex sense. This however allows only forward scattering: this is true for complex M2 momenta and even in the M4 case.
The twistorial 4-fermion scattering amplitude is however holomorphic in the helicity spinors λi and has no dependence on λ̃i. Therefore it carries no information about the M2 mass! Could M2 momenta be allowed to be massive? If so, twistorialization might make sense for massive fermions!
M20 momentum deserves a separate discussion.
1. A sharp localization of the 8-momentum to M20 means vanishing E2 momentum, so that the action of U(2) would become trivial: the electroweak degrees of freedom would simply disappear, which is not the same thing as having vanishing em charge (the wave function in the T(CP2) twistorial sphere S2 would be constant). Neither M20 localization nor localization to a single M4 (localization in CP2) looks plausible physically - consider only the size scale of CP2. For generic CP2 spinors this is impossible, but the covariantly constant right-handed neutrino spinor mode has no electroweak quantum numbers: this would most naturally mean a constant wave function in the CP2 twistorial sphere.
For the preferred extremals of the twistor lift of TGD, either the M4 or the CP2 twistor sphere can effectively collapse to a point. This would mean the disappearance of the degrees of freedom associated with M4 helicity or with the electroweak quantum numbers.
2. The localization to M4 ⊃ M20 is possible for the tangent space of a quaternionic space-time surface in M8. This could correlate with the fact that neither leptonic nor quark-like induced spinors carry color as a spin-like quantum number. Color would emerge only at the level of H and M8 as color partial waves in WCW and would require delocalization in the CP2 cm coordinate of the partonic 2-surface. Note that also the integrable local decompositions M4 = M2(x) × E2(x), suggested by the general solution ansätze for the field equations, are possible.
3. Could it be possible to perform a measurement localizing the state precisely to a fixed M20, so that the complex momentum is light-like but the color degrees of freedom disappear? This does not mean that the state corresponds to a color singlet wave function! Can one say that the measurement eliminating the color degrees of freedom corresponds to color confinement? Note that the subsystems of the system need not be color singlets, since their momenta need not be complex massless momenta in M20. Classically this makes sense in many-sheeted space-time. Colored states would always be partons in a color singlet state.
4. At the level of H also leptons carry color partial waves, neutralized by Kac-Moody generators, and I have proposed that pion-like bound states of color octet excitations of leptons explain the so-called lepto-hadrons. Only the covariantly constant right-handed neutrino is an exception, as the only color singlet fermionic state carrying vanishing 4-momentum and living in all possible M20:s, and it might have a special role as a generator of supersymmetry acting on states in all quaternionic subspaces M4.
5. Actually, already the p-adic mass calculations performed more than two decades ago forced one to seriously consider the possibility that particle momenta correspond to their projections to M20 ⊂ M4. This choice does not break Poincare invariance if one introduces a moduli space for the choices of M20 ⊂ M4, and the selection of M20 could define the quantization axes of energy and spin. If the tips of the CD are fixed, they define a preferred time direction assignable to the preferred octonionic real unit, and the moduli space is just S2. The analog of twistor space at the space-time level could be understood as T(M4) = M4 × S2, and this one must assume, since otherwise the induction of the metric does not make sense.
What happens to the twistorialization at the level of M8 if one accepts that only M20 momentum is sharply defined?
1. What happens to the conformal group SO(4,2) and its covering SU(2,2) when M4 is replaced with M20 ⊂ M8? Translations and special conformal transformations each span 2 dimensions; boosts and scalings define the 1-D groups SO(1,1) and R respectively. Clearly, the group is the 6-D group SO(2,2), as one might have guessed (a dimension count is sketched after this list). One can of course ask whether the 2-D conformal symmetry extends to conformal symmetries characterized by a hypercomplex Virasoro algebra.
2. The sigma matrices are by 2-dimensionality real (σ0 and σ3, essentially representations of the real and imaginary octonionic units), so that spinors can be chosen to be real. Reality is also crucial in signature (2,2), where the standard twistor approach works nicely and leads to a 3-D real twistor space.
Now the twistor space is replaced with the real variant of SU(2,2)/SU(2,1) × U(1), equal to SO(2,2)/SO(2,1), which is the 3-D projective space RP3 - the real variant of the twistor space CP3, which leads to the notion of the positive Grassmannian: whether the complex Grassmannian really allows the analog of positivity is not clear to me. For the complex momenta predicted by TGD one can consider the complexification of this space to CP3 rather than SU(2,2)/SU(2,1) × U(1). For some reason the possible problems associated with the signature of SU(2,2)/SU(2,1) × U(1) are not discussed in the literature, and people always talk about CP3. Is there a real problem, or is this indeed something totally trivial?
3. SUSY is strongly suggested by the twistorial approach. The problem is that it requires Majorana spinors, leading to a loss of fermion number conservation. If one has D=2 only effectively, the situation changes. Since spinors in M2 can be chosen to be real, one can have SUSY in this sense without losing fermion number conservation! As proposed earlier, covariantly constant right-handed neutrino modes could generate the SUSY, but it could also be possible to have SUSY generated by all fermionic helicity states. This SUSY would however be broken.
4. The selection of M20 could correspond at the space-time level to a localization of the spinor modes to string world sheets. Could the condition that the modes of the induced spinors at string world sheets are expressible using a real spinor basis imply this localization? Whether the localization takes place at the fundamental level or only for the effective action implied by SH is a question to be settled. The latter option looks more plausible.
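A quick dimension count supports the identification of the conformal group made in point 1 above: for flat space of signature (p,q) the conformal group is SO(p+1,q+1), so for M2 with p = q = 1

$\dim \mathrm{SO}(2,2) = \frac{(p+q+2)(p+q+1)}{2} = \frac{4\cdot 3}{2} = 6,$

decomposing as 2 translations, 2 special conformal transformations, 1 boost (SO(1,1)) and 1 scaling (R).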
To sum up, these observations suggest a profound re-evaluation of the beliefs related to color degrees of freedom, to color confinement, and to what twistors really are.
For details see the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article Some questions related to the twistor lift of TGD.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Friday, February 10, 2017
How does the twistorialization at imbedding space level emerge?
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Wednesday, February 08, 2017
Twistor lift and the reduction of field equations and SH to holomorphy
It has become clear that twistorialization has very nice physical consequences. But what is the deep mathematical reason for twistorialization? Understanding this might allow gaining new insights into the construction of scattering amplitudes, with space-time surfaces serving as analogs of twistor diagrams.
Penrose's original motivation for twistorialization was to reduce the field equations for massless fields to holomorphy conditions for their lifts to the twistor bundle. Very roughly, one can say that the value of a massless field in space-time is determined by the values of the twistor lift of the field over the twistor sphere, and that the helicity of the massless modes reduces to cohomology and to the values of the conformal weights of the field mode, so that the description applies to all spins.
I want to find the general solution of the field equations associated with the Kähler action lifted to the 6-D Kähler action. One would also like to understand strong form of holography (SH). In TGD, fields in space-time are replaced with the imbedding of space-time as a 4-surface in H. The twistor lift imbeds the twistor space of the space-time surface as a 6-surface into the product of the twistor spaces of M4 and CP2. Following Penrose, these imbeddings should be holomorphic in some sense.
Twistor lift T(H) means that M4 and CP2 are replaced with their 6-D twistor spaces.
1. If S2 for M4 has 2 time-like dimensions, one has 3+3 dimensions, and one can speak about hypercomplex variants of holomorphic functions, with a time-like and a space-like coordinate paired for all three hypercomplex coordinates. For the Minkowskian regions of the space-time surface X4 the situation is the same.
2. For T(CP2) the Euclidian signature of the twistor sphere guarantees this, and one has 3 complex coordinates corresponding to those of S2 and CP2. One can now also pair two real coordinates of S2 with two coordinates of CP2 to get two complex coordinates. For the Euclidian regions of the space-time surface the situation is the same.
Consider now what the general solution could look like. Let us continue to use the shorthand notations S21 = S2(X4), S22 = S2(CP2), S23 = S2(M4).
1. Consider first solutions of type (1,0), so that the coordinates of S22 are constant. One has holomorphy in the hypercomplex sense (the light-like coordinates t-z and t+z correspond to hypercomplex coordinates).
1. The general map T(X4) → T(M4) should be holomorphic in the hypercomplex sense. S21 is in turn identified with S23 by an isometry realized in real coordinates. This could also be seen as holomorphy but with a different imaginary unit. One has an analytical continuation of the map S21 → S23 to a holomorphic map, and holomorphy might allow achieving this rather uniquely. The continued coordinates of S21 correspond to the coordinates assignable to the integrable surface defined by E2(x) in the local M2(x) × E2(x) decomposition of the local tangent space of X4. A similar condition holds true for T(M4). This leaves only M2(x) as dynamical degrees of freedom. Therefore only one holomorphic function, defined by 1-D data at the surface determined by the integrable distribution of M2(x), remains. The 1-D data could correspond to the boundary of the string world sheet.
2. The general map T(X4) → T(CP2) cannot satisfy holomorphy in the hypercomplex sense. One can however provide the integrable distribution of E2(x) with a complex structure and map it holomorphically to CP2. The map is defined by 1-D data.
3. Altogether, 2-D data determine the map determining the space-time surface. These two pieces of 1-D data correspond to 2-D data given at a string world sheet: one would have SH.
2. What about solutions of type (0,1), making sense in the Euclidian regions of space-time? One has ordinary holomorphy in the CP2 sector.
1. The simplest picture is a direct translation of that for the Minkowskian regions. The map S21 → S22 is an isometry regarded as an identification of real coordinates but could also be regarded as holomorphy with a different imaginary unit. The real coordinates can be analytically continued to complex coordinates on both sides, and their imaginary parts define coordinates for a distribution of transversal Euclidian spaces E22(x) on the X4 side and E2(x) on the M4 side. This leaves 1-D data.
2. What about the map to T(M4)? It is possible to map the integrable distribution E22(x) to the corresponding distribution for T(M4) holomorphically in the ordinary sense of the word. This gives 1-D data. Altogether one has 2-D data, which partonic 2-surfaces could carry: one has SH again.
3. The above construction works also for solutions of type (1,1), which might make sense in the Euclidian regions of space-time. It is however essential that the spheres S22 and S23 have real coordinates.
SH would thus emerge automatically from the twistor lift and holomorphy in the proposed sense.
1. Two possible complex units appear in the process. This suggests a connection with quaternion analytic functions, suggested as an alternative manner to solve the field equations. Space-time surface as an associative (quaternionic) or co-associative (co-quaternionic) surface is a further solution ansatz.
Also the integrable decompositions M2(x) × E2(x) resp. E21(x) × E22(x) for Minkowskian resp. Euclidian space-time regions are highly suggestive and would correspond to a foliation by string world sheets and partonic 2-surfaces. This expectation conforms with the number theoretically motivated conjectures.
2. The foliation gives good hopes that the action indeed reduces to an effective action consisting of an area term plus a topological magnetic flux term for suitably chosen string world sheets and partonic 2-surfaces. One should understand whether one must choose the string world sheets to be Lagrangian surfaces for the Kähler form, including also the M4 term. The minimal surface condition could select the Lagrangian string world sheet, which should also carry vanishing classical W fields in order that spinor modes can be eigenstates of em charge.
The points representing the intersections of string world sheets with partonic 2-surfaces, defining punctures, would represent the positions of fermions at partonic 2-surfaces at the boundaries of CD, and these positions should be able to vary. Should one also allow non-Lagrangian string world sheets, or does the space-time surface depend on the choice of the punctures carrying fermion number (quantum classical correspondence)?
3. The alternative option is that any choice of the preferred 2-surfaces produces the same scattering amplitudes. Does this mean that the string world sheet area is a constant for the foliation - perhaps too strong a condition - or could the topological flux term compensate for the change of the area?
The selection of string world sheets and partonic 2-surfaces could indeed also be only a gauge choice. I have considered this option earlier and proposed that it reduces to a symmetry identifiable as a U(1) gauge symmetry for the Kähler function of WCW, allowing the addition to the Kähler action of the real part of a complex function of WCW complex coordinates. The additional term in the Kähler action would compensate for the change of the string world sheet action in SH. For a complex Kähler action it could mean the addition of the entire complex function.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Tuesday, February 07, 2017
Mystery: How Was Ancient Mars Warm Enough for Liquid Water?
The article Mars Mystery: How Was Ancient Red Planet Warm Enough for Liquid Water? tells about a mystery related to the ancient presence of water at the surface of Mars. It is now known that the surface of Mars was once covered with rivers, streams, ponds, lakes and perhaps even seas and oceans. This forces one to consider the possibility that there was once also life on Mars, and that there might still be. There is however a problem. The atmosphere probably contained hundreds of times less carbon dioxide than needed to keep the planet warm enough for liquid water to last. Yet the signatures of flowing water are there. Here is one more mystery to resolve.
Around 2014 I proposed a TGD version of the Expanding Earth Hypothesis, stating that Earth has experienced a geologically fast expansion period in its past. The radius of the Earth's space-time sheet would have increased by a factor of two from its earlier value. Either the p-adic length scale or heff/h=n for the space-time sheet of Earth, or both, would have increased by a factor of 2.
This violent event led to the bursting of the underground seas of Earth to the surface, with the consequence that the rather highly developed lifeforms that had evolved in these reservoirs, shielded from cosmic rays and UV radiation, burst to the surface: the outcome was what is known as the Cambrian explosion. This apparent popping of advanced lifeforms out of nowhere explains why the earlier, less developed forms of these complex organisms have not been found as fossils. I have discussed the model for how life could have evolved in underground water reservoirs here.
The geologically fast weakening of the gravitational force to 1/4 of its value at the surface explains the emergence of gigantic life forms like saurians and even giant crabs. Continents were formed: before this the crust was like the surface of Mars now. The original motivation of EEH was indeed the observation that the continents of the recent Earth seem to fit nicely together if the radius were smaller by a factor 1/2. This is just a step further than Wegener went at his time. The model explains many other facts that are difficult to understand and forces one to give up the Snowball Earth model. The recent mainstream view about Earth before the Cambrian explosion is very different from that provided by EEH. The period of rotation of Earth was 4 times shorter than now - 6 hours - and this would have been visible in the physiology of organisms of that time. Whether it could have left remnants in the physiology and behavior of recently living organisms is an interesting question.
What about Mars? Mars now is very similar to Earth before the expansion. Its radius is one half of Earth's now and therefore the same as the radius of Earth before the Cambrian explosion! Mars is near Earth, so that its distance from the Sun is not very different. Could the recent Mars also contain complex life forms in water reservoirs in its interior? Could Mother Mars (or perhaps Martina, if the red planet is not the masculine warrior but a pregnant mother) give rise to their birth? The water that once appeared at the surface of Mars could have been a temporary leakage. An interesting question is whether the appearance of water might correspond to the same event that increased the radius of Earth by a factor of two.
Magnetism is important for life in TGD based quantum biology. A possible problem is posed by the very weak recent value of the magnetic field of Mars. The value of the dark magnetic field Bend of Earth, deduced from the findings of Blackman about the effects of ELF em fields on the vertebrate brain, has a strength that is 2/5 of the nominal value of BE. Hence the dark MBs of living organisms, perhaps integrating to the dark MB of Earth, seem to be entities distinct from the MB of Earth. Could Mars also have dark magnetic fields?
Schumann resonances might be important for collective aspects of consciousness. In the simplest model for Schumann resonances the frequencies are determined solely by the planet's radius, so for Mars they would be 2 times those of Earth now. The frequency of the lowest Schumann resonance would be 15.6 Hz (a numerical sketch follows below).
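A minimal numerical sketch of this scaling (my illustration; it assumes the simplest cavity model in which the fundamental frequency scales as 1/R and takes the observed terrestrial value 7.8 Hz as input):

```python
# Simplest-model estimate: the Schumann fundamental scales as 1/R,
# so halving the planetary radius doubles the frequency.
R_EARTH = 6371e3   # m
R_MARS = 3390e3    # m, roughly R_EARTH/2 as assumed above

f_earth = 7.8      # Hz, observed lowest Schumann resonance on Earth

print(f"Mars (actual radii): {f_earth * R_EARTH / R_MARS:.1f} Hz")  # ~14.7 Hz
print(f"Mars (half-radius) : {f_earth * 2.0:.1f} Hz")               # 15.6 Hz
```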
For background see the chapters Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life and More Precise TGD Based View about Quantum Biology and Prebiotic Evolution of "Genes and Memes" .
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Monday, February 06, 2017
Chemical qualia as number theoretical qualia?
Certain FB discussions led to the realization that the chemical senses (perception of odours and tastes) might actually be, or at least include, number theoretical sensory qualia providing information about the distribution of Planck constants heff/h=n, identifiable as the order of the Galois group for the extension of rationals characterizing the adeles.
See the article Chemical qualia as number theoretical qualia?.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Thursday, February 02, 2017
Anomaly in neutron lifetime as evidence for the transformation of protons to dark protons
I found a popular article about a very interesting finding related to the neutron lifetime (see this). The neutron lifetime turns out to be about 8 seconds shorter when measured by looking at what fraction of neutrons disappears via decays in a box than when measured by counting the number of protons produced in beta decays of a neutron beam travelling through a given volume. The lifetime of the neutron is about 15 minutes, so the relative lifetime difference is about 8/(15× 60) ≈ 0.9 per cent. The statistical significance is 4 sigma: 5 sigma is accepted as the significance for a finding acceptable as a discovery.
How could one explain the finding? The difference between the methods is that the beam experiment measures only the disappearances of neutrons via beta decays producing protons, whereas the box measurement detects the outcome of all possible decay modes. The experiment suggests two alternative explanations.
1. The neutron has some other decay mode or modes. These are not identified in the box method, since one measures only the number of neutrons in the initial and final state. For instance, in the TGD framework one could think that neutrons can transform to dark neutrons at some rate. But it is extremely improbable that this rate would happen to be just about 1 per cent of the decay rate. Why not 1 millionth? Beta decay must be involved with the process.
Could some fraction of neutrons decay to a dark proton, electron, and neutrino? This mode would not be detected in the beam experiment. No: if one takes seriously the basic assumption that particles with different values of heff/h= n do not appear in the same vertex, the neutron should first transform to a dark neutron. But then the disappearance could also take place without the beta decay of the dark neutron, and the discrepancy would be much larger.
2. The proton produced in the ordinary beta decay of the neutron can however transform to a dark proton not detected in the beam experiment! This would automatically predict that the rate is some reasonable fraction of the beta decay rate.
About 1 per cent of the resulting protons would transform to dark protons. This makes sense! (A numerical sketch follows below.)
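A back-of-the-envelope check (my sketch; the lifetimes below are representative of the published bottle and beam values, not exact figures from the article):

```python
# If the box (bottle) method counts every disappearance channel while the
# beam method counts only decays yielding detectable protons, the fraction
# of protons escaping detection is 1 - tau_bottle/tau_beam.
tau_bottle = 879.6  # s, total neutron lifetime, all channels (representative)
tau_beam = 888.0    # s, lifetime from detected protons only (representative)

hidden = 1.0 - tau_bottle / tau_beam
print(f"lifetime gap   : {tau_beam - tau_bottle:.1f} s")  # ~8 s
print(f"hidden fraction: {100 * hidden:.2f} %")           # ~1 %
```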
What is so nice is that the transformation of protons to dark protons is indeed the basic mechanism of TGD inspired quantum biology! For instance, it would occur in the Pollack effect, in which irradiation of water bounded by a gel phase generates a so-called exclusion zone, which is negatively charged. The TGD explanation is that some fraction of the protons transforms to dark protons at magnetic flux tubes outside the system. The negative charge of DNA and the cell could be due to this mechanism. One also ends up with a model of the genetic code with the analogs of DNA, RNA, tRNA and amino acids represented as triplets of dark protons. The model correctly predicts the numbers of DNA codons coding for a given amino acid. Besides biology, the model has applications to cold fusion and various free energy phenomena.
See the article Two different lifetimes for neutron as evidence for dark protons and chapter New Particle Physics Predicted by TGD: Part I.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Why metabolism and what happens in bio-catalysis?
The TGD view about dark matter also gives a strong grasp of metabolism and bio-catalysis - the key elements of biology.
Why is metabolic energy needed?
The simplest and at the same time most difficult question that an innocent student can ask in biology class is: "Why must we eat?". Or, using more physics oriented language: "Why must we get metabolic energy?". The answer of the teacher might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough to survive.
We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in the TGD Universe, with dark matter identified as phases characterized by heff/h=n.
1. Why would metabolic energy be needed? The intuitive answer is that evolution requires it, and that evolution corresponds to the increase of n=heff/h. To see the answer, notice that the energy scale for the bound states of an atom is proportional to 1/hbar^2, and for a dark atom to 1/hbareff^2 ∝ 1/n^2 (do not confuse this n with the integer n labelling the states of the hydrogen atom!).
2. Dark atoms have smaller binding energies, and their creation by a phase transition increasing the value of n demands a feed of energy - metabolic energy! If the metabolic energy feed stops, n is gradually reduced. The system gets tired, loses consciousness, and eventually dies. Also in the case of cyclotron energies, the positive cyclotron energy is proportional to heff, so that metabolic energy is needed to generate larger heff and the prerequisites for negentropy. In this case one would have very long range negentropic entanglement (NE), whereas dark atoms would correspond to short range NE corresponding to a lower evolutionary level. These entanglements would correspond to gravitational and electromagnetic quantum criticality.
What is remarkable is that the scale of atomic binding energies decreases with n only in dimension D=3. In other dimensions it increases, and in D=4 one cannot even speak of bound states! This can easily be found by a study of the Schrödinger equation for the analog of the hydrogen atom in various dimensions. Life based on metabolism seems to make sense only in spatial dimension D=3. Note however that there are also other quantum states than atomic states, with a different dependence of energy on heff.
3. The weak form of NMP, following from mere adelic physics, makes it analogous to the second law. Could one consider the purely formal generalization of dE = TdS - ... to dE = -TdN - ..., where E refers to metabolic energy and N refers to entanglement negentropy? No: the situation is different. The system is not a closed system; N is not the negative of the thermodynamical entropy S; and E is the metabolic energy fed to the system, not the system's internal energy. dE = TdN - ... might however make sense for a system to which metabolic energy is fed.
Note that the identification of N is still open: N could be identified either as N = ∑p Np - S, where one has the sum of the p-adic entanglement negentropies Np and the real entanglement entropy S, or as N = ∑p Np. For the first option one would have N = 0 for rational entanglement and N > 0 for extensions of rationals. Could rational entanglement be interpreted as that associated with dead matter?
4. Bio-catalysis and the ATP→ ADP process need not require metabolic energy. A transfer of negentropy from nutrients to ATP to the acceptor molecule would be in question. Metabolic energy would be needed to reload ADP with negentropy to give ATP, using ATP synthase as a mitochondrial power plant. Metabolites could be carriers of dark atoms of this kind, possibly carrying also NE. They could also carry NE associated with the dark cyclotron states as suggested earlier, and in this case the value of heff=hgr would be much larger than in the case of dark atoms.
Conditions on bio-catalysis
Bio-catalysis is a key mechanism of biology and its extreme efficacy remains to be understood. Enzymes are proteins, and ribozymes RNA sequences, that act as biocatalysts.
What does catalysis demand?
1. Catalyst and reactants must find each other. How this could happen is very difficult to understand in standard biochemistry, in which living matter is seen as a soup of biomolecules. I have already considered the mechanisms making it possible for the reactants to find each other. For instance, in the translation of mRNA to protein, tRNA molecules must find their way to mRNA at the ribosome. The proposal is that reconnection, allowing U-shaped magnetic flux tubes to reconnect to a pair of flux tubes connecting the mRNA and tRNA molecules, together with a reduction of the value of heff=n× h inducing a reduction of the length of the magnetic flux tube, takes care of this step. This applies also to DNA transcription, DNA replication, and bio-chemical reactions in general.
2. The catalyst must provide energy for the reactants (their number is typically two) to overcome the potential wall that makes the reaction rate very slow for energies around the thermal energy. The TGD based model for the hydrino atom - an atom with larger binding energy than the hydrogen atom, as claimed by Randell Mills - suggests a solution. Some hydrogen atom in the catalyst goes from a (dark) hydrogen atom state to a hydrino-like state (a state with smaller heff/h) and liberates the excess binding energy, kicking either reactant over the potential wall so that the reaction can proceed. After the reaction the catalyst returns to the normal state and absorbs the binding energy back.
3. In the reaction volume the catalyst and reactants must be guided to the correct places. The simplest model of catalysis relies on the lock-and-key mechanism. The generalized Chladni mechanism, forcing the reactants to a two-dimensional closed nodal surface, is a natural candidate to consider. There are also additional conditions. For instance, the reactants must have the correct orientation, and this could be forced by the interaction with the em field of the ME involved with the Chladni mechanism.
4. One must also have coherence of chemical reactions, meaning that the reaction can occur in a large volume - say in different cell interiors - simultaneously. Here MB would induce the coherence by using MEs. The Chladni mechanism might explain this if there is interference of forces caused by periodic standing waves, themselves represented as pairs of MEs.
Phase transition reducing the value of heff/h=n as a basic step in bio-catalysis
The hydrogen atom allows also large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)^2 if n=6 holds true for visible matter. The reduction of n as the flux tube contracts would liberate binding energy, which could be used to promote the catalysis.
The notion of the high energy phosphate bond is a somewhat mysterious concept. There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom, associated with the phosphate of ATP, to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom.
Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states, and be used to drive protons against the potential gradient through ATP synthase - analogous to the turbine of a power plant - transforming ADP to ATP and reproducing the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again after ADP→ATP?
Here it is essential that the energies of the hydrogen atom depend on hbareff = n× hbar as hbareff^m with m = -2 < 0. A hydrogen atom in dimension D has a Coulomb potential behaving as 1/r^(D-2) by the Gauss law, and the Schrödinger equation predicts for D≠ 4 that the energies satisfy En ∝ (heff/h)^m with m = 2 + 4/(D-4). For D=4 the formula breaks down, since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2. Hence D=3 would be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario. A sketch of this dimensional analysis follows below.
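The exponent formula can be tabulated in a few lines (my sketch; D=4 is excluded by hand since the hbar dependence is not a power law there):

```python
def hbar_exponent(D: int) -> float:
    """Exponent m in E ~ hbar_eff**m for a Coulomb-like 1/r**(D-2) potential."""
    if D == 4:
        raise ValueError("D = 4: the energy is not a power of hbar")
    return 2 + 4 / (D - 4)

# Only D = 3 gives m < 0, i.e. binding energies that decrease as
# hbar_eff = n*hbar grows -- the property the argument above relies on.
for D in (2, 3, 5, 6, 7):
    print(f"D={D}: E ~ hbar_eff^({hbar_exponent(D):+.2f})")
```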
It is also essential that the flux tubes are radial flux tubes in the Coulomb field of the charged particle. This makes sense in many-sheeted space-time: electrons would be associated with the pair formed by the flux tube and the 3-D atom, so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. Dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case a 1/n^2 dependence. The same applies to states localized at 2-D sheets with a charged ion in the center. These kinds of states bring to mind the Rydberg states of the ordinary atom with a large value of n.
The condition that the dark binding energy stays above the thermal energy gives the condition n ≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane. A rough numerical check follows below.
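A rough numerical check (my assumptions: hydrogenic binding E0/n^2 with E0 = 13.6 eV, and a thermal criterion E_B ≥ kT/2 at T = 300 K, which is not spelled out above but reproduces the quoted bound n ≤ 32):

```python
import math

E0 = 13.6        # eV, hydrogen ground-state binding energy
kT = 0.0259      # eV, thermal energy at T ~ 300 K
a0 = 0.0529e-9   # m, Bohr radius

# Require E0/n^2 >= kT/2 (assumed criterion) => n <= sqrt(2*E0/kT).
n_max = math.floor(math.sqrt(2 * E0 / kT))
size = n_max**2 * a0   # dark Bohr radius scales as n^2

print(f"n_max ~ {n_max}")                       # 32
print(f"dark atom size ~ {size * 1e9:.0f} nm")  # ~54 nm, same order as ~100 nm
```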
For details see the chapter Quantum criticality and dark matter.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD. |
911d1ef00f4816d8 |
In classical mechanics you construct an action (involving a Lagrangian in arbitrary generalized coordinates, a Hamiltonian in canonical coordinates [to make your EOM more "convenient & symmetric"]), then extremizing it gives the equations of motion. Alternatively one can find a first order PDE for the action as a function of its endpoints to obtain the Hamilton-Jacobi equation, & the Poisson bracket formulation is merely a means of changing variables in your PDE so as to ensure your new variables are still characteristics of the H-J PDE (i.e. solutions of the EOM - see No. 37). All that makes sense to me: we're extremizing a functional to get the EOM, or solving a PDE which implicitly assumes we've already got the solution (path of the particle) inside of the action that leads to the PDE. However in quantum mechanics, at least in canonical quantization I think, you apparently just take the Hamiltonian (the Lagrangian in canonical coordinates) & mish-mash this with ideas from changing variables in the Hamilton-Jacobi equation representation of your problem so that you ensure the coordinates are characteristics of your Hamilton-Jacobi equation (i.e. the solutions of the EOM), then you put these ideas in some new space for some reason (Hilbert space) & have a theory of QM. Based on what I've written, you are literally doing the exact same thing you do in classical mechanics at the beginning; you're sneaking in classical ideas, & for some reason you make things into an algebra - I don't see why this is necessary, or why you can't do exactly what you do in classical mechanics. Furthermore I think my questions have some merit when you note that Schrodinger's original derivation involved an action functional using the Hamilton-Jacobi equation. Again we see Schrodinger doing a similar thing to the modern ideas: he's mish-mashing the Hamilton-Jacobi equation with extremizing an action functional instead of just extremizing the original Lagrangian or Hamiltonian, analogous to modern QM mish-mashing the Hamiltonian with changes of variables in the H-J PDE (via Poisson brackets).
What's going on in this big jigsaw? Why do we need to start mixing up all our pieces; why can't we just copy classical mechanics exactly - we are on some level anyway, as far as I can see... I can understand doing these things if they are just convenient tricks, the way you could say that invoking the H-J PDE is just a trick for dealing with Lagrangians & Hamiltonians, but I'm pretty sure the claim is that the process of quantization simply must be done: one step is just absolutely necessary, you simply cannot follow the classical ideas, even though from what I've said we are basically just doing the classical thing - in a roundabout way. It probably has something to do with complex numbers, at least partially, as mentioned in the note on page 276 here, but I have no idea how to see that, & Schrodinger's original derivation didn't assume them, so I'm confused about this.
To make my questions about quantization explicit if they aren't apparent from what I've written above:
a) Why does one need to make an algebra out of mixing the Hamiltonian with Poisson brackets?
(Where this question stresses the interpretation of Hamiltonians as Lagrangians just with different coordinates, & Poisson brackets as conditions on changing variables in the Hamilton-Jacobi equation, so that we make the relationship to CM explicit)
b) Why can't quantum mechanics just be modelled by extremizing a Lagrangian, or solving a H-J PDE?
(From my explanation above it seems quantization smuggles these ideas into its formalism anyway, just mish-mashing them together in some vector space)
c) How do complex numbers relate to this process?
(Are they the reason quantum mechanics radically differs from classical mechanics? If so, how does this fall out of the procedure as inevitable?)
Apologies if these weren't clear from what I've written, but I feel what I've written is absolutely essential to my question.
Edit: Parts b) & c) have been nicely answered, thus part a) is all that remains, & its solution seems to lie in this article, which derives the time dependent Schrodinger equation (TDSE) from the TISE. In other words, the TISE is apparently derived from classical mechanical principles, as Schrodinger did it; then at some point in the complicated derivation from page 12 on, the authors reach a point at which quantum mechanical assumptions become absolutely necessary, & apparently this is the reason one assumes tons of axioms & feels comfortable constructing Hilbert spaces etc... Thus elucidating how this derivation incontrovertibly results in quantum mechanical assumptions should justify why quantization is necessary, but I cannot figure this out from my poorly-understood reading of the derivation. Understanding this is the key to QM apparently, unless I'm mistaken (highly probable); thus if anyone can provide an answer in light of this article's contents that would be fantastic, thank you!
Would you slightly clarify your thread... actually, what is the question? (a) the definition of quantization; (b) why do we need quantum mechanics; (c) what is the difference between classical and quantum mechanics; (d) a combination of (a), (b), (c), or other. Quantum mechanics differs from the classical least action principle, since the operators do not commute. The reason for using non-commuting operators is, simply, that it fits experiments. – user26143 Sep 12 '13 at 1:19
Dear bolbteppa: you've been asking extremely probing questions lately so I'd be a fool to think I can grasp exactly what you're driving at in my first reading (I need to come back and read again to let it sink in) but I suggest the following might be helpful: Johannes' answer to the Physics SE question Why quantum mechanics and also my answer to Heisenberg picture of QM as a result of Hamilton formalism where I describe how the .... – WetSavannaAnimal aka Rod Vance Sep 12 '13 at 1:25
... Hamilton's equation with Poisson bracket can be "continuously deformed" into the Heisenberg equation of motion with $\hbar$ as the deformation parameter. – WetSavannaAnimal aka Rod Vance Sep 12 '13 at 1:29
b) least action on a Lagrangian will give a trajectory, which is inconsistent with the uncertainty principle; c) suppose there are only real numbers; from [x,p]=1, taking the hermitian conjugate of the commutator gives [p,x]=1. Therefore it is inconsistent... You could define a complex number as a pair of real numbers, but that's essentially the same as using complex numbers... – user26143 Sep 12 '13 at 1:33
Perhaps you would be more comfortable with the path integral formulation of quantum mechanics, wherein the amplitude for a process is the "integral" over all possible paths of $\exp(i S/\hbar)$ where $S$ is the classical action. The classical action principle arises from a saddle-point evaluation of the integral in the $\hbar\to 0$ limit. The best book is still the original "Quantum Mechanics and Path Integrals" by Feynman and Hibbs. Make sure you get the emended edition by Daniel Styer which fixes a lot of typos. – Michael Brown Sep 12 '13 at 2:10
6 Answers 6
Concerning point c), on how complex numbers come into quantum theory:
This has a beautiful conceptual explanation, I think, by applying Lie theory to classical mechanics. The following is taken from what I have written on the nLab at quantization -- Motivation from classical mechanics and Lie theory. See there for more pointers and details:
So to briefly recall, a system of classical mechanics/prequantum mechanics is a phase space, formalized as a symplectic manifold (X,ω). A symplectic manifold is in particular a Poisson manifold, which means that the algebra of functions on phase space X, hence the algebra of classical observables, is canonically equipped with a compatible Lie bracket: the Poisson bracket. This Lie bracket is what controls dynamics in classical mechanics. For instance if H∈C ∞(X) is the function on phase space which is interpreted as assigning to each configuration of the system its energy – the Hamiltonian function – then the Poisson bracket with H yields the infinitesimal time evolution of the system: the differential equation famous as Hamilton's equations.
Something to take notice of here is the infinitesimal nature of the Poisson bracket. Generally, whenever one has a Lie algebra 𝔤, then it is to be regarded as the infinitesimal approximation to a globally defined object, the corresponding Lie group (or generally smooth group) G. One also says that G is a Lie integration of 𝔤 and that 𝔤 is the Lie differentiation of G.
Namely, one finds that the Poisson bracket Lie algebra 𝔭𝔬𝔦𝔰𝔰(X,ω) of the classical observables on phase space is (for X a connected manifold) a Lie algebra extension of the Lie algebra 𝔥𝔞𝔪(X) of Hamiltonian vector fields on X by the line Lie algebra: ℝ⟶𝔭𝔬𝔦𝔰𝔰(X,ω)⟶𝔥𝔞𝔪(X). This means that under Lie integration the Poisson bracket turns into a central extension of the group of Hamiltonian symplectomorphisms of (X,ω). And either it is the fairly trivial non-compact extension by ℝ, or it is the interesting central extension by the circle group U(1). For this non-trivial Lie integration to exist, (X,ω) needs to satisfy a quantization condition which says that it admits a prequantum line bundle. If so, then this U(1)-central extension of the group Ham(X,ω) of Hamiltonian symplectomorphisms exists and is called… the quantomorphism group QuantMorph(X,ω): U(1)⟶QuantMorph(X,ω)⟶Ham(X,ω). While important, for some reason this group is not very well known, which is striking because it contains a small subgroup which is famous in quantum mechanics: the Heisenberg group.
More precisely, whenever (X,ω) itself has a compatible group structure, notably if (X,ω) is just a symplectic vector space (regarded as a group under addition of vectors), then we may ask for the subgroup of the quantomorphism group which covers the (left) action of phase space (X,ω) on itself. This is the corresponding Heisenberg group Heis(X,ω), which in turn is a U(1)-central extension of the group X itself: U(1)⟶Heis(X,ω)⟶X. At this point it is worth pausing for a second to note how the hallmark of quantum mechanics has appeared as if out of nowhere simply by applying Lie integration to the Lie algebraic structures in classical mechanics:
if we think of Lie integrating ℝ to the interesting circle group U(1) instead of to the uninteresting translation group ℝ, then the name of its canonical basis element 1∈ℝ is canonically ”i”, the imaginary unit. Therefore one often writes the above central extension instead as follows: iℝ⟶𝔭𝔬𝔦𝔰𝔰(X,ω)⟶𝔥𝔞𝔪(X,ω) in order to amplify this. But now consider the simple special case where (X,ω)=(ℝ^2, dp∧dq) is the 2-dimensional symplectic vector space which is for instance the phase space of the particle propagating on the line. Then a canonical set of generators for the corresponding Poisson bracket Lie algebra consists of the linear functions p and q of classical mechanics textbook fame, together with the constant function. Under the above Lie theoretic identification, this constant function is the canonical basis element of iℝ, hence purely Lie theoretically it is to be called ”i”.
With this notation then the Poisson bracket, written in the form that makes its Lie integration manifest, indeed reads [q,p]=i. Since the choice of basis element of iℝ is arbitrary, we may rescale here the i by any non-vanishing real number without changing this statement. If we write ”ℏ” for this element, then the Poisson bracket instead reads [q,p]=iℏ. This is of course the hallmark equation for quantum physics, if we interpret ℏ here indeed as Planck's constant. We see it arises here merely by considering the non-trivial (the interesting, the non-simply connected) Lie integration of the Poisson bracket.
The quantomorphism group which is the non-trivial Lie integration of the Poisson bracket is naturally constructed as follows: given the symplectic form ω, it is natural to ask if it is the curvature 2-form of a U(1)-principal connection ∇ on a complex line bundle L over X (this is directly analogous to Dirac charge quantization when instead of a symplectic form on phase space we consider the field strength 2-form of electromagnetism on spacetime). If so, such a connection (L,∇) is called a prequantum line bundle of the phase space (X,ω). The quantomorphism group is simply the automorphism group of the prequantum line bundle, covering diffeomorphisms of the phase space (the Hamiltonian symplectomorphisms mentioned above).
This is absolutely stunning stuff, amazing motivation for studying universal covers & Lie groups in more depth. On a basic level this ties in exactly to my original question: here you have the 'quantomorphism group' which arises by integrating the Lie algebra structure of the Poisson brackets & 'seamlessly' leads to quantum mechanics. Similarly Schrodinger, at least in the time-independent situation, assumes the Lie algebra structure in his variational derivation of extremizing the H-J equation & ends up with quantum mechanics. The derivation of the TDSE from the TISE is the last question mark – bolbteppa Sep 12 '13 at 11:07
+1 Nice, but some stuff renders as boxes; maybe you should use boxes. – Dimensio1n0 Sep 12 '13 at 16:02
This answer seems interesting: quantum mechanics arising from geometry. However, this is extremely obscure for non-mathematicians like myself. Is there someway you could put it in terms a broader public could understand? – fffred Sep 12 '13 at 22:55
@fffred: The summary statement is: the Poisson bracket which controls classical mechanics is what is called a Lie bracket, and Lie brackets are infinitesimal approximations to smooth symmetry groups. The smooth symmetry group which corresponds to the Poisson bracket Lie algebra contains the Heisenberg group of exponentiated quantum operators. Moreover, this symmetry group is naturally acting on the space of quantum states. This story of what is called "geometric quantization" is the key to understanding the universe :-) – Urs Schreiber Sep 12 '13 at 23:18
Thank you for trying to simplify. Unfortunately, this is still too mathematical for me. What do the symmetry groups physically represent? Are you saying that the quantum and classical operators are contained in the same group? Then how do they differ? What makes them geometrically different? Where does quantization appear? Sorry, maybe I just need to learn more math ... – fffred Sep 13 '13 at 0:10
I make my comments into an answer:
In my opinion your confusion arises because you assume that Classical Mechanics is the underlying framework of physics, or at least the tool necessary to describe nature. People who are close to experimental results understand that it is the experimental results which require the tools necessary to describe the measurements, formulate a theory and predict new measurements; they do not have this problem. @MichaelBrown's answer is close to what I mean: it is Classical Mechanics that is derivative of Quantum Mechanics, and not the other way around. Classical Mechanics emerges from Quantum Mechanics.
An example: think of the human body before the discovery of the microscope. There was a "classical" view of what a body was. The experiments could only see and describe macroscopic effects. When the microscope was discovered the theory of cells constituting the human body and the complex functions operating on it of course became the underlying framework, and the old framework a limiting case of this.
That the theories of physics use mathematics as tools is what is confusing you, because mathematics is so elegant. But it is the physical structure we are exploring and not mathematical elegance.
There is a relevant ancient Greek myth, that of Procrustes:
If we try to impose the mathematics of classical mechanics on the microscopic data, we are using the logic of Procrustes: trying to fit the data to the bed instead of finding the bed that fits the data.
Unfortunately Procrustes does not address any of my three questions, though I will definitely make use of that beautiful metaphor at some point in my life, thank you. – bolbteppa Sep 12 '13 at 3:27
Well, imo it answers the impulse behind your questions. You assume the classical mechanics bed and try to fit quantum mechanical formulations to classical mechanics mathematics, instead of looking how the limit as hbar goes to 0 of quantum mechanical formulations become classical mechanics ones. This means that the set of values/variables on which quantum mechanics operates mathematically is much larger than the set of values/variables classical mechanics does. – anna v Sep 12 '13 at 3:32
It is true that physicists formulating quantum mechanical mathematical tools used the ones they knew from classical mechanics. The framework of finding classical mechanics emerging from quantum mechanics came later, but it exists and cannot but be true, because qm fits experimental results. – anna v Sep 12 '13 at 3:35
You may (?) have located the motivation behind my question, but unfortunately I still don't know why one can't model these problems by extremizing a Lagrangian, in fact I've been told I can't by someone else even though they are comfortable assuming a Hamiltonian exists, where a Hamiltonian is nothing more than a Lagrangian in new coordinates, & Poisson brackets of the H-J PDE corresponding to this apparently non-existent action are weapons in the quantization you tell me is more fundamental than the CM used to construct it in the first place. Any thoughts on any of this? Seems circular to me. – bolbteppa Sep 12 '13 at 3:44
No thoughts of why it cannot be done, though I suspect it will hinge on the value of hbar . "is more fundamental than the CM used to construct it in the first place" . The classical mechanics mathematics was/is a tool as far as QM goes. Like derivatives and integrals. It was not CM that was used but the tools, which were not good enough for the job and were modified. – anna v Sep 12 '13 at 3:51
Concerning point b):
Quantum mechanics can be formulated by extremizing an action and using Hamilton-Lagrange-Jacobi theory.
This is a simple but certainly underappreciated fact: the Schrödinger equation defines a Hamiltonian flow on complex projective space. A quick exposition of this fact was once posted here:
• Scott Morrison, Quantum mechanics and geometry, November 2009 (web post)
More details on this are in
• Abhay Ashtekar, Troy A. Schilling, Geometrical Formulation of Quantum Mechanics (arXiv:gr-qc/9706069)
• L. P. Hughston, Geometry of Stochastic State Vector Reduction, Proceedings of the Royal Society (web)
Thanks, I just found this also in Landau & Lifshitz section 20 where they re-frame QM from the point of view of extremizing an action involving complex-valued functions. In L&L the functional they use is $J[q] = \smallint \psi^*(\hat{H} - E)\psi dq$, where $\hat{H}$ & $E$ are interpreted as operators. I find projective spaces too difficult unfortunately, at the moment, but I will definitely come back to this point so thank you. – bolbteppa Sep 12 '13 at 10:47
At least in the time independent scenario it seems one can justify complex functions & there is no necessity for operators if Schrodinger's original derivation holds any weight, thus the functional may be constructed in terms of standard functions as well. The time-dependent case is another story altogether, the article I've linked to holds the answers, I can't find the point at which the derivation of the TDSE completely shakes off classical ideas (which the TISE implicitly assumes, this has to be true based off of Schrodinger's derivation) & incontravertibly incorporates QM, if it exists... – bolbteppa Sep 12 '13 at 10:50
@bolbteppa: am not sure what you mean here and where you are headed. A warning: while there just happen to be these formulations of Schrödinger evolution as Hamiltonian flows, there is no indication that this is more than a curiosity and that it points to something deep about quantum mechanics. Rather, I think it is important to face the mathematics of quantum mechanics as what it is. If you care about deep conceptual understanding of quantum mechanics, then look at its best mathematical formulation, which is geometric quantization, see here: – Urs Schreiber Sep 12 '13 at 18:30
I think the fact that quantum mechanics can indeed be formulated as a Lagrangian extremum (as almost any differential equation can, by reversing the process of the Euler-Lagrange differential equation and theorem) is already answered well.
Another facet of the "quantization" process is this:
How can we take a "static" relation / equation and transform it into a process?
Difficult? Think of it like this: how can we find the solution to the equation f(x) = x?
If direct solving is difficult, one can always use the equation as a "process" (assuming the function f is Lipschitz, with constant less than 1 so that the iteration contracts); see the sketch after the steps below.
1. start from an initial x1
2. compute x2 = f(x1)
3. set x1 = x2 and go back to step 2, repeating until |x1 - x2| < epsilon
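A minimal sketch of the iteration in Python, using x = cos x (a standard contraction) as the example:

```python
import math

def fixed_point(f, x, eps=1e-12, max_iter=10_000):
    """Solve f(x) = x by treating the equation as a process, as above."""
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

print(fixed_point(math.cos, 1.0))  # ~0.7390851332 (the Dottie number)
```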
This turns the equation into a process/algorithm. How is this related to quantum mechanics and quantization?
Well, quantum mechanics does just this (to a great extent): it takes a "classical" equation and makes the "static variables" into "operators" (processes).
So this part of the question can have this answer.
A more interesting question is why this works (in fact only for certain choices of coordinate systems).
How did they (the pioneers of quantum mechanics) think of it? Is it because it retains the same "classical" relations (probably)?
Can this be generalized or re-cast into something less confusing?
PS For a further analysis of quantum mechanics and its relations to other processes, see also this other post of mine
It seems Schrodinger's original derivation was of the time-independent Schrodinger equation, & in his paper he makes no mention of the time-dependent Schrodinger equation. Thus as far as I can see this process does not apply to the time-dependent version & the problem is apparently insoluble :( A nice discussion of this is given here.
Edit: I'm not sure this is correct anymore, that article I've linked to has made my question immensely more complicated. The article assumes Schrodinger's derivation as valid, & derives the time-dependent SE from it, so apparently all of the reasons why one must assume axioms etc... are justified by that derivation, or made redundant - I have no idea, this is now the central focus of my thread it seems.
share|cite|improve this answer
I've made an edit to this response, one that has radically changed the thrust of my thread & made things immensely more complicated, if anybody has an idea how to deal with it - fantastic! – bolbteppa Sep 12 '13 at 10:07
In classical mechanics the solutions of the equations of motion give the deterministic trajectory of the system. In quantum mechanics, if $\Psi$ is the solution of the EOM then $\int_a^b \Psi^*(x)\Psi(x)\,dx$ is the probability of finding the particle between $a$ and $b$. To have QM you need to supplement the EOM with this (and hermiticity of observables).
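A minimal numerical illustration (my choice of example, not the answer's: the ground state of an infinite square well of width $L$, for which the integral has a closed form):

```python
import math

L = 1.0  # well width; psi(x) = sqrt(2/L) * sin(pi * x / L) on [0, L]

def prob(a, b):
    """Probability of finding the particle in [a, b] for the ground state."""
    F = lambda x: x / L - math.sin(2 * math.pi * x / L) / (2 * math.pi)
    return F(b) - F(a)

print(prob(0.0, L))              # 1.0  -- normalization
print(prob(0.25 * L, 0.75 * L))  # ~0.82 -- probability concentrated mid-well
```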
4cc7890db82a1048 | The Uncertainty Principle
First published Mon Oct 8, 2001; substantive revision Mon Jul 3, 2006
Quantum mechanics is generally regarded as the physical theory that is our best candidate for a fundamental and universal description of the physical world. The conceptual framework employed by this theory differs drastically from that of classical physics. Indeed, the transition from classical to quantum physics marks a genuine revolution in our understanding of the physical world.
One striking aspect of the difference between classical and quantum physics is that whereas classical mechanics presupposes that exact simultaneous values can be assigned to all physical quantities, quantum mechanics denies this possibility, the prime example being the position and momentum of a particle. According to quantum mechanics, the more precisely the position (momentum) of a particle is given, the less precisely can one say what its momentum (position) is. This is (a simplistic and preliminary formulation of) the quantum mechanical uncertainty principle for position and momentum. The uncertainty principle played an important role in many discussions on the philosophical implications of quantum mechanics, in particular in discussions on the consistency of the so-called Copenhagen interpretation, the interpretation endorsed by the founding fathers Heisenberg and Bohr.
This should not suggest that the uncertainty principle is the only aspect of the conceptual difference between classical and quantum physics: the implications of quantum mechanics for notions such as (non-)locality, entanglement and identity play no less havoc with classical intuitions.
1. Introduction
The uncertainty principle is certainly one of the most famous and important aspects of quantum mechanics. It has often been regarded as the most distinctive feature in which quantum mechanics differs from classical theories of the physical world. Roughly speaking, the uncertainty principle (for position and momentum) states that one cannot assign exact simultaneous values to the position and momentum of a physical system. Rather, these quantities can only be determined with some characteristic ‘uncertainties’ that cannot become arbitrarily small simultaneously. But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? (In his original work, Heisenberg only speaks of uncertainty relations.) And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty? These are the main questions we will explore in the following, focusing on the views of Heisenberg and Bohr.
The notion of ‘uncertainty’ occurs in several different meanings in the physical literature. It may refer to a lack of knowledge of a quantity by an observer, or to the experimental inaccuracy with which a quantity is measured, or to some ambiguity in the definition of a quantity, or to a statistical spread in an ensemble of similarly prepared systems. Also, several different names are used for such uncertainties: inaccuracy, spread, imprecision, indefiniteness, indeterminateness, indeterminacy, latitude, etc. As we shall see, even Heisenberg and Bohr did not decide on a single terminology for quantum mechanical uncertainties. Forestalling a discussion about which name is the most appropriate one in quantum mechanics, we use the name ‘uncertainty principle’ simply because it is the most common one in the literature.
2. Heisenberg
2.1 Heisenberg's road to the uncertainty relations
Heisenberg introduced his now famous relations in an article of 1927, entitled "Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". A (partial) translation of this title is: "On the anschaulich content of quantum theoretical kinematics and mechanics". Here, the term anschaulich is particularly notable. Apparently, it is one of those German words that defy an unambiguous translation into other languages. Heisenberg's title is translated as "On the physical content …" by Wheeler and Zurek (1983). His collected works (Heisenberg, 1984) translate it as "On the perceptible content …", while Cassidy's biography of Heisenberg (Cassidy, 1992), refers to the paper as "On the perceptual content …". Literally, the closest translation of the term anschaulich is ‘visualizable’. But, as in most languages, words that make reference to vision are not always intended literally. Seeing is widely used as a metaphor for understanding, especially for immediate understanding. Hence, anschaulich also means ‘intelligible’ or ‘intuitive’.[1]
Why was this issue of the Anschaulichkeit of quantum mechanics such a prominent concern to Heisenberg? This question has already been considered by a number of commentators (Jammer, 1977; Miller 1982; de Regt, 1997; Beller, 1999). For the answer, it turns out, we must go back a little in time. In 1925 Heisenberg had developed the first coherent mathematical formalism for quantum theory (Heisenberg, 1925). His leading idea was that only those quantities that are in principle observable should play a role in the theory, and that all attempts to form a picture of what goes on inside the atom should be avoided. In atomic physics the observational data were obtained from spectroscopy and associated with atomic transitions. Thus, Heisenberg was led to consider the ‘transition quantities’ as the basic ingredients of the theory. Max Born, later that year, realized that the transition quantities obeyed the rules of matrix calculus, a branch of mathematics that was not so well-known then as it is now. In a famous series of papers Heisenberg, Born and Jordan developed this idea into the matrix mechanics version of quantum theory.
Formally, matrix mechanics remains close to classical mechanics. The central idea is that all physical quantities must be represented by infinite self-adjoint matrices (later identified with operators on a Hilbert space). It is postulated that the matrices q and p representing the canonical position and momentum variables of a particle satisfy the so-called canonical commutation rule
qp − pq = iℏ (1)
where ℏ = h/2π, h denotes Planck's constant, and boldface type is used to represent matrices. The new theory scored spectacular empirical success by encompassing nearly all spectroscopic data known at the time, especially after the concept of the electron spin was included in the theoretical framework.
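As an illustration of rule (1) - my addition, not part of the original article - one can build finite truncations of q and p from harmonic-oscillator ladder matrices and check the commutation rule numerically. The truncation necessarily fails in the last diagonal entry, a reminder that the matrices must genuinely be infinite:

```python
import numpy as np

hbar, N = 1.0, 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, N x N
q = np.sqrt(hbar / 2) * (a + a.T)            # position (units with m*omega = 1)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)       # momentum

comm = q @ p - p @ q
# All entries except the bottom-right one reproduce i*hbar times the identity:
print(np.allclose(comm[:-1, :-1], 1j * hbar * np.eye(N - 1)))  # True
print(comm[-1, -1])  # -(N-1)*i*hbar: the finite-size artifact
```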
It came as a big surprise, therefore, when one year later, Erwin Schrödinger presented an alternative theory, that became known as wave mechanics. Schrödinger assumed that an electron in an atom could be represented as an oscillating charge cloud, evolving continuously in space and time according to a wave equation. The discrete frequencies in the atomic spectra were not due to discontinuous transitions (quantum jumps) as in matrix mechanics, but to a resonance phenomenon. Schrödinger also showed that the two theories were equivalent.[2]
Even so, the two approaches differed greatly in interpretation and spirit. Whereas Heisenberg eschewed the use of visualizable pictures, and accepted discontinuous transitions as a primitive notion, Schrödinger claimed as an advantage of his theory that it was anschaulich. In Schrödinger's vocabulary, this meant that the theory represented the observational data by means of continuously evolving causal processes in space and time. He considered this condition of Anschaulichkeit to be an essential requirement on any acceptable physical theory. Schrödinger was not alone in appreciating this aspect of his theory. Many other leading physicists were attracted to wave mechanics for the same reason. For a while, in 1926, before it emerged that wave mechanics had serious problems of its own, Schrödinger's approach seemed to gather more support in the physics community than matrix mechanics.
Understandably, Heisenberg was unhappy about this development. In a letter of 8 June 1926 to Pauli he confessed that "The more I think about the physical part of Schrödinger's theory, the more disgusting I find it", and: "What Schrödinger writes about the Anschaulichkeit of his theory, … I consider Mist (Pauli, 1979, p. 328)". Again, this last German term is translated differently by various commentators: as "junk" (Miller, 1982) "rubbish" (Beller 1999) "crap" (Cassidy, 1992), and perhaps more literally, as "bullshit" (de Regt, 1997). Nevertheless, in published writings, Heisenberg voiced a more balanced opinion. In a paper in Die Naturwissenschaften (1926) he summarized the peculiar situation that the simultaneous development of two competing theories had brought about. Although he argued that Schrödinger's interpretation was untenable, he admitted that matrix mechanics did not provide the Anschaulichkeit which made wave mechanics so attractive. He concluded: "to obtain a contradiction-free anschaulich interpretation, we still lack some essential feature in our image of the structure of matter". The purpose of his 1927 paper was to provide exactly this lacking feature.
2.2 Heisenberg's argument
Let us now look at the argument that led Heisenberg to his uncertainty relations. He started by redefining the notion of Anschaulichkeit. Whereas Schrödinger associated this term with the provision of a causal space-time picture of the phenomena, Heisenberg, by contrast, declared:
We believe we have gained anschaulich understanding of a physical theory, if in all simple cases, we can grasp the experimental consequences qualitatively and see that the theory does not lead to any contradictions. (Heisenberg, 1927, p. 172)
His goal was, of course, to show that, in this new sense of the word, matrix mechanics could lay the same claim to Anschaulichkeit as wave mechanics.
To do this, he adopted an operational assumption: terms like ‘the position of a particle’ have meaning only if one specifies a suitable experiment by which ‘the position of a particle’ can be measured. We will call this assumption the ‘measurement=meaning principle’. In general, there is no lack of such experiments, even in the domain of atomic physics. However, experiments are never completely accurate. We should be prepared to accept, therefore, that in general the meaning of these quantities is also determined only up to some characteristic inaccuracy.
This is the first formulation of the uncertainty principle. In its present form it is an epistemological principle, since it limits what we can know about the electron. From "elementary formulae of the Compton effect" Heisenberg estimated the ‘imprecisions’ to be of the order
δp δq ∼ h (2)
He continued: “In this circumstance we see the direct anschaulich content of the relation qp − pq = iℏ.”
He went on to consider other experiments, designed to measure other physical quantities and obtained analogous relations for time and energy:
δt δE ∼ h (3)
and action J and angle w
δw δJ ∼ h (4)
which he saw as corresponding to the "well-known" relations
tE − Et = iℏ or wJ − Jw = iℏ (5)
However, these generalisations are not as straightforward as Heisenberg suggested. In particular, the status of the time variable in his several illustrations of relation (3) is not at all clear (Hilgevoord 2005). See also Section 2.5.
Heisenberg summarized his findings in a general conclusion: all concepts used in classical mechanics are also well-defined in the realm of atomic processes. But, as a pure fact of experience ("rein erfahrungsgemäß"), experiments that serve to provide such a definition for one quantity are subject to particular indeterminacies, obeying relations (2)-(4) which prohibit them from providing a simultaneous definition of two canonically conjugate quantities. Note that in this formulation the emphasis has slightly shifted: he now speaks of a limit on the definition of concepts, i.e. not merely on what we can know, but what we can meaningfully say about a particle. Of course, this stronger formulation follows by application of the above measurement=meaning principle: if there are, as Heisenberg claims, no experiments that allow a simultaneous precise measurement of two conjugate quantities, then these quantities are also not simultaneously well-defined.
Heisenberg's paper has an interesting "Addition in proof" mentioning critical remarks by Bohr, who saw the paper only after it had been sent to the publisher. Among other things, Bohr pointed out that in the microscope experiment it is not the change of the momentum of the electron that is important, but rather the circumstance that this change cannot be precisely determined in the same experiment. An improved version of the argument, responding to this objection, is given in Heisenberg's Chicago lectures of 1930.
Here (Heisenberg, 1930, p. 16), it is assumed that the electron is illuminated by light of wavelength λ and that the scattered light enters a microscope with aperture angle ε. According to the laws of classical optics, the accuracy of the microscope depends on both the wave length and the aperture angle; Abbe's criterium for its ‘resolving power’, i.e. the size of the smallest discernable details, gives
δq ∼ λ/sin ε (6)
On the other hand, the direction of a scattered photon, when it enters the microscope, is unknown within the angle ε, rendering the momentum change of the electron uncertain by an amount
δp ∼ h sin ε/λ (7)
leading again to the result (2).
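Explicitly, multiplying (6) and (7) makes the cancellation behind this step visible: δq δp ∼ (λ/sin ε) × (h sin ε/λ) = h, so the trade-off is independent of the chosen wavelength and aperture; improving the resolution by shortening λ or widening ε worsens the momentum disturbance by exactly the compensating factor.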
Let us now analyse Heisenberg's argument in more detail. First note that, even in this improved version, Heisenberg's argument is incomplete. According to Heisenberg's ‘measurement=meaning principle’, one must also specify, in the given context, what the meaning is of the phrase ‘momentum of the electron’, in order to make sense of the claim that this momentum is changed by the position measurement. A solution to this problem can again be found in the Chicago lectures (Heisenberg, 1930, p. 15). Here, he assumes that initially the momentum of the electron is precisely known, e.g. it has been measured in a previous experiment with an inaccuracy δpi, which may be arbitrarily small. Then, its position is measured with inaccuracy δq, and after this, its final momentum is measured with an inaccuracy δpf. All three measurements can be performed with arbitrary precision. Thus, the three quantities δpi, δq, and δpf can be made as small as one wishes. If we assume further that the initial momentum has not changed until the position measurement, we can speak of a definite momentum until the time of the position measurement. Moreover we can give operational meaning to the idea that the momentum is changed during the position measurement: the outcome of the second momentum measurement (say pf) will generally differ from the initial value pi. In fact, one can also show that this change is discontinuous, by varying the time between the three measurements.
Let us now try to see, adopting this more elaborate set-up, if we can complete Heisenberg's argument. We have now been able to give empirical meaning to the ‘change of momentum’ of the electron, pf − pi. Heisenberg's argument claims that the order of magnitude of this change is at least inversely proportional to the inaccuracy of the position measurement:
| pf − pi | δq ∼ h (8)
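Relation (8) can be illustrated with a small simulation: an electron with (nearly) sharp momentum is filtered through a Gaussian position window of width δq, a crude model of the position measurement, and its momentum distribution afterwards has a spread of order ℏ/δq. This is a sketch in units where ℏ = 1; the grid and the value of δq are arbitrary choices.

    import numpy as np

    hbar = 1.0
    dq = 0.1                       # assumed resolution of the position measurement

    x = np.linspace(-5, 5, 4096)
    dx = x[1] - x[0]
    # initial state: momentum ~ 0 plane wave, then filtered through a
    # Gaussian window of width dq (modeling the position measurement)
    psi = np.exp(-x**2 / (4 * dq**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # momentum-space probability distribution via FFT
    p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
    prob_p = np.abs(np.fft.fft(psi))**2
    prob_p /= prob_p.sum()

    p_mean = np.sum(p * prob_p)
    dp = np.sqrt(np.sum((p - p_mean)**2 * prob_p))
    print(dp * dq)                 # ~ 0.5, i.e. the momentum spread is of order hbar/dq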
However, can we now draw the conclusion that the momentum is only imprecisely defined? Certainly not. Before the position measurement, its value was pi; after the measurement it is pf. One might, perhaps, claim that the value at the very instant of the position measurement is not yet defined, but we could simply settle this by an assignment by convention, e.g., we might assign the mean value (pi + pf)/2 to the momentum at this instant. But then, the momentum is precisely determined at all instants, and Heisenberg's formulation of the uncertainty principle no longer follows. The above attempt at completing Heisenberg's argument thus overshoots its mark.
A solution to this problem can again be found in the Chicago Lectures. Heisenberg admits that position and momentum can be known exactly. He writes:
If the velocity of the electron is at first known, and the position then exactly measured, the position of the electron for times previous to the position measurement may be calculated. For these past times, δpδq is smaller than the usual bound. (Heisenberg 1930, p. 15)
Indeed, Heisenberg says: "the uncertainty relation does not hold for the past".
Apparently, when Heisenberg refers to the uncertainty or imprecision of a quantity, he means that the value of this quantity cannot be given beforehand. In the sequence of measurements we have considered above, the uncertainty in the momentum after the measurement of position has occurred refers to the idea that the value of the momentum is not fixed just before the final momentum measurement takes place. Once this measurement is performed, and reveals a value pf, the uncertainty relation no longer holds; these values then belong to the past. Clearly, then, Heisenberg is concerned with unpredictability: the point is not that the momentum of a particle changes, due to a position measurement, but rather that it changes by an unpredictable amount. It is, however, always possible to measure, and hence define, the size of this change in a subsequent measurement of the final momentum with arbitrary precision.
Although Heisenberg admits that we can consistently attribute values of momentum and position to an electron in the past, he sees little merit in such talk. He points out that these values can never be used as initial conditions in a prediction about the future behavior of the electron, or subjected to experimental verification. Whether or not we grant them physical reality is, as he puts it, a matter of personal taste. Heisenberg's own taste is, of course, to deny their physical reality. For example, he writes, "I believe that one can formulate the emergence of the classical ‘path’ of a particle pregnantly as follows: the ‘path’ comes into being only because we observe it" (Heisenberg, 1927, p. 185). Apparently, in his view, a measurement does not only serve to give meaning to a quantity, it creates a particular value for this quantity. This may be called the ‘measurement=creation’ principle. It is an ontological principle, for it states what is physically real.
This then leads to the following picture. First we measure the momentum of the electron very accurately. By ‘measurement=meaning’, this entails that the term "the momentum of the particle" is now well-defined. Moreover, by the ‘measurement=creation’ principle, we may say that this momentum is physically real. Next, the position is measured with inaccuracy δq. At this instant, the position of the particle becomes well-defined and, again, one can regard this as a physically real attribute of the particle. However, the momentum has now changed by an amount that is unpredictable, of the order of magnitude | pf − pi | ∼ h/δq. The meaning and validity of this claim can be verified by a subsequent momentum measurement.
The question is then what status we shall assign to the momentum of the electron just before its final measurement. Is it real? According to Heisenberg it is not. Before the final measurement, the best we can attribute to the electron is some unsharp, or fuzzy momentum. These terms are meant here in an ontological sense, characterizing a real attribute of the electron.
2.3 The interpretation of Heisenberg's relation
The relations Heisenberg had proposed were soon considered to be a cornerstone of the Copenhagen interpretation of quantum mechanics. Just a few months later, Kennard (1927) already called them the "essential core" of the new theory. Taken together with Heisenberg's contention that they provided the intuitive content of the theory and their prominent role in later discussions on the Copenhagen interpretation, a dominant view emerged in which the uncertainty relations were regarded as a fundamental principle of the theory.
The interpretation of these relations has often been debated. Do Heisenberg's relations express restrictions on the experiments we can perform on quantum systems, and, therefore, restrictions on the information we can gather about such systems; or do they express restrictions on the meaning of the concepts we use to describe quantum systems? Or else, are they restrictions of an ontological nature, i.e., do they assert that a quantum system simply does not possess a definite value for its position and momentum at the same time? The difference between these interpretations is partly reflected in the various names by which the relations are known, e.g. as ‘inaccuracy relations’, or: ‘uncertainty’, ‘indeterminacy’ or ‘unsharpness relations’. The debate between these different views has been addressed by many authors, but it has never been settled completely. Let it suffice here to make only two general observations.
First, it is clear that in Heisenberg's own view all the above questions stand or fall together. Indeed, we have seen that he adopted an operational "measurement=meaning" principle according to which the meaningfulness of a physical quantity was equivalent to the existence of an experiment purporting to measure that quantity. Similarly, his "measurement=creation" principle allowed him to attribute physical reality to such quantities. Hence, Heisenberg's discussions moved rather freely and quickly from talk about experimental inaccuracies to epistemological or ontological issues and back again.
However, ontological questions seemed to be of somewhat less interest to him. For example, there is a passage (Heisenberg, 1927, p. 197), where he discusses the idea that, behind our observational data, there might still exist a hidden reality in which quantum systems have definite values for position and momentum, unaffected by the uncertainty relations. He emphatically dismisses this conception as an unfruitful and meaningless speculation, because, as he says, the aim of physics is only to describe observable data. Similarly, in the Chicago Lectures (Heisenberg 1930, p. 11), he warns that human language permits the utterance of statements which have no empirical content at all but nevertheless produce a picture in our imagination. He notes, "One should be especially careful in using the words ‘reality’, ‘actually’, etc., since these words very often lead to statements of the type just mentioned." So, Heisenberg also endorsed an interpretation of his relations as rejecting a reality in which particles have simultaneous definite values for position and momentum.
The second observation is that although for Heisenberg experimental, informational, epistemological and ontological formulations of his relations were, so to say, just different sides of the same coin, this is not so for those who do not share his operational principles or his view on the task of physics. Alternative points of view, in which e.g. the ontological reading of the uncertainty relations is denied, are therefore still viable. The statement, often found in the literature of the thirties, that Heisenberg had proved the impossibility of associating a definite position and momentum to a particle is certainly wrong. But the precise meaning one can coherently attach to Heisenberg's relations depends rather heavily on the interpretation one favors for quantum mechanics as a whole. And because no agreement has been reached on this latter issue, one cannot expect agreement on the meaning of the uncertainty relations either.
2.4 Uncertainty relations or uncertainty principle?
Let us now move to another question about Heisenberg's relations: do they express a principle of quantum theory? Probably the first influential author to call these relations a ‘principle’ was Eddington, who, in his Gifford Lectures of 1928 referred to them as the ‘Principle of Indeterminacy’. In the English literature the name uncertainty principle became most common. It is used both by Condon and Robertson in 1929, and also in the English version of Heisenberg's Chicago Lectures (Heisenberg, 1930), although, remarkably, nowhere in the original German version of the same book (see also Cassidy, 1998). Indeed, Heisenberg never seems to have endorsed the name ‘principle’ for his relations. His favourite terminology was ‘inaccuracy relations’ (Ungenauigkeitsrelationen) or ‘indeterminacy relations’ (Unbestimmtheitsrelationen). We know only one passage, in Heisenberg's own Gifford lectures, delivered in 1955-56 (Heisenberg, 1958, p. 43), where he mentioned that his relations "are usually called relations of uncertainty or principle of indeterminacy". But this can well be read as his yielding to common practice rather than his own preference.
But does the relation (2) qualify as a principle of quantum mechanics? Several authors, foremost Karl Popper (1967), have contested this view. Popper argued that the uncertainty relations cannot be granted the status of a principle on the grounds that they are derivable from the theory, whereas one cannot obtain the theory from the uncertainty relations. (The argument being that one can never derive any equation, say, the Schrödinger equation, or the commutation relation (1), from an inequality.)
Popper's argument is, of course, correct, but we think it misses the point. There are many statements in physical theories which are called principles even though they are in fact derivable from other statements in the theory in question. A more appropriate point of departure for this issue is not the question of logical priority but rather Einstein's distinction between ‘constructive theories’ and ‘principle theories’.
Einstein proposed this famous classification in (Einstein, 1919). Constructive theories are theories which postulate the existence of simple entities behind the phenomena. They endeavour to reconstruct the phenomena by framing hypotheses about these entities. Principle theories, on the other hand, start from empirical principles, i.e. general statements of empirical regularities, employing no or only a bare minimum of theoretical terms. The purpose is to build up the theory from such principles. That is, one aims to show how these empirical principles provide sufficient conditions for the introduction of further theoretical concepts and structure.
The prime example of a theory of principle is thermodynamics. Here the role of the empirical principles is played by the statements of the impossibility of various kinds of perpetual motion machines. These are regarded as expressions of brute empirical fact, providing the appropriate conditions for the introduction of the concepts of energy and entropy and their properties. (There is a lot to be said about the tenability of this view, but that is not the topic of this entry.)
Now obviously, once the formal thermodynamic theory is built, one can also derive the impossibility of the various kinds of perpetual motion. (They would violate the laws of energy conservation and entropy increase.) But this derivation should not mislead one into thinking that they were not principles of the theory after all. The point is just that empirical principles are statements that do not rely on the theoretical concepts (in this case entropy and energy) for their meaning. They are interpretable independently of these concepts and, further, their validity on the empirical level still provides the physical content of the theory.
A similar example is provided by special relativity, another theory of principle, which Einstein deliberately designed after the ideal of thermodynamics. Here, the empirical principles are the light postulate and the relativity principle. Again, once we have built up the modern theoretical formalism of the theory (Minkowski space-time) it is straightforward to prove the validity of these principles. But again this does not count as an argument for claiming that they were not principles after all. So the question whether the term ‘principle’ is justified for Heisenberg's relations should, in our view, be understood as the question whether they are conceived of as empirical principles.
One can easily show that this idea was never far from Heisenberg's intentions. We have already seen that Heisenberg presented the relations as the result of a "pure fact of experience". A few months after his 1927 paper, he wrote a popular paper with the title "Ueber die Grundprincipien der Quantenmechanik" ("On the fundamental principles of quantum mechanics") where he made the point even more clearly. Here Heisenberg described his recent break-through in the interpretation of the theory as follows: "It seems to be a general law of nature that we cannot determine position and velocity simultaneously with arbitrary accuracy". Now actually, and in spite of its title, the paper does not identify or discuss any ‘fundamental principle’ of quantum mechanics. So, it must have seemed obvious to his readers that he intended to claim that the uncertainty relation was a fundamental principle, forced upon us as an empirical law of nature, rather than a result derived from the formalism of the theory.
This reading of Heisenberg's intentions is corroborated by the fact that, even in his 1927 paper, applications of his relation frequently present the conclusion as a matter of principle. For example, he says "In a stationary state of an atom its phase is in principle indeterminate" (Heisenberg, 1927, p. 177, [emphasis added]). Similarly, in a paper of 1928, he described the content of his relations as: "It has turned out that it is in principle impossible to know, to measure the position and velocity of a piece of matter with arbitrary accuracy" (Heisenberg, 1984, p. 26, [emphasis added]).
So, although Heisenberg did not originate the tradition of calling his relations a principle, it is not implausible to attribute the view to him that the uncertainty relations represent an empirical principle that could serve as a foundation of quantum mechanics. In fact, his 1927 paper expressed this desire explicitly: "Surely, one would like to be able to deduce the quantitative laws of quantum mechanics directly from their anschaulich foundations, that is, essentially, relation [(2)]" (ibid, p. 196). This is not to say that Heisenberg was successful in reaching this goal, or that he did not express other opinions on other occasions.
Let us conclude this section with three remarks. First, if the uncertainty relation is to serve as an empirical principle, one might well ask what its direct empirical support is. In Heisenberg's analysis, no such support is mentioned. His arguments concerned thought experiments in which the validity of the theory, at least at a rudimentary level, is implicitly taken for granted. Jammer (1974, p. 82) conducted a literature search for high-precision experiments that could seriously test the uncertainty relations and concluded they were still scarce in 1974. Real experimental support for the uncertainty relations, in experiments in which the inaccuracies are close to the quantum limit, has come about only more recently. (See Kaiser, Werner and George 1983, Uffink 1985, Nairz, Arndt and Zeilinger 2002.)
A second point is the question whether the theoretical structure or the quantitative laws of quantum theory can indeed be derived on the basis of the uncertainty principle, as Heisenberg wished. Serious attempts to build up quantum theory as a full-fledged Theory of Principle on the basis of the uncertainty principle have never been carried out. Indeed, the most Heisenberg could and did claim in this respect was that the uncertainty relations created "room" (Heisenberg 1927, p. 180) or "freedom" (Heisenberg, 1931, p. 43) for the introduction of some non-classical mode of description of experimental data, not that they uniquely lead to the formalism of quantum mechanics. A serious proposal to construe quantum mechanics as a theory of principle was provided only recently by Bub (2000). But, remarkably, this proposal does not use the uncertainty relation as one of its fundamental principles.
Third, it is remarkable that in his later years Heisenberg put a somewhat different gloss on his relations. In his autobiography Der Teil und das Ganze of 1969 he described how he had found his relations inspired by a remark by Einstein that "it is the theory which decides what one can observe" -- thus giving precedence to theory above experience, rather than the other way around. Some years later he even admitted that his famous discussions of thought experiments were actually trivial since "… if the process of observation itself is subject to the laws of quantum theory, it must be possible to represent its result in the mathematical scheme of this theory" (Heisenberg, 1975, p. 6).
2.5 Mathematical elaboration
When Heisenberg introduced his relation, his argument was based only on qualitative examples. He did not provide a general, exact derivation of his relations.[3] Indeed, he did not even give a definition of the uncertainties δq, etc., occurring in these relations. Of course, this was consistent with the announced goal of that paper, i.e. to provide some qualitative understanding of quantum mechanics for simple experiments.
The first mathematically exact formulation of the uncertainty relations is due to Kennard. He proved in 1927 the theorem that for all normalized state vectors |ψ> the following inequality holds:
Δψp Δψq ≥ ℏ/2 (9)
Here, Δψp and Δψq are standard deviations of position and momentum in the state vector |ψ>, i.e.,
(Δψp)² = <p²>ψ − (<p>ψ)², (Δψq)² = <q²>ψ − (<q>ψ)² (10)
where <·>ψ = <ψ|·|ψ> denotes the expectation value in state |ψ>. The inequality (9) was generalized in 1929 by Robertson who proved that for all observables (self-adjoint operators) A and B
ΔψA ΔψB ≥ ½|<[A,B]>ψ| (11)
where [A, B] := AB − BA denotes the commutator. This relation was in turn strengthened by Schrödinger (1930), who obtained:
(ΔψA)² (ΔψB)² ≥ ¼|<[A,B]>ψ|² + ¼|<{A−<A>ψ, B−<B>ψ}>ψ|² (12)
where {A, B} := AB + BA denotes the anti-commutator.
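Since relations (11) and (12) are theorems of the formalism, they can be verified mechanically for any state and any pair of observables. The following sketch uses two Pauli matrices and a randomly chosen state vector; this finite-dimensional setting is an illustrative assumption (Kennard's relation (9) itself concerns the continuous position and momentum operators).

    import numpy as np

    A = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x
    B = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z

    rng = np.random.default_rng(0)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)

    def mean(op):
        return psi.conj() @ op @ psi

    def variance(op):
        return (mean(op @ op) - mean(op)**2).real

    comm = A @ B - B @ A
    Ashift = A - mean(A).real * np.eye(2)
    Bshift = B - mean(B).real * np.eye(2)
    anti = Ashift @ Bshift + Bshift @ Ashift

    lhs = variance(A) * variance(B)
    robertson = 0.25 * abs(mean(comm))**2                 # relation (11), squared
    schroedinger = robertson + 0.25 * mean(anti).real**2  # relation (12)

    print(lhs >= schroedinger >= robertson)               # True for every psi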
Since the above inequalities have the virtue of being exact and general, in contrast to Heisenberg's original semi-quantitative formulation, it is tempting to regard them as the exact counterpart of Heisenberg's relations (2)-(4). Indeed, such was Heisenberg's own view. In his Chicago Lectures (Heisenberg 1930, pp. 15-19), he presented Kennard's derivation of relation (9) and claimed that "this proof does not differ at all in mathematical content" from the semi-quantitative argument he had presented earlier, the only difference being that now "the proof is carried through exactly".
But it may be useful to point out that both in status and intended role there is a difference between Kennard's inequality and Heisenberg's previous formulation (2). The inequalities discussed in the present section are not statements of empirical fact, but theorems of the quantum mechanical formalism. As such, they presuppose the validity of this formalism, and in particular the commutation relation (1), rather than elucidating its intuitive content or creating ‘room’ or ‘freedom’ for its validity. At best, one should see the above inequalities as showing that the formalism is consistent with Heisenberg's empirical principle.
This situation is similar to that arising in other theories of principle where, as noted in Section 2.4, one often finds that, next to an empirical principle, the formalism also provides a corresponding theorem. And similarly, this situation should not, by itself, cast doubt on the question whether Heisenberg's relation can be regarded as a principle of quantum mechanics.
There is a second notable difference between (2) and (9). Heisenberg did not give a general definition for the ‘uncertainties’ δp and δq. The most definite remark he made about them was that they could be taken as "something like the mean error". In the discussions of thought experiments, he and Bohr would always quantify uncertainties on a case-by-case basis by choosing some parameters which happened to be relevant to the experiment at hand. By contrast, the inequalities (9)-(12) employ a single specific expression as a measure for ‘uncertainty’: the standard deviation. At the time, this choice was not unnatural, given that this expression is well-known and widely used in error theory and the description of statistical fluctuations. However, there was very little or no discussion of whether this choice was appropriate for a general formulation of the uncertainty relations. A standard deviation reflects the spread or expected fluctuations in a series of measurements of an observable in a given state. It is not at all easy to connect this idea with the concept of the ‘inaccuracy’ of a measurement, such as the resolving power of a microscope. In fact, even though Heisenberg had taken Kennard's inequality as the precise formulation of the uncertainty relation, he and Bohr never relied on standard deviations in their many discussions of thought experiments, and indeed, it has been shown (Uffink and Hilgevoord, 1985; Hilgevoord and Uffink, 1988) that these discussions cannot be framed in terms of standard deviations.
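The mismatch between standard deviations and the kind of ‘inaccuracy’ at stake in the thought experiments is easy to exhibit. In the sketch below (all numbers chosen arbitrarily), 99.9% of the probability sits in a peak of unit width, yet a remote satellite carrying only 0.1% of the weight inflates the standard deviation thirtyfold:

    import numpy as np

    rng = np.random.default_rng(1)
    # 99.9% of the outcomes in a peak of unit width around 0,
    # 0.1% in a satellite at x = 1000
    x = np.concatenate([rng.normal(0.0, 1.0, 999_000), np.full(1_000, 1000.0)])

    print(np.std(x))                     # ~ 32: dominated entirely by the satellite
    print(np.percentile(np.abs(x), 99))  # ~ 2.6: 99% of outcomes lie within ~3 of the peak

This is the kind of example behind the cited criticism: as a measure of the ‘inaccuracy’ of a single measurement, the standard deviation can be wildly misleading.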
Another problem with the above elaboration is that the ‘well-known’ relations (5) are actually false if energy E and action J are to be positive operators (Jordan 1927). In that case, self-adjoint operators t and w do not exist and inequalities analogous to (9) cannot be derived. Also, these inequalities do not hold for angle and angular momentum (Uffink 1990). These obstacles have led to a quite extensive literature on time-energy and angle-action uncertainty relations (Muga et al. 2002, Hilgevoord 2005).
3. Bohr
In spite of the fact that Heisenberg's and Bohr's views on quantum mechanics are often lumped together as (part of) ‘the Copenhagen interpretation’, there is considerable difference between their views on the uncertainty relations.
3.1 From wave-particle duality to complementarity
Long before the development of modern quantum mechanics, Bohr had been particularly concerned with the problem of particle-wave duality, i.e. the problem that experimental evidence on the behaviour of both light and matter seemed to demand a wave picture in some cases, and a particle picture in others. Yet these pictures are mutually exclusive. Whereas a particle is always localized, the very definition of the notions of wavelength and frequency requires an extension in space and in time. Moreover, the classical particle picture is incompatible with the characteristic phenomenon of interference.
His long struggle with wave-particle duality had prepared him for a radical step when the dispute between matrix and wave mechanics broke out in 1926-27. For the main contestants, Heisenberg and Schrödinger, the issue at stake was which view could claim to provide a single coherent and universal framework for the description of the observational data. The choice was, essentially, between a description in terms of continuously evolving waves, or else one of particles undergoing discontinuous quantum jumps. By contrast, Bohr insisted that elements from both views were equally valid and equally needed for an exhaustive description of the data. His way out of the contradiction was to renounce the idea that the pictures refer, in a literal one-to-one correspondence, to physical reality. Instead, the applicability of these pictures was to become dependent on the experimental context. This is the gist of the viewpoint he called ‘complementarity’.
Bohr first conceived the general outline of his complementarity argument in early 1927, during a skiing holiday in Norway, at the same time when Heisenberg wrote his uncertainty paper. When he returned to Copenhagen and found Heisenberg's manuscript, they got into an intense discussion. On the one hand, Bohr was quite enthusiastic about Heisenberg's ideas which seemed to fit wonderfully with his own thinking. Indeed, in his subsequent work, Bohr always presented the uncertainty relations as the symbolic expression of his complementarity viewpoint. On the other hand, he criticized Heisenberg severely for his suggestion that these relations were due to discontinuous changes occurring during a measurement process. Rather, Bohr argued, their proper derivation should start from the indispensability of both particle and wave concepts. He pointed out that the uncertainties in the experiment did not exclusively arise from the discontinuities but also from the fact that in the experiment we need to take into account both the particle theory and the wave theory. It is not so much the unknown disturbance which renders the momentum of the electron uncertain but rather the fact that the position and the momentum of the electron cannot be simultaneously defined in this experiment. (See the "Addition in Proof" to Heisenberg's paper.)
We shall not go too deeply into the matter of Bohr's interpretation of quantum mechanics since we are mostly interested in Bohr's view on the uncertainty principle. For a more detailed discussion of Bohr's philosophy of quantum physics we refer to Scheibe (1973), Folse (1985), Honner (1987) and Murdoch (1987). It may be useful, however, to sketch some of the main points. Central in Bohr's considerations is the language we use in physics. No matter how abstract and subtle the concepts of modern physics may be, they are essentially an extension of our ordinary language and a means to communicate the results of our experiments. These results, obtained under well-defined experimental circumstances, are what Bohr calls the "phenomena". A phenomenon is "the comprehension of the effects observed under given experimental conditions" (Bohr 1939, p. 24); it is the resultant of a physical object, a measuring apparatus and the interaction between them in a concrete experimental situation. The essential difference between classical and quantum physics is that in quantum physics the interaction between the object and the apparatus cannot be made arbitrarily small; the interaction must at least comprise one quantum. This is expressed by Bohr's quantum postulate:
[… the] essence [of the formulation of the quantum theory] may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity or rather individuality, completely foreign to classical theories and symbolized by Planck's quantum of action. (Bohr, 1928, p. 580)
A phenomenon, therefore, is an indivisible whole and the result of a measurement cannot be considered as an autonomous manifestation of the object itself independently of the measurement context. The quantum postulate forces upon us a new way of describing physical phenomena:
In this situation, we are faced with the necessity of a radical revision of the foundation for the description and explanation of physical phenomena. Here, it must above all be recognized that, however far quantum effects transcend the scope of classical physical analysis, the account of the experimental arrangement and the record of the observations must always be expressed in common language supplemented with the terminology of classical physics. (Bohr, 1948, p. 313)
This is what Scheibe (1973) has called the "buffer postulate" because it prevents the quantum from penetrating into the classical description: A phenomenon must always be described in classical terms; Planck's constant does not occur in this description.
Together, the two postulates induce the following reasoning. In every phenomenon the interaction between the object and the apparatus comprises at least one quantum. But the description of the phenomenon must use classical notions in which the quantum of action does not occur. Hence, the interaction cannot be analysed in this description. On the other hand, the classical character of the description allows us to speak in terms of the object itself. Instead of saying: ‘the interaction between a particle and a photographic plate has resulted in a black spot in a certain place on the plate’, we are allowed to forgo mentioning the apparatus and say: ‘the particle has been found in this place’. The experimental context, rather than changing or disturbing pre-existing properties of the object, defines what can meaningfully be said about the object.
Because the interaction between object and apparatus is left out in our description of the phenomenon, we do not get the whole picture. Yet, any attempt to extend our description by performing the measurement of a different observable quantity of the object, or indeed, on the measurement apparatus, produces a new phenomenon and we are again confronted with the same situation. Because of the unanalyzable interaction in both measurements, the two descriptions cannot, generally, be united into a single picture. They are what Bohr calls complementary descriptions:
[the quantum of action]...forces us to adopt a new mode of description designated as complementary in the sense that any given application of classical concepts precludes the simultaneous use of other classical concepts which in a different connection are equally necessary for the elucidation of the phenomena. (Bohr, 1929, p. 10)
The most important example of complementary descriptions is provided by the measurements of the position and momentum of an object. If one wants to measure the position of the object relative to a given spatial frame of reference, the measuring instrument must be rigidly fixed to the bodies which define the frame of reference. But this implies the impossibility of investigating the exchange of momentum between the object and the instrument and we are cut off from obtaining any information about the momentum of the object. If, on the other hand, one wants to measure the momentum of an object, the measuring instrument must be able to move relative to the spatial reference frame. Bohr here assumes that a momentum measurement involves the registration of the recoil of some movable part of the instrument and the use of the law of momentum conservation. The looseness of the part of the instrument with which the object interacts entails that the instrument cannot serve to accurately determine the position of the object. Since a measuring instrument cannot be rigidly fixed to the spatial reference frame and, at the same time, be movable relative to it, the experiments which serve to precisely determine the position and the momentum of an object are mutually exclusive. Of course, in itself, this is not at all typical for quantum mechanics. But, because the interaction between object and instrument during the measurement can neither be neglected nor determined, the two measurements cannot be combined. This means that in the description of the object one must choose between the assignment of a precise position or of a precise momentum.
Similar considerations hold with respect to the measurement of time and energy. Just as the spatial coordinate system must be fixed by means of solid bodies, so must the time coordinate be fixed by means of unperturbable, synchronised clocks. But it is precisely this requirement which prevents one from taking into account the exchange of energy with the instrument if this is to serve its purpose. Conversely, any conclusion about the object based on the conservation of energy prevents following its development in time.
The conclusion is that in quantum mechanics we are confronted with a complementarity between two descriptions which are united in the classical mode of description: the space-time description (or coordination) of a process and the description based on the applicability of the dynamical conservation laws. The quantum forces us to give up the classical mode of description (also called the ‘causal’ mode of description by Bohr[4]): it is impossible to form a classical picture of what is going on when radiation interacts with matter as, e.g., in the Compton effect.
Any arrangement suited to study the exchange of energy and momentum between the electron and the photon must involve a latitude in the space-time description sufficient for the definition of wave-number and frequency which enter in the relation [E = hν and p = hσ]. Conversely, any attempt of locating the collision between the photon and the electron more accurately would, on account of the unavoidable interaction with the fixed scales and clocks defining the space-time reference frame, exclude all closer account as regards the balance of momentum and energy. (Bohr, 1949, p. 210)
A causal description of the process cannot be attained; we have to content ourselves with complementary descriptions. "The viewpoint of complementarity may be regarded", according to Bohr, "as a rational generalization of the very ideal of causality".
In addition to complementary descriptions Bohr also talks about complementary phenomena and complementary quantities. Position and momentum, as well as time and energy, are complementary quantities.[5]
We have seen that Bohr's approach to quantum theory puts heavy emphasis on the language used to communicate experimental observations, which, in his opinion, must always remain classical. By comparison, he seemed to put little value on arguments starting from the mathematical formalism of quantum theory. This informal approach is typical of all of Bohr's discussions on the meaning of quantum mechanics. One might say that for Bohr the conceptual clarification of the situation has primary importance while the formalism is only a symbolic representation of this situation.
This is remarkable since, after all, it is the formalism which needs to be interpreted. This neglect of the formalism is one of the reasons why it is so difficult to get a clear understanding of Bohr's interpretation of quantum mechanics and why it has aroused so much controversy. We close this section by citing from an article of 1948 to show how Bohr conceived the role of the formalism of quantum mechanics:
The entire formalism is to be considered as a tool for deriving predictions, of definite or statistical character, as regards information obtainable under experimental conditions described in classical terms and specified by means of parameters entering into the algebraic or differential equations of which the matrices or the wave-functions, respectively, are solutions. These symbols themselves, as is indicated already by the use of imaginary numbers, are not susceptible to pictorial interpretation; and even derived real functions like densities and currents are only to be regarded as expressing the probabilities for the occurrence of individual events observable under well-defined experimental conditions. (Bohr, 1948, p. 314)
3.2 Bohr's view on the uncertainty relations
In his Como lecture, published in 1928, Bohr gave his own version of a derivation of the uncertainty relations between position and momentum and between time and energy. He started from the relations
E = hν and p = h/λ (13)
which connect the notions of energy E and momentum p from the particle picture with those of frequency ν and wavelength λ from the wave picture. He noticed that a wave packet of limited extension in space and time can only be built up by the superposition of a number of elementary waves with a large range of wave numbers and frequencies. Denoting the spatial and temporal extensions of the wave packet by Δx and Δt, and the extensions in the wave number σ := 1/λ and frequency by Δσ and Δν, it follows from Fourier analysis that in the most favorable case Δx Δσ ≈ Δt Δν ≈ 1, and, using (13), one obtains the relations
Δt ΔE ≈ Δx Δp ≈ h (14)
Note that Δx, Δσ, etc., are not standard deviations but unspecified measures of the size of a wave packet. (The original text has equality signs instead of approximate equality signs, but, since Bohr does not define the spreads exactly, the use of approximate equality signs seems more in line with his intentions. Moreover, Bohr himself used approximate equality signs in later presentations.) These equations determine, according to Bohr: "the highest possible accuracy in the definition of the energy and momentum of the individuals associated with the wave field" (Bohr 1928, p. 571). He noted, "This circumstance may be regarded as a simple symbolic expression of the complementary nature of the space-time description and the claims of causality" (ibid).[6]
We note a few points about Bohr's view on the uncertainty relations. First of all, Bohr does not refer to discontinuous changes in the relevant quantities during the measurement process. Rather, he emphasizes the possibility of defining these quantities. This view is markedly different from Heisenberg's. A draft version of the Como lecture is even more explicit on the difference between Bohr and Heisenberg:
These reciprocal uncertainty relations were given in a recent paper of Heisenberg as the expression of the statistical element which, due to the feature of discontinuity implied in the quantum postulate, characterizes any interpretation of observations by means of classical concepts. It must be remembered, however, that the uncertainty in question is not simply a consequence of a discontinuous change of energy and momentum say during an interaction between radiation and material particles employed in measuring the space-time coordinates of the individuals. According to the above considerations the question is rather that of the impossibility of defining rigourously such a change when the space-time coordination of the individuals is also considered. (Bohr, 1985 p. 93)
Indeed, Bohr not only rejected Heisenberg's argument that these relations are due to discontinuous disturbances implied by the act of measuring, but also his view that the measurement process creates a definite result:
The unaccustomed features of the situation with which we are confronted in quantum theory necessitate the greatest caution as regard all questions of terminology. Speaking, as it is often done of disturbing a phenomenon by observation, or even of creating physical attributes to objects by measuring processes is liable to be confusing, since all such sentences imply a departure from conventions of basic language which even though it can be practical for the sake of brevity, can never be unambiguous. (Bohr, 1939, p. 24)
Nor did he approve of an epistemological formulation or one in terms of experimental inaccuracies:
[…] a sentence like "we cannot know both the momentum and the position of an atomic object" raises at once questions as to the physical reality of two such attributes of the object, which can be answered only by referring to the mutual exclusive conditions for an unambiguous use of space-time concepts, on the one hand, and dynamical conservation laws on the other hand. (Bohr, 1948, p. 315; also Bohr 1949, p. 211)
It would in particular not be out of place in this connection to warn against a misunderstanding likely to arise when one tries to express the content of Heisenberg's well-known indeterminacy relation by such a statement as ‘the position and momentum of a particle cannot simultaneously be measured with arbitrary accuracy’. According to such a formulation it would appear as though we had to do with some arbitrary renunciation of the measurement of either the one or the other of two well-defined attributes of the object, which would not preclude the possibility of a future theory taking both attributes into account on the lines of the classical physics. (Bohr 1937, p. 292)
Instead, Bohr always stressed that the uncertainty relations are first and foremost an expression of complementarity. This may seem odd since complementarity is a dichotomic relation between two types of description whereas the uncertainty relations allow for intermediate situations between two extremes. They "express" the dichotomy in the sense that if we take the energy and momentum to be perfectly well-defined, symbolically ΔE = Δp = 0, the position and time variables are completely undefined, Δx = Δt = ∞, and vice versa. But they also allow intermediate situations in which the mentioned uncertainties are all non-zero and finite. This more positive aspect of the uncertainty relation is mentioned in the Como lecture:
At the same time, however, the general character of this relation makes it possible to a certain extent to reconcile the conservation laws with the space-time coordination of observations, the idea of a coincidence of well-defined events in space-time points being replaced by that of unsharply defined individuals within space-time regions. (Bohr 1928, p. 571)
However, Bohr never followed up on this suggestion that we might be able to strike a compromise between the two mutually exclusive modes of description in terms of unsharply defined quantities. Indeed, an attempt to do so, would take the formalism of quantum theory more seriously than the concepts of classical language, and this step Bohr refused to take. Instead, in his later writings he would be content with stating that the uncertainty relations simply defy an unambiguous interpretation in classical terms:
These so-called indeterminacy relations explicitly bear out the limitation of causal analysis, but it is important to recognize that no unambiguous interpretation of such a relation can be given in words suited to describe a situation in which physical attributes are objectified in a classical way. (Bohr, 1948, p.315)
It must here be remembered that even in the indeterminacy relation [Δq Δp ≈ h] we are dealing with an implication of the formalism which defies unambiguous expression in words suited to describe classical pictures. Thus a sentence like "we cannot know both the momentum and the position of an atomic object" raises at once questions as to the physical reality of two such attributes of the object, which can be answered only by referring to the conditions for an unambiguous use of space-time concepts, on the one hand, and dynamical conservation laws on the other hand. (Bohr, 1949, p. 211)
Finally, on a more formal level, we note that Bohr's derivation does not rely on the commutation relations (1) and (5), but on Fourier analysis. These two approaches are equivalent as far as the relationship between position and momentum is concerned, but this is not so for time and energy since most physical systems do not have a time operator. Indeed, in his discussion with Einstein (Bohr, 1949), Bohr considered time as a simple classical variable. This even holds for his famous discussion of the ‘clock-in-the-box’ thought-experiment where the time, as defined by the clock in the box, is treated from the point of view of classical general relativity. Thus, in an approach based on commutation relations, the position-momentum and time-energy uncertainty relations are not on equal footing, which is contrary to Bohr's approach in terms of Fourier analysis (Hilgevoord 1996 and 1998).
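Bohr's Fourier route can be checked directly on a computer. The sketch below takes a Gaussian packet (the ‘most favorable case’) and measures widths by full width at half maximum, one arbitrary choice of width measure among many; the product Δx·Δσ indeed comes out of order unity.

    import numpy as np

    def fwhm(grid, values):
        # full width at half maximum of a sampled single-peaked profile
        above = grid[values >= values.max() / 2]
        return above.max() - above.min()

    x = np.linspace(-50, 50, 2**14)
    packet = np.exp(-x**2 / 2)           # Gaussian wave packet of unit width

    # conjugate variable: the wave number sigma = 1/lambda
    sigma = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(packet)))

    print(fwhm(x, packet) * fwhm(sigma, spectrum))   # ~ 0.88, of order 1

For a Gaussian the FWHM product is 4 ln 2/π ≈ 0.88; less regular packets give larger values, in line with the ‘most favorable case’ qualification.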
4. The Minimal Interpretation
In the previous two sections we have seen how both Heisenberg and Bohr attributed a far-reaching status to the uncertainty relations. They both argued that these relations place fundamental limits on the applicability of the usual classical concepts. Moreover, they both believed that these limitations were inevitable and forced upon us. However, we have also seen that they reached such conclusions by starting from radical and controversial assumptions. This entails, of course, that their radical conclusions remain unconvincing for those who reject these assumptions. Indeed, the operationalist-positivist viewpoint adopted by these authors has long since lost its appeal among philosophers of physics.
So the question may be asked what alternative views of the uncertainty relations are still viable. Of course, this problem is intimately connected with that of the interpretation of the wave function, and hence of quantum mechanics as a whole. Since there is no consensus about the latter, one cannot expect consensus about the interpretation of the uncertainty relations either. Here we only describe a point of view, which we call the ‘minimal interpretation’, that seems to be shared by both the adherents of the Copenhagen interpretation and of other views.
In quantum mechanics a system is supposed to be described by its quantum state, also called its state vector. Given the state vector, one can derive probability distributions for all the physical quantities pertaining to the system such as its position, momentum, angular momentum, energy, etc. The operational meaning of these probability distributions is that they correspond to the distribution of the values obtained for these quantities in a long series of repetitions of the measurement. More precisely, one imagines a great number of copies of the system under consideration, all prepared in the same way. On each copy the momentum, say, is measured. Generally, the outcomes of these measurements differ and a distribution of outcomes is obtained. The theoretical momentum distribution derived from the quantum state is supposed to coincide with the hypothetical distribution of outcomes obtained in an infinite series of repetitions of the momentum measurement. The same holds, mutatis mutandis, for all the other physical quantities pertaining to the system. Note that no simultaneous measurements of two or more quantities are required in defining the operational meaning of the probability distributions.
Uncertainty relations can be considered as statements about the spreads of the probability distributions of the several physical quantities arising from the same state. For example, the uncertainty relation between the position and momentum of a system may be understood as the statement that the position and momentum distributions cannot both be arbitrarily narrow -- in some sense of the word "narrow" -- in any quantum state. Inequality (9) is an example of such a relation in which the standard deviation is employed as a measure of spread. From this characterization of uncertainty relations follows that a more detailed interpretation of the quantum state than the one given in the previous paragraph is not required to study uncertainty relations as such. In particular, a further ontological or linguistic interpretation of the notion of uncertainty, as limits on the applicability of our concepts given by Heisenberg or Bohr, need not be supposed.
Indeed, this minimal interpretation leaves open whether it makes sense to attribute precise values of position and momentum to an individual system. Some interpretations of quantum mechanics, e.g. those of Heisenberg and Bohr, deny this; while others, e.g. the interpretation of de Broglie and Bohm insist that each individual system has a definite position and momentum (see the entry on Bohmian mechanics). The only requirement is that, as an empirical fact, it is not possible to prepare pure ensembles in which all systems have the same values for these quantities, or ensembles in which the spreads are smaller than allowed by quantum theory. Although interpretations of quantum mechanics, in which each system has a definite value for its position and momentum are still viable, this is not to say that they are without strange features of their own; they do not imply a return to classical physics.
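The operational reading just described can be imitated in a simulation: position outcomes are sampled on one ensemble of identically prepared systems and momentum outcomes on a second ensemble, and the product of the two sample spreads is compared with inequality (9). A minimal sketch in units where ℏ = 1 (the state, grid and ensemble sizes are arbitrary choices):

    import numpy as np

    hbar = 1.0
    rng = np.random.default_rng(2)

    # a Gaussian state whose position distribution has width s
    s = 0.3
    x = np.linspace(-10, 10, 8192)
    prob_x = np.exp(-x**2 / (2 * s**2))
    prob_x /= prob_x.sum()

    # the corresponding momentum distribution, obtained by Fourier transform
    p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=x[1] - x[0])
    prob_p = np.abs(np.fft.fft(np.sqrt(prob_x)))**2
    prob_p /= prob_p.sum()

    # two *separate* ensembles: position measured on one, momentum on the other
    xs = rng.choice(x, size=100_000, p=prob_x)
    ps = rng.choice(p, size=100_000, p=prob_p)

    print(xs.std() * ps.std())   # ~ 0.5 = hbar/2

The product hovers just above ℏ/2; preparing a state for which both sample spreads came out significantly smaller would contradict (9).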
We end with a few remarks on this minimal interpretation. First, it may be noted that the minimal interpretation of the uncertainty relations is little more than filling in the empirical meaning of inequality (9), or an inequality in terms of other measures of width, as obtained from the standard formalism of quantum mechanics. As such, this view shares many of the limitations we have noted above about this inequality. Indeed, it is not straightforward to relate the spread in a statistical distribution of measurement results with the inaccuracy of this measurement, such as, e.g. the resolving power of a microscope. Moreover, the minimal interpretation does not address the question whether one can make simultaneous accurate measurements of position and momentum. As a matter of fact, one can show that the standard formalism of quantum mechanics does not allow such simultaneous measurements. But this is not a consequence of relation (9).
If one feels that statements about inaccuracy of measurement, or the possibility of simultaneous measurements, belong to any satisfactory formulation of the uncertainty principle, the minimal interpretation may thus be too minimal.
• Beller, M. (1999) Quantum Dialogue (Chicago: University of Chicago Press).
• Bohr, N. (1928) ‘The Quantum postulate and the recent development of atomic theory’ Nature (Supplement) 121 580-590. Also in (Bohr, 1934), (Wheeler and Zurek, 1983), and in (Bohr, 1985).
• Bohr, N. (1929) ‘Introductory survey’ in (Bohr, 1934), pp. 1-24.
• Bohr, N. (1934) Atomic Theory and the Description of Nature (Cambridge: Cambridge University Press). Reissued in 1961. Appeared also as Volume I of The Philosophical Writings of Niels Bohr (Woodbridge Connecticut: Ox Bow Press, 1987).
• Bohr, N. (1937) ‘Causality and complementarity’ Philosophy of Science 4 289-298.
• Bohr, N. (1939) ‘The causality problem in atomic physics’ in New Theories in Physics (Paris: International Institute of Intellectual Co-operation). Also in (Bohr, 1996), pp. 303-322.
• Bohr, N. (1948) ‘On the notions of causality and complementarity’ Dialectica 2 312-319. Also in (Bohr, 1996) pp. 330-337.
• Bohr, N. (1949) ‘Discussion with Einstein on epistemological problems in atomic physics’ In Albert Einstein: philosopher-scientist. The library of living philosophers Vol. VII, P.A. Schilpp (ed.), (La Salle: Open Court) pp. 201-241.
• Bohr, N. (1985) Collected Works Volume 6, J. Kalckar (ed.) (Amsterdam: North-Holland).
• Bohr, N.(1996) Collected Works Volume 7, J. Kalckar (ed.) (Amsterdam: North-Holland).
• Bub, J. (2000) ‘Quantum mechanics as a principle theory’ Studies in History and Philosophy of Modern Physics 31B 75-94.
• Cassidy, D.C. (1992) Uncertainty, the Life and Science of Werner Heisenberg (New York: Freeman).
• Cassidy, D.C. (1998) ‘Answer to the question: When did the indeterminacy principle become the uncertainty principle?’ American Journal of Physics 66 278-279.
• Condon, E.U. (1929) ‘Remarks on uncertainty principles’ Science 69 573-574.
• Eddington, A. (1928) The Nature of the Physical World, (Cambridge: Cambridge University Press).
• Einstein, A. (1919) ‘My Theory’, The London Times, November 28, p. 13. Reprinted as ‘What is the theory of relativity?’ in Ideas and Opinions (New York: Crown Publishers, 1954) pp. 227-232.
• Folse, H.J. (1985) The Philosophy of Niels Bohr (Amsterdam: Elsevier).
• Heisenberg, W. (1925) ‘Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen’ Zeitschrift für Physik 33 879-893.
• Heisenberg, W. (1926) ‘Quantenmechanik’ Die Naturwissenschaften 14 989-994.
• Heisenberg, W. (1927) ‘Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik’ Zeitschrift für Physik 43 172-198. English translation in (Wheeler and Zurek, 1983), pp. 62-84.
• Heisenberg, W. (1927) ‘Ueber die Grundprincipien der "Quantenmechanik" ‘ Forschungen und Fortschritte 3 83.
• Heisenberg, W. (1928) ‘Erkenntnistheoretische Probleme der modernen Physik’ in (Heisenberg, 1984), pp. 22-28.
• Heisenberg W. (1930) Die Physikalischen Prinzipien der Quantenmechanik (Leipzig: Hirzel). English translation The Physical Principles of Quantum Theory (Chicago: University of Chicago Press, 1930).
• Heisenberg, W. (1931) ‘Die Rolle der Unbestimmtheitsrelationen in der modernen Physik’ Monatshefte für Mathematik und Physik 38 365-372.
• Heisenberg, W. (1958) Physics and Philosophy (New York: Harper).
• Heisenberg, W. (1969) Der Teil und das Ganze (München : Piper).
• Heisenberg, W. (1975) ‘Bemerkungen über die Entstehung der Unbestimmtheitsrelation’ Physikalische Blätter 31 193-196. English translation in (Price and Chissick, 1977).
• Heisenberg W. (1984) Gesammelte Werke Volume C1, W. Blum, H.-P. Dürr and H. Rechenberg (eds) (München: Piper).
• Hilgevoord, J. and Uffink, J. (1988) ‘The mathematical expression of the uncertainty principle’ in Microphysical Reality and Quantum Description, A. van der Merwe et al. (eds.), (Dordrecht: Kluwer) pp. 91-114.
• Hilgevoord, J. and Uffink, J. (1990) ‘A new view on the uncertainty principle’ in Sixty-Two Years of Uncertainty: Historical and Physical Inquiries into the Foundations of Quantum Mechanics, A.I. Miller (ed.), (New York: Plenum) pp. 121-139.
• Hilgevoord, J. (1996) ‘The uncertainty principle for energy and time I’ American Journal of Physics 64, 1451-1456.
• Hilgevoord, J. (1998) ‘The uncertainty principle for energy and time II’ American Journal of Physics 66, 396-402.
• Hilgevoord, J. (2002) ‘Time in quantum mechanics’ American Journal of Physics 70 301-306.
• Hilgevoord, J. (2005) ‘Time in quantum mechanics: a story of confusion’ Studies in History and Philosophy of Modern Physics 36 29-60.
• Jammer, M. (1974) The Philosophy of Quantum Mechanics (New York: Wiley).
• Jordan, P. (1927) ‘Über eine neue Begründung der Quantenmechanik II’ Zeitschrift für Physik 44 1-25.
• Kaiser, H., Werner, S.A., and George, E.A. (1983) ‘Direct measurement of the longitudinal coherence length of a thermal neutron beam’ Physical Review Letters 50 560.
• Kennard E.H. (1927) ‘Zur Quantenmechanik einfacher Bewegungstypen’ Zeitschrift für Physik, 44 326-352.
• Miller, A.I. (1982) ‘Redefining Anschaulichkeit’ in: A. Shimony and H.Feshbach (eds) Physics as Natural Philosophy (Cambridge Mass.: MIT Press).
• Muga, J.G., Sala Mayato, R. and Egusquiza, I.L. (eds) (2002) Time in Quantum Mechanics (Berlin: Springer).
• Muller, F.A. (1997) ‘The equivalence myth of quantum mechanics’ Studies in History and Philosophy of Modern Physics 28 35-61, 219-247, ibid. 30 (1999) 543-545.
• Murdoch, D. (1987) Niels Bohr's Philosophy of Physics (Cambridge: Cambridge University Press).
• Nairz, O., Arndt, M. and Zeilinger, A. (2002) ‘Experimental verification of the Heisenberg uncertainty principle for fullerene molecules’ Physical Review A 65 032109.
• Pauli, W. (1979) Wissenschaftlicher Briefwechsel mit Bohr, Einstein, Heisenberg u.a. Volume 1 (1919-1929) A. Hermann, K. von Meyenn and V.F. Weisskopf (eds) (Berlin: Springer).
• Popper, K. (1967) ‘Quantum mechanics without "the observer"’ in M. Bunge (ed.) Quantum Theory and Reality (Berlin: Springer).
• Price, W.C. and Chissick, S.S (eds) (1977) The Uncertainty Principle and the Foundations of Quantum Mechanics, (New York: Wiley).
• Regt, H. de (1997) ‘Erwin Schrödinger, Anschaulichkeit, and quantum theory’ Studies in History and Philosophy of Modern Physics 28 461-481.
• Robertson, H.P. (1929) ‘The uncertainty principle’ Physical Review 34 573-574. Reprinted in Wheeler and Zurek (1983) pp. 127-128.
• Scheibe, E. (1973) The Logical Analysis of Quantum Mechanics (Oxford: Pergamon Press).
• Schrödinger, E. (1930) ‘Zum Heisenbergschen Unschärfeprinzip’ Berliner Berichte 296-303.
• Uffink, J. (1985) ‘Verification of the uncertainty principle in neutron interferometry’ Physics Letters A 108 59-62.
• Uffink, J. (1990) Measures of Uncertainty and the Uncertainty Principle PhD thesis, University of Utrecht.
• Uffink, J. (1994) ‘The joint measurement problem’ International Journal of Theoretical Physics 33 (1994) 199-212.
• Uffink, J. and Hilgevoord, J. (1985) ‘Uncertainty principle and uncertainty relations’ Foundations of Physics 15 925-944.
• Wheeler, J.A. and Zurek, W.H. (eds) (1983) Quantum Theory and Measurement (Princeton NJ: Princeton University Press).
Other Internet Resources
Copyright © 2006 by
Jan Hilgevoord
Jos Uffink <>
Please Read How You Can Help Keep the Encyclopedia Free
|
3beff5db83e443e8 |
Suppose we are given a quantum state that isn't a pure state, such that it is a linear combination of the eigenstates of a Hermitian operator $\hat O$. $$|\psi\rangle=N\sum \alpha_i |i\rangle$$ where $N$ is just a normalization constant. I just want to make something clear about measuring. If we measure the corresponding eigenvalue of $\hat O$ and find it to be in the $|1\rangle$ state then immediately measure again, would I be right in thinking that the probability of getting the $|2\rangle$ state is $0$?
I think that because I believe the wavefunction collapses to the $|1\rangle$ state upon the first measurement...
Yes, it "collapses", so when you get 1, you can't immediately get a different result 2. However, the collapse isn't material in any sense. You should interpret it in terms of subjective knowledge, the validity of propositions, and conditional propositions. The statement that we can't get 2 immediately after the measurement when we got 1 is the statement that the conditional probability of the result's being 2 given the assumption that it was 1 a split second earlier is zero. – Luboš Motl May 22 '13 at 10:34
Thanks, @LubošMotl ! – valerie v. May 22 '13 at 10:44
2 Answers
If the eigenstates of that operator provide a basis for the Hilbert space of your problem, so that you actually can write the linear combination, the wave-function collapse is the following statement
$$ |\Psi\rangle = N \sum_{i}\alpha_{i}|i\rangle \Rightarrow |\Psi\rangle_{>}=M \alpha_{j}|j\rangle$$
where $\Rightarrow$ denotes measurement of the observable linked to $\hat{O}$ and the subscript $>$ denotes the wave function after that measurement. You can see that in the state
$$|\Psi\rangle_{>}=M \alpha_{j}|j\rangle $$
the probability of measuring any eigenvalue other than the one belonging to $|j\rangle$ is zero; that is, the probability of measuring that eigenvalue is one. However, due to the nature of the Schrödinger equation, the wave function will evolve into another linear combination after a certain amount of time.
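To make this concrete, here is a minimal numerical sketch in Python (the three-level state, the amplitudes, and every name in it are invented for illustration, not taken from the question) of a projective measurement followed by an immediate re-measurement:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-level state: unnormalized amplitudes alpha_i, then
    # normalize (this plays the role of N above).
    alpha = np.array([1.0, 2.0, 3.0], dtype=complex)
    psi = alpha / np.linalg.norm(alpha)

    def measure(psi):
        """Projective measurement in the eigenbasis {|i>}: return the outcome
        index i and the collapsed post-measurement state."""
        probs = np.abs(psi) ** 2             # Born rule: P(i) = |alpha_i|^2
        i = rng.choice(len(psi), p=probs)
        collapsed = np.zeros_like(psi)
        collapsed[i] = psi[i] / abs(psi[i])  # |i> up to the M*alpha_i phase
        return i, collapsed

    first, psi_after = measure(psi)
    second, _ = measure(psi_after)
    print(first == second)  # always True: an immediate re-measurement repeats the outcome

Running the last three lines any number of times never produces a different second outcome, which is exactly the zero conditional probability described in the comments above.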
Thank you, Nivalth! – valerie v. May 22 '13 at 10:42
For simplicity, we could write your state as:
$$|\psi\rangle = \alpha_1\,|1\rangle + \alpha_2\,|2\rangle$$ with $|\alpha_1|^2 + |\alpha_2|^2 = 1$ for correct normalisation.
Here $|1\rangle$ and $|2\rangle$ are eigenvectors of your Hermitian operator $\hat O$.
Be careful: this state $|\psi\rangle$ is a pure state. A pure state is any linear combination of states. The general case, however, is a mixed state, for which you have to consider the density matrix: if the density matrix is factorisable, that is, if there exists a $\psi$ such that $$ \rho = |\psi\rangle \langle\psi| $$ then the density matrix corresponds to a pure state. But that is only a very particular case.
Going back to your question, you are right: after your measurement, the state will be:
$$|\psi\rangle = |1\rangle$$
So now the probability of getting the $|2\rangle$ state is:
$$ |\langle 1|2\rangle|^2 $$
But this probability is zero, because two eigenvectors corresponding to two different eigenvalues of a Hermitian operator are orthogonal ($\langle 1|2\rangle = 0$).
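As a quick numerical check of that orthogonality claim, here is a short Python sketch; the $2\times 2$ Hermitian matrix is made up purely for illustration:

    import numpy as np

    # An arbitrary Hermitian operator O (illustrative numbers only).
    O = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, 3.0]])
    assert np.allclose(O, O.conj().T)      # check Hermiticity

    eigvals, eigvecs = np.linalg.eigh(O)   # columns are orthonormal eigenvectors
    v1, v2 = eigvecs[:, 0], eigvecs[:, 1]

    overlap = np.vdot(v1, v2)              # <1|2> (vdot conjugates the first argument)
    print(abs(overlap) ** 2)               # ~0: probability of |2> right after collapsing to |1>

The printed probability is zero up to floating-point error, as the orthogonality argument predicts.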
Thanks, Trimok! – valerie v. May 22 '13 at 10:43
33a32da55d3ac942 |
I am an eighth grader (please remember this!!!) in need of some guidance in my school project on Quantum Mechanics, Theory, and Logic. I am attempting to create a graph of the Schrödinger equation given the needed variables. To do this, I need to know what all of the variables mean and stand for.
For starters, I get to the point of:
$$\Psi \left( x,t \right)=\frac{-\hbar}{2m}\left( i\frac{p}{\hbar} \right)\left( Ae^{ikx-i\omega t} \right)$$
Where $\hbar$ is the reduced Planck constant. And my guess is that k is kinetic energy of the particle, m is the mass, p is the potential energy, and the Greek w-like variable is the frequency.
What are the other variables?
Also, am I right so far?
$k$ is the wavenumber: $2\pi/\lambda$. By 'lesser planck constant', do you mean 'reduced Planck constant'? In that case the symbol is $\hbar$ (\hbar). Also, that's not the Schrödinger equation, just a particular solution given some function $u(x)$ for the potential, which seems to be constant here. – Manishearth Mar 13 '12 at 14:40
Yes, I meant reduced instead of lesser. And I have no experience in LaTeX, I just created this equation in the Grapher Application that came with my Mac. I am sort of confused with the u(x)... – fr00ty_l00ps Mar 13 '12 at 14:44
I fixed it for you. Anyways, LaTeX (rather MathJax) is down at the moment. $U(x)$ is the potential energy function. Also written as $V(x)$. Could you provide a link to where you got that equation from? It's not the Schrödinger equation, rather a specific solution of it. Kind of like how you get a specific solution for $y$ in $x+y=11$ when you substitute a value for $x$. The specific solution is not the whole equation... – Manishearth Mar 13 '12 at 15:22
Just out of interest, how much quantum mechanics do you know? It's better to stay away from the Schrödinger equation till you know enough calculus as well as general physics. If you want to graph some solutions of it, I would suggest showing electron orbital graphs or something. Also, how are you connecting QM to Theory and Logic? – Manishearth Mar 13 '12 at 15:25
CodeAdmiral: That is a real challenge to have a school project on QM, Theory and Logic. Maybe you can explain a bit what you want to achieve; simply plotting the given equation will look basically like a wave: $f(x)=a\sin(x)$. As Manishearth already pointed out, that is not the Schrödinger equation. – Alexander Mar 13 '12 at 20:17
1 Answer
This is just a placeholder answer so that this (answered) question does not go into our unanswered backlog and get bumped up every now and then by this obnoxious fellow known as Community ♦. Please accept this answer.
The equation you've given is not the Schrödinger equation; rather, it is most probably a specific solution of it. The variables are listed below, followed by a short plotting sketch.
• $k=2\pi/\lambda$ is the (angular) wavenumber, where $\lambda$ is the wavelength
• $\omega$ is (angular) frequency
• $p$ is probably momentum. In the Schrödinger equation, potential energy is usually represented with $U(x)$ or $V(x)$
• $m$ is the mass of the particle
• $A$ is the amplitude of the wave. This itself may be a function of $x$
• $i=\sqrt{-1}$
• $t$ is time
• $\Psi$ is the wavefunction
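Since your goal is to graph the wave, here is a minimal Python sketch using numpy and matplotlib that plots the real and imaginary parts of $Ae^{ikx-i\omega t}$ at $t=0$. The numerical values of $A$, $k$ and $\omega$ are made-up placeholders; substitute your own:

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder values -- replace with your own A, k (wavenumber),
    # omega (angular frequency) and t (time).
    A, k, omega, t = 1.0, 2 * np.pi / 5.0, 1.0, 0.0

    x = np.linspace(0, 20, 500)
    psi = A * np.exp(1j * (k * x - omega * t))   # the plane-wave solution above

    plt.plot(x, psi.real, label=r'Re $\Psi$')
    plt.plot(x, psi.imag, '--', label=r'Im $\Psi$')
    plt.xlabel('x')
    plt.ylabel(r'$\Psi(x, t=0)$')
    plt.legend()
    plt.show()

As Alexander notes in the comments, the result looks like a plain sine wave; increasing $t$ just slides the pattern along $x$.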
http://chat.stackexchange.com/transcript/2778 has a full transcript of the discussion which led to the resolution of the dilemma.
35db689fd05e70dd |
The Economic Laws of Scientific Research.
Everyone assumes that basic scientific research must be funded by government. Why? Because basic research is presumed to be a "public good" whose benefits are shared by all, and no competitive business can be expected to pay for something whose benefits must be shared. Terence Kealey, a clinical biochemist at Cambridge University in England, argues in his provocative new book that what everyone knows is wrong. He contends that non-military science would not only survive without government funding, it would do better than it does now.
Although everyone "knows" that science must be publicly funded, everyone also knows that nearly all the great leaps of classical science - Newtonian physics, relativity, atomic theory Atomic theory
The study of the structure and properties of atoms based on quantum mechanics and the Schrödinger equation. These tools make it possible, in principle, to predict most properties of atomic systems.
Branch of physics that deals with the relationship between electricity and magnetism. Their merger into one concept is tied to three historical events. Hans C.
, genetics, and many others - occurred with no government aid whatever. Few are troubled by this contradiction, because all assume that science and the world have somehow changed in ways that make government science funding essential.
For one thing, we now know how to "do" science in an organized way that is more efficient than the old haphazard method and lends itself to governmental programs. For another, science is so much more expensive now that private funding is insufficient. Third, science is now more necessary to economic competitiveness, so its support is a matter of national urgency. And finally, the "public good" argument: Private sources might fund some basic science, but it won't be enough, so government must take the lead.
Kealey systematically demolishes all these arguments. For example, if science really is necessary for competitiveness, then private industry should be the first to fund it - and it does. Research is essential for success in biomedicine, and Kealey gives several examples showing that private funding is still dominant despite the billions spent by governments on biomedicine here and in other developed countries. And we're not talking about applied research, but basic, published science: Current Contents recently reviewed the institutions that produce the largest numbers of cited papers in biology, and of the top seven, two were private companies, one was a charity, three were private institutions (though these now receive government grants), and only one was a wholly government-founded, government-funded laboratory, the Institut de Chimie Biologique in Strasbourg.
The view that scientific research can be reduced to a routine best administered by a central bureaucracy originated with 16th-century philosopher and political schemer Francis Bacon. Kealey contrasts the "Baconian view" with the free market ideas of Adam Smith. Kealey first defends Smith against Bacon with wonderful historical illustrations. For example, he contrasts the Roman Empire with the "dark ages" that followed its dissolution, and points out that when commerce was under state control under the Romans, technical innovation stagnated. But during the dark ages, when government was weak, all the great inventions that set the stage for the industrial revolution occurred: the saddle, the stirrup, the horseshoe, the horse collar, the tandem harness (the chariot race in Ben Hur would in reality have been a much tamer affair), the water mill, the crank, and several others. In the 19th century, he points out, British science received almost no government support, "yet that did not prevent Britain from growing into the richest and most industrialized country in the world, nor from producing scientists such as Davy, Kelvin, Maxwell, Lyell and Darwin. Curiously, 19th-century France and Germany, whose governments did fund science expansively, trailed behind." Kealey wonders, "Can government funding of science be so important?"
But perhaps science really has changed. Surely something as important and difficult as artificial computation requires government support if it is to succeed? Well, no. The abject failure of the Japanese fifth-generation project (sidelined by the rise of the personal computer) and its equally unsuccessful European twin, ESPRIT, are recent examples. But there is also an older one that has created its own myth: Charles Babbage and his difference engine. In 1833 Babbage (the story goes) had this great idea for a mechanical computing device. He tried to get government funding for it and in the end got £17,000 (a huge amount) but still failed to finish his engine.
Not enough funding, perhaps? Genius thwarted by short-sighted bureaucracy? Apparently not, because two Swedish engineers in fact succeeded by 1853 in building the engine, for much less money. But their business venture failed because the engine was of little more use than conventional mathematical tables. Babbage meanwhile continued to complain because the government would not fund his much more ambitious analytical engine, a forerunner of the digital computer. The device was in fact impractical (mechanical computers are too slow), so (Kealey contends) the British government was quite right to refuse to fund it. The good effect of all this, he writes, was that it "warned successive British administrations off science."
Along the way Kealey demolishes a few other myths. One is that basic science is essential for technological advance. It is sometimes essential - electromagnetism is a case - but often the effects go the other way: Science learns from technology, as in the cases of thermodynamics and steam engines. Another myth is that science is essential for economic growth; technology is, science isn't. And most striking of all, that government supports more science than the private sector would if left to itself. By comparing different countries, Kealey shows that the larger the fraction of civil science supported by government, the smaller the fraction of GDP devoted to science. Astonishingly, a dollar of public investment in science seems to displace more than a dollar of private investment.
But what about the "public good" argument? Surely an activity like basic science whose benefits must be shared can never be attractive to a private investor. Kealey argues that "[t]he biggest myth in science funding is that published science is freely available." He points out that no one assumes legal knowledge is free just because it is published. Learning about published science has a cost. A company that wishes to keep up with current science must support active scientists. They must be "in the game." Even if your main interest is in copying others, you must support an operation that does some original work if you want to copy effectively. Kealey has other arguments, based on "first mover" vs. "second mover" costs, that help explain why it is that competitive companies will wind up supporting some basic research. Add to this the free availability of charitable funds in a small-government, low-tax economy, and basic science comes out very well without government funding.
Has Kealey put his finger on what's needed for effective science? Is private vs. public funding the key? There are certainly competing arguments. Yet many working scientists will agree that science is showing signs of stress. Perhaps it is no accident that writer John Horgan has just made a splash with his book, The End of Science, which argues that all major scientific advances have already been made. Horgan's conclusion is highly unlikely, most scientists think. But he may be responding to a real phenomenon: diminishing innovative returns in science brought about by an increasingly homogeneous funding bureaucracy - what physicist Rustum Roy in a recent Science editorial called "the world's most inefficient system for funding of research."
In the United States, there is really only one "buyer" for science: a handful of government agencies, dominated by the National Institutes of Health. Anyone in the grant application business knows how the current system tends to punish major innovation; small increments are favored. No one who proposes to do something just to satisfy his own curiosity has a prayer of getting funded these days. Review groups have a pejorative phrase for such efforts: They are called "trust-me" proposals, and they always fail to win funds. Yet this kind of undirected curiosity has been the source of almost all major discoveries.
The problem of missing out on long shots is usually attributed to lack of money. Certainly off-beat proposals have less chance of success when the "pay point" is 10 percent than when it is 50 percent. Yet science funding in most areas is larger now in absolute terms than it has ever been, so the real problem may be elsewhere. Perhaps the solution is not to abolish government funding but to reduce its monopolistic features. Science is a Darwinian process, and if it is to work there must be variation as well as selection. Variation is stifled by monopoly but favored by competition, so perhaps we need more competition not among scientists but among science-funding agencies. Kealey's work alerts us to the possibility that the flaws in the system may not be curable by minor modifications.
Terence Kealey has written a compelling and highly readable book that deserves to be widely debated. To me, as a working scientist with a long history of dealing with the granting and regulating bureaucracy (good people trapped in a bad system), his arguments ring true. Yet almost everything Kealey says opposes the conventional wisdom.
Who's right? It's hard to know - these are complex issues, after all. Unfortunately, defenders of the status quo seem reluctant to engage in debate. After describing his own repeated attempts to defend a position similar to Kealey's, physicist Roy recently commented: "I have yet to find one similarly reasoned book or paper replying to these arguments." It's time to hear a detailed refutation from those who believe that American science lives or dies by the present system.
John Staddon is James B. Duke Professor of Psychology at Duke University. He has written on the policy implications of science, most recently in The Atlantic Monthly and The Oxford American.
723f7fc0c1a3267e | Monster waves blamed for shipping disasters
When the cruise ship Louis Majesty left Barcelona in eastern Spain for Genoa in northern Italy, it was for the leisurely final leg of a hopscotching tour around the Mediterranean. But the Mediterranean had other ideas.
Storm clouds were gathering as the boat ventured eastwards out of the port at around 1pm on March 3, 2010. The sea swell steadily increased during the first hours of the voyage, enough to test those with less-experienced sea legs, but still nothing out of the ordinary.
At 4.20 pm, the ship ran without warning into a wall of water 8 metres or more in height. As far as events can be reconstructed, the boat’s pitch as it descended the wave’s lee tilted it into a second, and possibly a third, monster wave immediately behind.
Water smashed through the windows of a lounge on deck 5, almost 17 metres above the ship’s water line. Two passengers were killed instantly and 14 more injured.
Then, as suddenly as the waves had appeared, they were gone. The boat turned and limped back to Barcelona.
A few decades ago, rogue waves of the sort that hit the Louis Majesty were the stuff of salty sea dogs’ legends. No more. Real-world observations, backed up by improved theory and lab experiments, leave no doubt any more that monster waves happen – and not infrequently. The question has become: can we predict when and where they will occur?
Science has been slow to catch up with rogue waves. There is not even any universally accepted definition. One with wide currency is that a rogue is at least double the significant wave height, itself defined as the average height of the tallest third of waves in any given region.
What this amounts to is a little dependent on context: on a calm sea with significant waves 10 centimetres tall, a wave of 20 centimetres might be deemed a rogue.
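To make the working definition concrete, here is a rough Python sketch that computes the significant wave height from a record of individual wave heights and flags rogues; the Rayleigh-distributed test data and all function names are invented for illustration:

    import numpy as np

    def significant_wave_height(heights):
        """Mean height of the tallest third of waves (the definition above)."""
        h = np.sort(np.asarray(heights))
        top_third = h[-max(1, len(h) // 3):]
        return top_third.mean()

    def rogues(heights):
        """Waves at least double the significant wave height."""
        hs = significant_wave_height(heights)
        return [h for h in heights if h >= 2 * hs]

    # Toy sea state: Rayleigh-distributed wave heights, a standard idealization.
    heights = np.random.default_rng(1).rayleigh(scale=1.0, size=1000)
    print(significant_wave_height(heights))
    print(rogues(heights))   # often empty -- true rogues are rare

On a calm sea the same code would happily flag a 20-centimetre wave, which illustrates how context-dependent the threshold is.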
If that seems a little lackadaisical, consider that for a long time the models oceanographers used to predict wave heights suggested anomalously tall waves barely existed. These models rested on the principle of linear superposition: that when two trains of waves meet, the heights of the peaks and troughs at each point simply sum.
It was only in the late 1960s that Thomas Brooke Benjamin and J.E. Feir of the University of Cambridge spotted an instability in the underlying mathematics. When longer-wavelength waves catch up with shorter-wavelength ones, all the energy of a wave train can become abruptly concentrated in a few monster waves – or just one. Longer waves travel faster in the deep ocean, so this is a perfectly plausible real-world scenario.
The pair went on to test the theory in a then state-of-the-art, 400-metre-long towing tank, complete with wave-maker, at a UK National Physical Laboratory facility on the outskirts of London.
Near the wave-maker, which perturbed the water at varying speeds, the waves were uniform and civil. But about 60 metres on they became distorted, forming into short-lived, larger waves that we would now call rogues (though to avoid unwarranted splashing, the initial waves were just a few centimetres tall).
It took a while for this new intelligence to trickle through. “Waves become unstable and can concentrate energy on their own,” says Takuji Waseda, an oceanographer at the University of Tokyo in Japan. “But for a long time, people thought this was a theoretical thing that does not exist in the real oceans.”
Theory and observation finally crashed together in 1995 in the North Sea, about 150 kilometres off the coast of Norway. New Year’s Day that year was tumultuous around the Draupner sea platform, with a significant wave height of 12 metres.
At around 3.20pm, however, accelerometers and strain sensors mounted on the platform registered a single wave towering 26 metres over its surrounding troughs. According to the prevailing wisdom, this was a once-in-10,000-year occurrence.
The Draupner wave ushered in a new era of rogue-wave science, says physicist Ira Didenkulova at Tallinn University of Technology in Estonia. In 2000, the European Union initiated the three-year MaxWave project. During a three-week stretch early in 2003, it used boat-based radar and satellite data to scan the world’s oceans for giant waves, turning up 10 that were 25 metres or more tall.
We now know that rogue waves can arise in every ocean. The North Atlantic, the Drake Passage between Antarctica and the southern tip of South America, and the waters off the southern coast of South Africa are particularly prone. Rogues possibly also occur in some large freshwater bodies such as the Great Lakes of North America.
That casts historical accounts in a new light and rogue waves are now thought to have had a part in the unexplained losses of some 200 cargo vessels in the two decades preceding 2004.
So rogue waves exist, but what makes one in the real world?
Miguel Onorato at the University of Torino, Italy, has spent more than a decade trying to answer that question.
His tool is the non-linear Schrödinger equation, which has long been used to second-guess unpredictable situations in both classical and quantum physics. Onorato uses it to build computer simulations and guide wave-tank experiments in an attempt to coax rogues from ripples.
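For readers who want the mathematics, one common deep-water form of that equation, written for the slowly varying complex envelope $A(x,t)$ of the wave train, is the following; sign and scaling conventions differ between authors, so treat this as representative rather than definitive:

$$ i\left(\frac{\partial A}{\partial t} + c_g\,\frac{\partial A}{\partial x}\right) - \frac{\omega_0}{8k_0^2}\,\frac{\partial^2 A}{\partial x^2} - \frac{\omega_0 k_0^2}{2}\,|A|^2 A = 0 $$

Here $k_0$ and $\omega_0$ are the wavenumber and frequency of the underlying carrier wave and $c_g = \omega_0/2k_0$ is the deep-water group velocity. The cubic $|A|^2 A$ term is the non-linearity that lets energy refocus within a wave train – the mechanism behind the Benjamin-Feir instability.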
Gradually, Onorato and others are building up a catalogue of real-world rogue-generating situations. One is when a storm swell runs into a powerful current going the other way. This is often the case along the North Atlantic’s Gulf Stream, or where sea swells run counter to the Agulhas current off South Africa. Another is a “crossing sea”, in which two wave systems – often one generated by local winds and a sea swell from further afield – converge from different directions and create instabilities.
Crossing seas have long been a suspect. A 2005 analysis used data from the maritime information service Lloyd’s List Intelligence to show that, depending on the precise definition, up to half of ship accidents chalked up to bad weather occur in crossing seas.
In 2011, the finger was pointed at a crossing sea in the Draupner incident, and Onorato thinks it might also have been the Louis Majesty’s downfall. When he and his team fed wind and wave data into his model to “hindcast” the state of the sea in the area at the time, it indicated that two wave trains were converging on the ship, one from a north-easterly direction and one more from the south-east, separated by an angle of between 40 and 60 degrees.
Simpler situations might generate rogues, too. Last year, Waseda revisited an incident in December 1980 when a cargo carrier loaded with coal lost its entire bow to a monster wave with an estimated height of 20 metres in the “Dragon’s Triangle”, a region of the Pacific south of Japan that is notorious for accidents.
A Japanese government investigation had blamed a crossing sea, but when Waseda used a more sophisticated wave model to hindcast the conditions, he found it likely that a strong gale had poured energy into a single wave system far larger than conventional models allowed.
He thinks such single-system rogues could account for other accidents, too – and that the models need further updating. “We used to think ocean waves could be described simply, but it turns out they’re changing at the same pace and same time scale as the wind, which changes rapidly,” he says.
In 2012, Onorato and others showed that the models even allow for the possibility of “super rogues” towering as much as 11 times the height of the surrounding seas, a possibility since borne out in water-tank experiments.
With climate change potentially whipping up more intense storms, such theoretical possibilities are becoming a serious practical concern. From 2009 to 2013, the EU funded a project called Extreme Seas, which brought shipbuilders together with academic researchers including Onorato, with the aim of producing boats with hulls designed to withstand rogue waves.
That is a high-cost, long-term solution, however. The best defence remains simply knowing when a rogue wave is likely to strike. “We can at least warn that sea states are rapidly changing, possibly in a dangerous direction,” says Waseda.
Various indices have been developed that aim to convert raw satellite and sea-state data into this sort of warning. One of the most widely used is the Benjamin-Feir index, named after the two pioneers of rogue-wave research. Formulated in 2003 by Peter Janssen of the European Centre for Medium-Range Weather Forecasts in Reading, UK, it is calculated for sea squares 20 kilometres by 20 kilometres, and is now incorporated into the centre’s twice-daily sea forecasts.
“Ship routing officers use it as an indicator to see whether they should go through a particular area,” says Janssen.
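To give a flavour of how such an index is built, here is a rough Python sketch of a Benjamin-Feir-style indicator, formed as the ratio of wave steepness to relative spectral bandwidth. The normalization and the toy numbers below are illustrative assumptions, not Janssen's exact operational formula:

    import numpy as np

    def bfi_like(hs, k0, bandwidth_ratio):
        """Benjamin-Feir-style index: steepness over relative spectral width.
        hs: significant wave height (m); k0: peak wavenumber (1/m);
        bandwidth_ratio: delta_omega / omega_0. The sqrt(2) factor is one
        common textbook normalization, not necessarily the ECMWF definition."""
        steepness = k0 * hs / 2                 # a common steepness estimate
        return np.sqrt(2) * steepness / bandwidth_ratio

    # Toy numbers: 12 m significant height, 160 m peak wavelength, 10% bandwidth.
    print(bfi_like(12.0, 2 * np.pi / 160.0, 0.10))

Broadly, the larger the index, the more room non-linear focusing has to act before dispersion smears the wave groups apart, so high values flag seas worth routing around.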
The ultimate aim would be to allow ships to do that themselves. Most large ocean-going ships now carry wide-sweeping sensors that determine the heights of waves by analysing radar echoes.
Computer software can turn those radar measurements into a three-dimensional map of the sea state, showing the size and motions of the surrounding swell.
It would be a relatively small step to include software that can flag up indicators of a sea about to go rogue, such as quickly changing winds or crossing seas. Such a system might let crew and passengers avoid at-risk areas of a ship.
The main bar to that happening is computing power: existing models can’t quite crunch through all the fast-moving fluctuations of the ocean rapidly enough to generate fine-grained warnings in real time.
For Waseda, the answer is to develop a central early warning system, such as those that operate for tsunamis and tropical storms, to inform ships about to leave port. Thanks to our advances in understanding a phenomenon whose existence was doubted only decades ago, there is no reason now why we can’t do that for rogue waves, says Waseda.
“At this point it’s not a shortage of theory, but a shortage of communication.”
- New Scientist
Seven giants
In 2007, Paul Liu at the US National Oceanic and Atmospheric Administration compiled a catalogue of more than 50 historical incidents probably associated with rogue waves. Here are some of the most significant:
1498 Columbus recounts how, on his third expedition to the Americas, a giant wave lifts up his boats during the night as they pass through a strait near Trinidad. Supposedly using Columbus’s words, to this day this area of sea is called the Bocas del Dragón – the Mouths of the Dragon.
1853 The Annie Jane, a ship carrying 500 emigrants from England to Canada, is hit. Only about 100 make it to shore alive, to Vatersay, an island in Scotland’s Outer Hebrides.
1884 A rogue wave off West Africa sinks the Mignonette, a yacht sailing from England to Australia. The crew of four escape in a dinghy. After 19 days adrift, the captain kills the teenage cabin boy to provide food for the other three survivors.
1909 The steamship SS Waratah disappears without trace with over 200 people on board off the coast of South Africa – a swathe of sea now known for its high incidence of rogue waves.
1943 Two monster waves in quick succession pummel the Queen Elizabeth cruise liner as it crosses the North Atlantic, breaking windows 28 metres above the waterline.
1978 The German merchant navy supertanker MS München disappears in the stormy North Atlantic en route from Bremerhaven to Savannah, Georgia, leaving only a scattering of life rafts and emergency buoys.
2001 Just days apart, two cruise ships – the Bremen and the Caledonian Star – have their bridge windows smashed by waves estimated to be 30 metres tall in the South Atlantic.
- New Scientist