Standing With Israel
Arieh Warshel
University of Southern California professor Arieh Warshel at an Oct. 9 press conference for his Nobel Prize in chemistry. (USC Photo/Gus Ruelas)
What is behind Israel’s recent string of Nobel Prize winners?
It could be that Israelis have a practical way of thinking and strong strategies for solving difficult problems, says Arieh Warshel, who earlier this month was awarded the Nobel Prize in chemistry for his role in developing computer programs that simulate “large and complex chemical systems and reactions.”
An Israeli-American professor at the University of Southern California in Los Angeles, Warshel focused on enzymatic reactions within an all-Jewish team of three researchers sharing the prize. His fellow winners are colleagues Michael Levitt, a professor at the Stanford University School of Medicine who holds Israeli, British and American citizenships; and Martin Karplus, a professor at Harvard University and the University of Strasbourg who holds American and Austrian citizenships.
Warshel and Levitt join a long list of recent Israeli Nobel laureates, particularly in chemistry. Prof. Dan Shechtman of the Technion Israel Institute of Technology won the chemistry prize in 2011, and Ada E. Yonath of the Weizmann Institute of Science won in 2009.
In an exclusive interview, Warshel said that his main motivation as a scientist is “to be the first to solve how things are working.”
“If the motivation is to make money ... those people won’t do original science,” Warshel said, calling this phenomenon “equally bad in Israel and in America.”
The announcement that Israeli citizens working abroad, like Warshel and Levitt, had won the Nobel Prize sparked a media debate in Israel on the country’s “brain drain,” where promising young professionals leave the Jewish state for better academic and industrial opportunities.
When it comes to governments funding science, Warshel believes investing in many smaller research projects is better than investing in “one flashy project.” The work of Warshel and his counterparts has long been supported by American federal science grants, but resources are still limited, and in both Israel and the U.S. it has become “less and less likely that the best idea will be funded,” he said.
During the 1960s, in the laboratory of Prof. Shneior Lifson at the Weizmann Institute of Science in Rehovot, Warshel and Levitt developed a computer model describing molecules classically, as composed of atoms, and predicted the structure of proteins under various conditions.
According to Warshel, atoms can be described as balls bonded by springs. You can model a molecule by taking actual balls and connecting them with real springs. Then you can follow how the balls, representing the atoms in a molecule, are connected, vibrate and move.
In a man-made model, the balls would soon fall apart because of gravity, whereas in molecules, the gravitational force is negligible. The alternative is to build a computer model that simulates the behavior of a real molecule. Assuming the atoms in the molecule behave according to Newton’s laws of physics, which are expressed by classical mechanics theory, and encoding the equations that describe Newtonian movement into the computer program, the behavior of the molecular system can be simulated.
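To make the balls-and-springs picture concrete, here is a minimal sketch of the kind of Newtonian simulation being described: two “atoms” joined by a harmonic spring, marched forward step by step. The masses, spring constant, and step size are illustrative values only, not taken from Warshel’s work.

```python
import numpy as np

# Illustrative numbers only (arbitrary units), not parameters of any real molecule.
m = 1.0        # mass of each "ball" (atom)
k = 100.0      # stiffness of the "spring" (bond)
r0 = 1.0       # equilibrium bond length
dt = 0.001     # integration time step

x = np.array([0.0, 1.2])   # two atoms, bond started slightly stretched
v = np.zeros(2)

def forces(x):
    """Harmonic bond: each atom is pulled back toward the equilibrium length r0."""
    stretch = (x[1] - x[0]) - r0
    f = k * stretch
    return np.array([f, -f])   # equal and opposite forces on the two atoms

# March Newton's equations forward with the velocity-Verlet integrator.
f = forces(x)
for _ in range(5000):
    x += v * dt + 0.5 * (f / m) * dt ** 2
    f_new = forces(x)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new

print("bond length after the run:", x[1] - x[0])   # vibrates around r0
```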
One cannot, however, describe the breaking of a chemical bond with classical mechanics, as a ball and a spring. Warshel’s particular interest has been in modeling enzymatic reactions. Enzyme molecules are complex proteins that exist in most living organisms and engage in catalysis, which often involves the breaking of chemical bonds.
The Schrödinger equation, formulated in the 1920s by the Austrian physicist Erwin Schrödinger, describes how electrons are attracted to the nuclei of atoms. From this development evolved the field of quantum mechanics—an essentially different way of looking at a molecule, from the perspective of subatomic particles, like electrons, that exist inside it.
“There are not only springs of bonds to classical atoms, there are also the effects of the charges on classical atoms,” Warshel said.
Quantum mechanics computer modeling creates a map of an entire environment depicting where the electrons are likely to be and allowing researchers to predict what may happen next. But using quantum mechanics to calculate and model an entire environment of atoms that will interact with themselves and with the electrons becomes difficult for medium- or large-sized molecular systems. It would take years to model larger systems in this way, so Warshel and his fellow researchers developed improved computer modeling systems that look at the molecule both in terms of its classical particles (atoms) and its subatomic particles, like electrons.
“When you do it, you actually start to understand how enzymes work,” Warshel explained.
The Royal Swedish Academy of Sciences called the work by the three scientists “groundbreaking in that they managed to make Newton’s classical physics work side by side with the fundamentally different quantum physics. ... Previously, chemists had to choose to use either/or.”
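In rough outline, the hybrid approach the prize recognizes partitions a system into a small region treated quantum-mechanically and a large surrounding region treated classically, plus a coupling between the two. The sketch below shows only that bookkeeping, with toy stand-in energy functions; real hybrid (“QM/MM”) calculations compute these terms from electronic-structure theory and empirical force fields.

```python
# Toy stand-ins: the real terms come from electronic-structure theory (QM part),
# an empirical force field (MM part), and an electrostatic coupling between them.
def qm_energy(qm_atoms):
    return -1.0 * len(qm_atoms)          # placeholder for the quantum calculation

def mm_energy(mm_atoms):
    return -0.1 * len(mm_atoms)          # placeholder for the balls-and-springs model

def coupling_energy(qm_atoms, mm_atoms):
    return 0.01 * len(qm_atoms) * len(mm_atoms)   # placeholder for the interaction

def qmmm_energy(qm_atoms, mm_atoms):
    """Hybrid energy: quantum region + classical region + coupling between them."""
    return (qm_energy(qm_atoms)
            + mm_energy(mm_atoms)
            + coupling_energy(qm_atoms, mm_atoms))

# Example: a small reactive site embedded in a large classical environment.
active_site = ["C", "O", "H"]             # treated quantum-mechanically
environment = ["protein atom"] * 500      # treated classically
print(qmmm_energy(active_site, environment))
```

The point of the split is that the expensive quantum treatment is confined to the few atoms where bonds actually break, such as an enzyme’s active site, while the thousands of surrounding atoms are handled with the cheap classical model.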
There are also practical, real-world implications for the research, both in the commercial world and in medicine, Warshel said.
For example, laundry detergent often has an enzyme that helps it digest dirt from clothes. Hypothetically, the enzyme protein could digest too slowly or stop working when the temperature rises.
“Since this program allows you to understand exactly how [an] enzyme is working, you [could know how to] change some ... amino acids in the enzyme and make it work better,” Warshel said.
There are also enzyme proteins in the body that can mutate and cause cancerous cell division, according to Warshel.
“If you understand how they work, you can try to find a way to make the broken enzyme not be so effective. In principle, you could look for a drug that when it is bound to the enzyme, it will make it stop working,” he said.
A similar scenario involves HIV. When a new drug is developed that blocks an enzyme protein in the virus, the virus changes its sequence so that the drug no longer binds well. But it is possible to look at the enzymes in the virus and analyze both the mutations by which the virus evades the drug and the mutations that keep its normal chemistry going. Based on those two factors, researchers can anticipate the virus’s next move. “It’s like playing chess,” Warshel said.
“In the cases where you want to understand how the virus or parts of it change in order to have resistance to drugs, knowing to model both the chemistry and the binding, and also knowing to model how stable the enzyme will be, is useful, and I believe would be more useful in the future,” he said.
According to statistics recently reported by Haaretz, Jews comprise 0.2 percent of the world population, yet 22 percent of all Nobel laureates have been Jewish. Warshel suggests that one factor behind this phenomenon is described in the book The Chosen Few: How Education Shaped Jewish History by Maristella Botticini and Zvi Eckstein, which stipulates that the survival of the Jewish faith—and by extension the survival of the Jewish people—has depended for centuries on the ability to read the Torah, enabling Jews to ultimately broaden their own education and develop practical skills. This strongly differentiated Jews from many other populations, which for centuries were generally illiterate.
In Warshel’s estimation, there is yet another theory behind Jewish scholarly and scientific success—one that is simpler and hits closer to home.
“There is the idea of the Jewish mother,” he quipped.
More “tweets”
Update (Feb. 4): After Luke Muelhauser of MIRI interviewed me about “philosophical progress,” Luke asked me for other people to interview about philosophy and theoretical computer science. I suggested my friend and colleague Ronald de Wolf of the University of Amsterdam, and I’m delighted that Luke took me up on it. Here’s the resulting interview, which focuses mostly on quantum computing (with a little Kolmogorov complexity and Occam’s Razor thrown in). I read the interview with admiration (and hoping to learn some tips): Ronald tackles each question with more clarity, precision, and especially levelheadedness than I would.
Another Update: Jeff Kinne asked me to post a link to a forum about the future of the Conference on Computational Complexity (CCC)—and in particular, whether it should continue to be affiliated with the IEEE. Any readers who have ever had any involvement with the CCC conference are encouraged to participate. You can read all about what the issues are in a manifesto written by Dieter van Melkebeek.
Yet Another Update: Some people might be interested in my response to Geordie Rose’s response to the Shin et al. paper about a classical model for the D-Wave machine.
“How ‘Quantum’ is the D-Wave Machine?” by Shin, Smith, Smolin, Vazirani goo.gl/JkLg0l – was previous skepticism too GENEROUS to D-Wave?
D-Wave not of broad enough interest? OK then, try “AM with Multiple Merlins” by Dana Moshkovitz, Russell Impagliazzo, and me goo.gl/ziSUz9
“Remarks on the Physical Church-Turing Thesis” – my talk at the FQXi conference in Vieques, Puerto Rico is now on YouTube goo.gl/kAd9TZ
Cool new SciCast site (scicast.org) lets you place bets on P vs NP, Unique Games Conjecture, etc. But glitches remain to be ironed out
79 Responses to “More “tweets””
1. Michael Says:
Hi, your third link is not functioning.
2. Scott Says:
Godammit, happens every time! Thanks, sorry, and fixed.
3. Michael Says:
Haha, no problem. Maybe run through all of them in a post preview before publishing each post?
4. Sol Warda Says:
Scott: How about this paper from your own Physics Dept. (correct me please if I’m wrong):
And this one from Harvard:
5. Scott Says:
Sol: Yep, them too!
6. Bram Cohen Says:
Color me not surprised by that first one
7. Sol Warda Says:
Bram: That’s an IBM conspiracy!!!!.
8. Sid Says:
Perhaps a stupid question:
Suppose we assume that human brains are simulatable by Turing machines (I think this is known?). Now, if you go out into the world, do experiments, and build a physical theory, then hopefully the only reason you built the theory in the first place was so that you can compute predictions of the theory. Indeed, we scorn any theory which is non-predictive.
And since this computation happened in the brain, then that means that any physical theory humans might care about has to be computable by Turing machines. Doesn’t this prove the Church-Turing hypothesis?
Of course, this leaves open the possibility that the Universe has some non-Turing computable law, but then we’d never find it because we would never build that theory in the first place.
What am I missing?
9. Scott Says:
Sid #8: Suppose someone handed you a box with flashing lights and aluminum foil that they claimed solved the halting problem. How could you test the box? Well, suppose you gave it thousands of Turing machines for which you already knew whether they halted or not—including TMs corresponding to highly-nontrivial mathematical statements (like Fermat’s Last Theorem), as well as TMs that you knew halted but only after quadrillions of steps—and for each one, the box immediately gave you what you knew to be the right answer.
In that case, maybe still your favored hypothesis would be that the box was “merely” a virtuoso piece of programming, rather than solving the halting problem in complete generality. However, suppose you then opened the box and discovered, to your amazement, that inside it was creating and evaporating tiny black holes. And suppose that, in order to describe the observed phenomena inside the box, physicists were forced to invent a new quantum theory of gravity that then also predicted the ability to solve the halting problem (as in Roger Penrose’s speculations).
If that happened, I’d say we had a pretty decent empirical case that the Physical Church-Turing Thesis was false. (FWIW, my personal prediction is that it won’t happen.)
Of course, if you care about the Physical Extended Church-Turing Thesis, then the situation is simpler in a way. If you’re willing to believe (say) that factoring is not in P, then someone “merely” needs to hand you a box that factors large numbers efficiently, which is a problem for which you can always verify the solutions yourself (you don’t need to generate special instances for which you already know the answer). Such a box could be, for example, a quantum computer.
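A concrete illustration of the verification point: a claimed factorization can be checked in negligible time, without trusting the box that produced it. A minimal sketch (trial-division primality testing, so only for toy-sized numbers):

```python
def is_prime(n):
    """Trial-division primality test; fine for an illustration, not for 1024-bit numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def check_factoring_claim(N, claimed_factors):
    """Accept a claimed factorization only if every factor is a nontrivial prime
    and the product is exactly N. Checking is easy even when factoring is hard."""
    if not claimed_factors or any(f <= 1 or f >= N for f in claimed_factors):
        return False
    product = 1
    for f in claimed_factors:
        if not is_prime(f):
            return False
        product *= f
    return product == N

print(check_factoring_claim(3131, [31, 101]))   # True: 31 * 101 = 3131
print(check_factoring_claim(3131, [7, 447]))    # False: not a valid factorization
```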
10. Robert Says:
Irrelevantly, what is the demented gibberish which comes up when one clicks “Random page” in the Complexity Zoo?
11. Rahul Says:
Will D-Wave comment on #2? Can we take their silence to imply they agree with #2?
12. Scott Says:
Robert #10: Dunno! I can ask my “tech support staff” to look into it, though if the Zoo is otherwise working, it’s probably not a high priority…
13. Scott Says:
Rahul #11: Someone from D-Wave will have to field that question, should they choose…
14. Douglas Knight Says:
Robert, it is spam.
15. Jair Says:
Hi Scott! I’m a longtime fan of your blog and I enjoyed your book as well. I was wondering if it is generally accepted that the class P encompasses all of the efficiently-solvable problems in the classical world? For example, even assuming factoring is not in P, is it absolutely necessary to have a quantum computer to break RSA? What evidence is there?
16. Scott Says:
Jair: It’s a good question. It’s logically possible that there could be something other than quantum mechanics that would also contradict the Extended Church-Turing Thesis, but I personally would say that we don’t have any convincing example of such a thing. Almost all the ideas I’ve seen are based on one of two things:
(1) Making measurements to exponential precision, speeding up a computation by an exponential amount (maybe using relativistic time dilation), etc. Ironically, a fundamental barrier on all of these ideas is placed by … wait for it … quantum mechanics! More specifically, the holographic principle from quantum gravity, which upper-bounds the number of qubits and number of computational steps that you can ever have in a bounded region of spacetime, without the region collapsing to form a black hole.
(2) Phenomena like protein folding, soap bubbles, spin glasses, etc., which can look to the uninitiated like they “instantaneously” solve NP-hard optimization problems. In all of these cases, the catch is that, if you set up one of the truly hard instances of the NP-hard problem, then you’d simply get stuck in a local optimum and never find the global optimum.
For more about these themes, see my old essay NP-complete Problems and Physical Reality, or this CS Theory StackExchange answer.
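A toy stand-in for point (2): random ±1 couplings form a tiny “spin glass,” and a greedy single-spin-flip relaxation plays the role of the physical system settling down. This is only a schematic illustration with made-up couplings, not a simulation of any real material:

```python
import itertools, random

random.seed(1)
n = 14   # small enough to brute-force the true optimum for comparison

# Random +/-1 couplings between every pair of "spins" -- a toy spin glass.
J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    return -sum(J[i, j] * s[i] * s[j] for (i, j) in J)

# Greedy single-spin-flip relaxation: keep any flip that lowers the energy,
# the way a physical system slides downhill, and stop when no flip helps.
s = [random.choice([-1, 1]) for _ in range(n)]
improved = True
while improved:
    improved = False
    for v in range(n):
        before = energy(s)
        s[v] *= -1                  # try flipping spin v
        if energy(s) < before:
            improved = True         # keep the downhill move
        else:
            s[v] *= -1              # undo it

ground = min(energy(cfg) for cfg in itertools.product([-1, 1], repeat=n))
print("energy reached by greedy relaxation:", energy(s))
print("true ground-state energy:           ", ground)
# The relaxation typically halts in a local minimum above the true ground state
# (on an instance this tiny it can occasionally get lucky and match it).
```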
17. Jon Lennox Says:
So I had a thought about your FQXi talk, particularly when you discussed the limitations of “small causes have small effects” computers, and weird laws of physics that could lead to weird models of computation.
The thought that came to mind was quantum computers without decoherence. Intuitively, it seems like unitarity should lead to a “small causes can’t have big effects” property like your Digi-Comp II.
But this isn’t true, because a quantum computer can simulate a classical one, and as long as you give it input that isn’t in a superposition decoherence doesn’t matter. So where does the unitarity constraint go wrong? And is there any way we could strengthen it (e.g. by limiting what kind of input states are allowed, or something) that makes a purely unitary quantum computer look like your Digi-Comp?
18. Scott Says:
Jon #17: Good question! The short answer is that there’s a confusion of levels here. Yes, at the level of the amplitudes (i.e., the entire wavefunction), the Schrödinger equation is precisely unitary, meaning that “small causes” (i.e., rotating a state by a small angle in Hilbert space) can only ever have small effects. (Indeed, if we wrote out the “state” in a classical probabilistic theory as a huge vector of probabilities, then we’d find that precisely the same property held.)
By contrast, at the level of the actual observables—e.g., the positions or momenta of the actual particles, assuming one or the other to be relatively definite—quantum mechanics is just as “nonlinear” as any classical theory is. So for example, changing the position of a single particle by a tiny amount can completely change an experiment’s outcome.
(Keep in mind that even moving a single particle a tiny distance, can change our original quantum state to a new state that’s almost orthogonal to the first state! This simple mathematical fact is really the core of what’s going on here.)
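A small numerical sketch of the parenthetical point, using one-dimensional Gaussian wavepackets in arbitrary units (nothing here is specific to any real experiment):

```python
import numpy as np

sigma = 1.0                          # wavepacket width (arbitrary units)
x = np.linspace(-40.0, 40.0, 20001)
dx = x[1] - x[0]

def packet(center):
    """Normalized one-dimensional Gaussian wavepacket centered at `center`."""
    psi = np.exp(-(x - center) ** 2 / (4.0 * sigma ** 2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

psi0 = packet(0.0)
for d in [0.5, 2.0, 5.0, 10.0]:
    overlap = np.sum(np.conj(psi0) * packet(d)) * dx
    print(f"shift of {d:4.1f} widths -> |<psi0|psi_d>| = {abs(overlap):.2e}")
# The overlap falls off like exp(-d^2 / (8 sigma^2)): moving the particle by a
# few widths already makes the new state nearly orthogonal to the old one.
```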
Yes, you could define a quantum analogue of the Digi-Comp II. In fact BosonSampling, which I’ve been studying for the past 5 years or so, feels close to such an analogue (though the analogy is imperfect: for example, the DigiComp has writable internal state at the gates, whereas BosonSampling doesn’t). Indeed that’s one of the things that got me interested in the DigiComp in the first place. But the restrictions that you have to impose on universal QC to get BosonSampling are really orthogonal (har, har) to unitarity.
19. Rahul Says:
Naive question: Say, one day in the future we really do have a practical scalable super-duper, working QC that can factor large integers. Say one capable of breaking a 1024 bit RSA key.
How easy or straightforward is it to get this machine to solve other problems, say, a practical protein folding problem. In practice, is it a trivial algorithmic jump from an integer factorising QC to other hard problems of practical interest? Or not?
20. Scott Says:
Rahul #19: If you had a QC capable of breaking 1024-bit RSA keys, then in practice (and as far as we know today), it would probably be a scalable, universal QC with full quantum fault-tolerance. So then, yes, you could reprogram it to do protein folding or whatever else, in much the same way you could program your classical computer to do more than whatever it came with out of the box.
Another way to say it is that we have very few examples of “intermediate” QC models that do give you factoring but don’t give you universality. Actually, there’s one big exception to that: Cleve and Watrous showed in 2000 that factoring can be done using logarithmic-depth quantum circuits, whereas we certainly don’t know that to be true for arbitrary quantum computations. In practice, though, it’s hard for me to see how you could get fault-tolerant log-depth QC (with arbitrary couplings, as is needed here), without getting fault-tolerant polynomial-depth QC as well.
21. Rahul Says:
Scott #20:
Thanks. Interesting. So is reprogramming a QC to solve a different problem really analogous to reprogramming a conventional computer? Like, we needed Shor to come up with his algorithm, quite a pathbreaking / non-obvious step, before we knew how to use QCs to factor integers.
Is something major of that nature needed before QC’s can solve protein folding? Or is the jump really as trivial as “reprogramming” makes it sound.
How “universal” is a universal QC from a practical viewpoint? Asides of the scaling / error correction is it fairly clear what the architecture of a general purpose QC will be & what “reprogramming a QC” will actually be like?
Again, sorry if these are naive questions.
22. Sid Says:
Scott #9:
I agree that if I found a black box which instantaneously solved the halting problem, then it’d be strong evidence for the violation of the Church-Turing hypothesis.
But it’s hard for me to wrap my head around the possibility that we would be able to construct a theory from experiments that would say that the halting problem is solvable without actually telling us how to algorithmically solve the halting problem.
As an analogy, in the violation of the Extended Church-Turing hypothesis, we didn’t get a black-box from nature. Instead, Shor used quantum mechanics to tell us exactly how we could do it.
I guess my question is: Are there “inferable” theories which would predict the ability to solve the halting problem? I realize that “inferable” doesn’t have a precise meaning, but it means something in the neighborhood of “theories that can be inferred using Turing machines in finite time using finite data”. Does this make sense?
23. Scott Says:
Rahul #21: In principle, it’s “trivial” to simulate protein folding using a QC: just run the Schrödinger equation forward! (Well, there are some tricks involved in mapping a continuous physics problem, involving particles and fields, onto a discrete system of qubits and gates. But those tricks, which go by names like “Trotterization,” are well-understood as well.) In other words, no feat of insight like Shor’s is needed to see that such systems can indeed be simulated in quantum polynomial time.
In practice, on the other hand, the polynomial blowups from the known simulation methods are often so large as to make the simulations impractical, even for simulating pretty simple molecules on a pretty large QC. See for example this recent paper. Personally, I’m confident that the polynomials can be brought down with brainpower and hard work, rather than reflecting some sort of fundamental barrier—simply because that’s almost always been true in the past, and there’s no good reason for it not to be true here. But the work to make quantum simulations practically-efficient will need to be invested, and no, it won’t be trivial.
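For readers curious what “Trotterization” means concretely: the evolution e^{-i(A+B)t} under a Hamiltonian split into pieces A and B is approximated by alternating many short evolutions under A and B separately. A small sketch with random 4×4 Hermitian matrices, purely illustrative and nothing like a protein simulation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(dim):
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)   # two pieces of a toy Hamiltonian
t = 1.0
exact = expm(-1j * (A + B) * t)                   # the evolution we want to approximate

for n in [1, 10, 100, 1000]:
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    approx = np.linalg.matrix_power(step, n)      # alternate A- and B-evolutions n times
    print(f"{n:5d} Trotter steps: error {np.linalg.norm(approx - exact):.2e}")
# The first-order splitting error shrinks roughly like 1/n: more, shorter steps
# give better accuracy at the cost of a deeper circuit.
```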
As for architectures: the right way to think about programming a QC, is that you’re really programming a classical computer that in turn generates a sequence of laser pulses, etc. to control the QC.
(Though some people will try to tell you otherwise—because they want to write papers about it—there’s really no reason to worry about control flow, loops, conditionals, etc. in a QC. The classical control system can just handle all that stuff for you!)
So, it’s really no more difficult than classical software development plus quantum circuit synthesis, both of which we more-or-less know how to do.
24. Fred Says:
In your talk about the Church-Turing thesis you mention cooling as an interesting limitation on increasing computation speed, but even with an infinitely fast processing unit, you would still need to read/write data to some memory, and moving information around is limited by the speed of light while storing information requires space. So memory is organized around the processing node in spheres of increasing radius – smaller and faster memory near the core and bigger but slower memory as the radius increases (memory caching in real computers approximates this speed/size trade-off).
25. Scott Says:
Fred #24: If it hadn’t been for the holographic / Bekenstein / black hole limitations coming from quantum gravity, you could’ve imagined that the memory cells could be made arbitrarily small to get around the speed-of-light problem.
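For a sense of the numbers involved: the holographic bound caps the information in a region by its surface area in Planck units, roughly A/(4·l_p²·ln 2) bits. A back-of-the-envelope sketch, order of magnitude only:

```python
import math

l_p = 1.616e-35           # Planck length in meters
R = 1.0                   # radius of a hypothetical 1-meter memory sphere
A = 4 * math.pi * R ** 2  # surface area of the sphere

max_bits = A / (4 * l_p ** 2 * math.log(2))
print(f"holographic bound for a 1 m sphere: about {max_bits:.1e} bits")
# Roughly 10^70 bits: enormous, but finite, so memory cells cannot be shrunk
# without limit to dodge the speed-of-light latency argument.
```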
26. Jair Says:
Thanks for taking the time to answer my question, Scott – that was helpful.
27. Sam Hopkins Says:
Do you consider it as a kind of accident that Turing machines are a good model of what is physically possible? For instance, logicians consider all kinds of models of computation that allow access to oracles that solve undecidable problems, but these would be bad models of what is physically possible. Why should we expect the entscheidungsproblem to have anything to do with the physical world?
28. Scott Says:
Sid #22: I don’t see any contradiction whatsoever in our brains being able to conceive, play with, and even test physical theories that imply an ability to solve computational problems that our brains themselves can’t solve, or can’t solve nearly as efficiently. Indeed, quantum mechanics seems to be an example of such a theory: humans discovered it, and discovered Shor’s algorithm, even though we ourselves can’t factor integers in polynomial time. If we discovered a physical theory that said, for example, that an infinite number of computational steps could be performed in one reference frame, in only a finite amount of time as measured by a second reference frame, and then the results could be communicated from the computer in the first frame to an observer in the second frame, I don’t see how that would be conceptually much different. (Again, though, I don’t think we actually live in the latter type of universe, because of Bekenstein-type limitations.)
29. Scott Says:
Sam #27: No, I don’t consider it an accident. First of all, the whole thing that convinced people of the Church-Turing Thesis in the first place is that Turing machines turn out to be equivalent to zillions of very different-looking models of computation that one can define (lambda calculus, RAM machines, cellular automata, etc). So then the only remaining question is: why did people in the 1930s converge on this large equivalence class of models, rather than (say) that equivalence class augmented by the ability to solve its own halting problem?
I can think of two answers to that question, both of them plausible to me.
The first answer is that the logicians (Turing, Gödel, Church, Kleene, Post, Rosser) who converged on this notion of computation, were implicitly guided by their experience of the physical world—or at the very least, by their experience of their own thought processes! And human thought processes (or the operation of humanly-buildable artifacts) simply don’t involve things like carrying out an infinite number of logical operations in a finite amount of time.
The second answer is that, while you can define Turing machines with oracles for the halting problem, etc., there’s some objective sense in which those things are less “natural” or “basic” than Turing machines themselves. In other words, I’m ready to accept that discrete operations on bits, iterated a finite number of times, are really “special” in the universe of mathematical concepts, alongside the primes, the complex numbers, groups, and all sorts of other concepts that one can generalize in every imaginable direction, but that seem more special than their generalizations.
30. Ashley Says:
For those interested, here is a reply from Geordie Rose to the Shin, Smith, Smolin, and Vazirani preprint.
The specific model proposed in Shin et al. focuses only on one experiment for which there was no expectation of an experimental difference between quantum and classical models and completely (and from my perspective disingenuously) ignores the entire remainder of the mountains of experimental data on the device. For these reasons, the Shin et al. results have no validity and no importance.
31. Fernando Says:
D-wave’s answer to Schin et al. paper:
Comments on that? Socks?
32. Fred Says:
Scott, in the talk and in #29 you mention hypothetical computation models which would include the ability to solve the halting problem.
Why do you pick this ability specifically?
Is it correct that being able to figure out whether any arbitrary program can halt or not means that most open mathematical questions and NP problems could then be solved?
(by encoding them as a decision problem in a program and then analyzing whether the program halts or not).
Is it interesting because the halting problem itself is an example of an NP-hard problem that’s not NP-complete?
33. Rahul Says:
I just loved Luke Muelhauser for asking this question:
Even more I loved Ronald de Wolf for answering it straight up, no evasion, no vagueness, no waffling.
Quite high, let’s say probability greater than 2/3.
Clarity & precision indeed. I’d give an arm & leg to hear other QC experts to answer just this one question.
34. Scott Says:
Rahul: OK, I’ll give a 1/8 probability. This is one place where I guess Ronald and I disagree (but only about numbers, not about general principles). Now, may I have your arm and leg? 🙂
[However, on further introspection, part of what goes into my low estimate is game-theoretic: I’d much rather guess that we won’t have scalable QC in the next 20 years and then be pleasantly surprised, than guess that we will have it and then be disappointed.]
35. Rahul Says:
Scott #34:
Ah! Can we have your non-game theoretic estimate too? What would you guess then? As high as Ronald de Wolf’s 2/3?
Paging Gil Kalai, Peter Shor and some others that I’ve seen on this blog. Guys can we have your estimates too? I’d love to see more numbers from the experts. (sorry, my arm & leg are already pawned off to Scott 🙂 )
36. Scott Says:
Fred #32: There are plenty of problems that are NP-hard, presumably not in NP, but still computable—for example, the PSPACE- and EXP-complete problems. However, if you want to do something Turing-uncomputable, then it’s very hard to avoid solving the halting problem.
Mathematically, it’s possible: we’ve known since Kleene-Post and Friedberg-Muchnik in the 1950s that there are problems “intermediate” between computable and the halting problem, and it’s also clear that there are problems “incomparable” to the halting problem (e.g., random Turing degrees). However, I don’t know how to get either of those weird possibilities by any hypothetical change to the laws of physics of the sort people typically discuss—e.g., allowing an infinite number of steps in a finite amount of proper time for some observer. In practice, if your model of computation “breaks the Turing barrier” at all, then it essentially always seems to let you solve at least the halting problem, and possibly even more.
Again, here we’re talking only about the original, computability-theory Church-Turing Thesis. If we’re interested in the Extended (Polynomial-Time) Church-Turing Thesis, then there’s a much richer zoo of problems that we could imagine being efficiently solvable so as to falsify the ECT: NP-complete problems, factoring, graph isomorphism, lattice problems, BosonSampling…
37. Scott Says:
Rahul #35: Sorry, a game-theoretic estimate is all you’re getting from me. 🙂
38. Raoul Ohio Says:
I have a first impression comment on Rose’s remarks (without having read anything in depth):
D-Wave is promoting not a general purpose QC, but one that might work on a very restricted class of problems. So they demo one. An analysis shows it does not appear to be any better than a classical computation. Then Rose sez: “That analysis was only for a specific problem, so it has no relevance”. Looks to me like he is trying to have it both ways.
I totally admire Rose’s chutzpah in putting his own money into a long shot at the bleeding edge of technology. And you can’t blame him for trying to put the best spin on things. But, in this role he is a marketer, not a scientist.
Whatever D-Wave comes up with will be a good data point about what works, or what is not easy to do.
39. Raoul Ohio Says:
Rahul, I will bet below Scott, maybe 1/64.
I am far from an expert, but I know skepticism is a good bet. I take pride in having spotted Santa Claus as a scam when I was about 3 or so.
40. Rahul Says:
Thanks! It’s interesting to hear these numbers. I’ve never seen any similar numerical estimates before. If you know any other researchers who’ve mentioned numbers I’d love to know.
41. Raoul Ohio Says:
I have often wished for bets by experts, particularly in cosmology. Here are a couple of things, say known by 2040, with my quick guesses afterwards:
Human to Mars surface and back alive? (1/8)
Human to Mars orbit, or beyond, and back alive? (1/3)
Life detected in solar system \ {earth}? (1/4)
Life detected outside solar system? (1/8)
Earth style DNA, RNA, amino acids, etc. common in asteroid belt dust (I made this one up)? (1/25)
Utilities selling fusion generated power? (1/100)
Dark matter particle identified? (1/5)
Dark energy is present? (2/3)
Cosmic inflation (current dogma version) considered to have actually happened? (1/2)
Knuth’s AoCP Vol 5 published? (1/10)
LaTeX 3 available? (1/3)
D-Wave bigger than Microsoft? (1/10000)
Ray Kurzweil on other side of singularity? (1/1000)
Stephen Wolfram proposes new theory of everything? (4/5)
Scott buys into new SW TOE? (1/10)
Of course, a big problem will be dealing with the uncertainty in some of these outcomes.
42. Scott Says:
Ashley #30 and Fernando #31: I think Geordie’s response obfuscates some basic points.
Most importantly, there’s currently no known class of problems—as in not one, zero—on which the D-Wave machine gets any convincing speedup over what a classical computer can do. Random Ising spin problems were trumpeted as such a class by people misinterpreting the McGeoch-Wang paper (which D-Wave gave considerable publicity to). But then Troyer et al. found that, when you examine things more carefully, the speedup claims evaporate completely. So for Geordie to now say, “oh, that was never a good class of instances to look at anyway” takes some impressive chutzpah. Pray tell, then what is a good class of instances, where you get a genuine speedup? And if you can’t identify such a class, then do you retract all the claims you’ve made in the last year about currently being able to outperform classical computers?
Moving on to the Shin et al. paper, I actually agree with Geordie that the 8-qubit experiments done a couple years ago made a pretty good prima-facie case for small-scale entanglement in the D-Wave devices: that is, entanglement within each “cluster” in the Chimera graph. (And in any case, we already knew from the Schoelkopf group’s experiments that current technology can indeed entangle small numbers of superconducting qubits.)
But what about large-scale entanglement, involving all 100 or 500 qubits? As far as I know, the only evidence we had that such entanglement was present really did come from the Boixo et al. correlation analysis. And those correlation patterns are precisely what Shin et al. now say they can reproduce with a classical model. Of course, that doesn’t prove that there’s no large-scale entanglement in the D-Wave machine, but it does undermine the main piece of evidence we had for such entanglement.
It’s worth stressing, again, that even if large-scale entanglement is present in the D-Wave machine, one can still use the Quantum Monte Carlo method from the Boixo et al. paper, so there’s still no reason to expect any speedup over classical computing with the current technology. In other words: in the “debate” between Boixo et al. and Shin et al., both sides completely agree about the lack of speedup. The only point of disagreement is whether the lack of speedup is because D-Wave is doing a form of quantum annealing that can be efficiently simulated classically, or because they’re basically doing classical annealing. (More to the point: is there large-scale entanglement in the device or isn’t there?)
So, here’s a summary of what we currently know about the D-Wave devices:
1. Small-scale entanglement. Pretty good evidence that it’s present (though still not a smoking gun).
2. Large-scale entanglement. Who the hell knows?
3. Speedup over classical computers. No evidence whatsoever, and indeed careful comparisons have repeatedly found no speedup (though D-Wave’s supporters keep moving the goalposts).
43. rrtucci Says:
Rahul and Raoul, predicting the future with such vague priors is BS, even for a Bayesian like me, isn’t it?
If you want to put some data into your silly prior, consider the Manhattan project which took 6 years (almost to the day) from the day Einstein sent his letter to FDR (at which point nobody knew how to purify U235, and Plutonium had not been discovered yet) to the 2 explosions, using two completely different devices, in Japan. And the guys who did that didn’t have transistors or computers, or lasers or the internet or arxiv or latex or google, not even televisions… If you think those things haven’t increased the pace of scientific discovery, you are crazy.
Or consider the Apollo project: started 1961, man on the moon 1969.
Of course QCs will not be constructed by COMPLACENT people like you guys 🙂
44. Fred Says:
#43 “If you think those things haven’t increased the pace of scientific discovery, you are crazy.”
One could argue that all those things are terrible distractions as well.
Before the 60’s people had a better shot at getting things done (instead of debating endlessly on the internet for example).
45. Raoul Ohio Says:
I did not get a chance (in the late 1930’s) to guess that likelihood. I think that atom bombs were in SF stories at that time, and as a kid I was a huge SF fan, so I might have guessed high!
In the 1950’s I was an “outer space” fan and would have predicted man on the moon earlier than 1969 (not having yet learned that skepticism is usually a good bet). In those days they carried rocket launches live on the radio; about 20 hours of delay, then the countdown, then usually an explosion. That was exciting!
46. quax Says:
Scott #42 … careful comparisons have repeatedly found no speedup (though D-Wave’s supporters keep moving the goalposts).
As presumably one of those D-Wave supporters I am wondering how exactly I would have moved the goalposts?
We clearly started out at the point where D-Wave couldn’t show a quantum speed-up. Their machine has been getting faster to the point of becoming something that is interesting to experiment with, but we still don’t have any evidence for quantum speed-up.
So I would very much like them to continue to investigate whether we can find such evidence, whereas you seem to imply that it’s a waste of time.
From my POV none of the stances have changed. How is this moving the goalposts?
47. Sam Hopkins Says:
I wonder if a black box that solves arbitrary Diophantine equations is more plausible physically (in some sense) than one that solves the halting problem. It would be very interesting at least to come up with a physical theory that could find integer solutions to polynomial equations.
48. Rahul Says:
moving the goalposts = changing what’s the right problem they want benchmarkers to compare D-Wave’s speed to?
49. Rahul Says:
rrtucci #43:
With that outlook anything’s possible really. Can’t argue with that. We may have a Mars colony by 2018, fusion may provide unlimited power for vehicles by 2020 & by 2025 we won’t even need vehicles since quantum transporters will have been invented.
50. quax Says:
Rahul #48, no, that misses the point; my understanding is that any problem set that’ll allow them to demonstrate quantum speed-up would do.
51. Sniffnoy Says:
Off-topic — but then what’s really off-topic in a “tweets” thread? — but anyway, apparently the matrix multiplication exponent has been improved again! (Assuming the correctness of this, anyway.) It looks to be even tinier than the other recent improvements, but, still…
52. Raoul Ohio Says:
El Reg addresses the D-Wave dustup:
53. Raoul Ohio Says:
I predict the sum of all future improvements in the exponent is less than 0.4.
54. Scott Says:
Sniffnoy #51: Thanks for the pointer—I hadn’t seen that! Francois Le Gall is a very careful guy, and it’s perfectly plausible on its face that you could lower ω further with more careful optimization. Given the amount of flak I got for blogging the last lowerings of ω, 🙂 I probably won’t blog this one, but it’s still nice to know.
55. Scott Says:
Raoul #52: My adviser was a “boffin”? LOL!
56. Alexander Vlasov Says:
rrtucci #43
Funny, because I certainly remember how I read somewhere in 1995 about plans to build a quantum computer and demonstrate a first useful application in 1999, and thought – “why not, it took only about 5 years for the Manhattan project”.
Scott #34
Relating expectation times – if the rough relation between probabilities and times may be expressed as
T1/T2 = ln(1-p2)/ln(1-p1)
then for p2 = 1/8 and p1 = 2/3 we have T2/T1 ~ 8, and if de Wolf’s estimate lets us hope to have a QC during the next 20-60 years,
should your estimate be considered a suggestion to think in terms of time scales such as 160-480 years?
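The conversion in comment #56 amounts to assuming a memoryless arrival model, p = 1 − exp(−T/τ), so that two probabilities for the same window imply a ratio of characteristic timescales. A quick check of the arithmetic under that assumption (the 20-year window comes from the surrounding discussion):

```python
import math

def implied_timescale(p, T=20.0):
    """Characteristic timescale tau implied by probability p of success within
    a T-year window, under the memoryless model p = 1 - exp(-T / tau)."""
    return -T / math.log(1.0 - p)

tau_dewolf = implied_timescale(2 / 3)   # de Wolf: p > 2/3 within ~20 years
tau_scott = implied_timescale(1 / 8)    # Scott: p = 1/8 within ~20 years
print(f"implied timescales: about {tau_dewolf:.0f} vs {tau_scott:.0f} years "
      f"(ratio ~{tau_scott / tau_dewolf:.1f})")
# The ratio is ~8, matching the comment; Scott's reply below is that technology
# development is not actually a memoryless (Poisson) process, so the extrapolation is shaky.
```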
57. Scott Says:
quax #46: I wasn’t talking about you, though I’ll note that your position is very much opposed to Geordie’s (at least his position a year ago, when he was loudly trumpeting claims of a speedup)!
In general, however, I’ve noticed some D-Wave supporters (e.g., ones I’ve met in “real life”) headed down the following slip ‘n slide over the past year:
1. Aha—D-Wave does give a speedup on random instances! McGeoch-Wang! 3600 times faster! Take that, skeptics!
2. Well, obviously you’re not going to get a speedup on random instances: it was stupid and misguided for the skeptics to demand that in the first place. To get an asymptotic speedup, you’re going to need more structured instances. Just wait; such a speedup will be shown any day.
3. Obviously asking for a speedup at all is the completely wrong question! That’s like criticizing the Wright Brothers’ first plane for not being fast and comfortable enough. As everyone knows, the real point is just that, for the first time, D-Wave has demonstrated large-scale quantum behavior in hundreds of superconducting qubits…
Of course, if the Shin et al. claims hold up, then we can safely predict the slip ‘n slide will take us to
4. Obviously no one expected to see large-scale quantum behavior at this point, but…
58. Scott Says:
Alexander #56: I don’t think linear extrapolation really works here; technology development is not a Poisson process. If it helps, I’ll give maybe 1/2 odds of a scalable QC within the next 100 years. After that, I’d say my uncertainty becomes dominated by Knightian uncertainty about the future of civilization itself (for example, about whether there will even be a civilization).
59. Fred Says:
A bit off-topic, a question about QM/QC.
Does the amount of entanglement in a system have any particular measurable “cost” or is it just a “free lunch”?
I have two situations, similar to the setup in the EPR paradox.
1) two entangled particles.
2) the same two particles but not entangled.
Besides saying that in one case the wave function is shared and that in the other case we have two wave functions, is there any other difference between the two cases, e.g. slightly different energy?
Are we really able to entangle more and more particles (i.e., realizing qubits) without limit?
It would seem reasonable to think that nature has to make some sort of “trade-off” to be able to entangle more and more particles (maybe that question is the same as asking if there are any hidden local variables?).
60. Rahul Says:
Assuming we want a public crypto alternative that’s resistant to Shor-breaking if and when a scalable QC arrives, how good are the lattice based crypto options?
i.e. Is it proven that a QC-based attack is impossible / hard? Or are we just waiting for the right kind of “Shor-plus” that can defeat them?
61. Scott Says:
Fred #59: Asking whether there’s any limit to the number of particles you can entangle, is a lot like asking whether there’s any limit to the number of people who can share a secret.
In principle, the answer is no: just keeping letting in more and more people! The effort only scales linearly with the number of people, not exponentially or anything crazy like that.
In practice, the difficulty is obvious: the more people you let in, the greater the chance that one of them will upload the secret to WikiLeaks. Then, ironically, you’ll have overshot your target: while you “merely” wanted a thousand or a million people (all of whom you were keeping track of) in on your secret, now the entire world is in on it.
In the same way, the more particles you entangle, the greater the chance that one of them will overshoot the target, entangling itself with the measuring apparatus, the air molecules and EM field in the room, etc. In which case, your first response might be “great! even more entanglement than I’d bargained for!” But once the entanglement has spread so far that you can’t keep track of it, it’s just as if a measurement has been made, and you no longer have any hope of doing an interference experiment that would reveal the entanglement.
Now, the whole point of quantum fault-tolerance is to spread the entanglement “secret” more cleverly among the particles, so that even if (say) any 1% of the particles get entangled with the environment, the secret is still secure with the other 99%—and indeed, the secret can then be “fortified” so that it remains secure even if another 1% of the particles get entangled with the environment, etc. If and when quantum fault-tolerance becomes a reality, there should no longer be any practical limit on the number of particles that can be shown in experiments to be entangled (the current limit, I think, is a few billion particles, e.g. in solid-state and superconducting experiments).
If there were a limit (not coming from cosmology) on the number of particles you could entangle, then something would have to be wrong either with quantum mechanics itself, or with some other extremely basic assumption of modern physics.
62. Scott Says:
Rahul #60:
Is it proven that a QC-based attack [on lattice-based crypto] is impossible / hard? Or are we just waiting for the right kind of “Shor-plus” that can defeat them?
Currently, there’s no security proof for any public-key cryptosystem, even if you’re willing to assume (say) P≠NP, or NP⊄BQP, or the existence of one-way functions. You always have to make some problem-specific hardness assumption, so the only questions are which assumption, how well has it been studied, etc.
Having said that, the approximate shortest vector problem (and related lattice problems) have been very well-studied by now, and people have tried and failed to find quantum algorithms for these problems for at least 16 years. And we understand in some detail why “Shor-plus” won’t suffice: at the very least, you’ll need “Shor-plus-plus-plus.” 🙂 (In particular, you’d first need to generalize Shor’s algorithm to nonabelian groups like the dihedral group; and even then, lattice problems don’t exactly fit into the nonabelian HSP framework, but only approximately.)
63. Raoul Ohio Says:
Scott #55,
BTW, in British slang,
(1) a Boffin is a top notch scientist,
(2) an Egghead, Nerd, or Geek is a scientist,
(3) social scientists and philosophers are “trick cyclists”.
64. Jon Lennox Says:
Scott at #58: Personally, my probability for the development of Scalable QC is dominated not by my uncertainty about the difficulty of achieving it, but by uncertainty about the future of funding to do the work.
On the one hand, basic science research funding has had a terrible time of late; on the other hand, VCs backing a DWave++, or Google, or the NSA, might decide to open up a firehose of money to fund development of production-quality Scalable QC.
Yes, the Manhattan Project achieved an atomic bomb in five years; but that’s because the U.S. Government threw the equivalent of thirty billion dollars (in current money) at it.
My estimate of the likelihood of this happening is pretty thoroughly Knightian.
65. Fred Says:
Scott #61: thanks for the detailed answer Scott, that’s a really striking analogy 🙂
66. rrtucci Says:
Jon Lennox, the sad thing is that the NSA and British Intelligence spent 100 billion in just one year to hack into your smart phone and Angry Birds. That’s the evil of classified research. Mediocrity, favoritism, fraud and abuse are quickly made top secret.
How about the probability that China will pour 30 billion into developing a QC? I’ve noticed that in the last 2 years, the number of QC papers in arXiv with monosyllabic Chinese names as authors has skyrocketed. China has 4 times the population of the US. They are hungry to prove themselves. They have given carte blanche to their QC scientists. Their students consistently do better than American students in Science.
67. quax Says:
Scott #57 Thanks for elaborating. Obviously I prefer to look at the bright side of the story, but I also want to stick to the facts.
And there’s no two ways about it, so far evidence for quantum speed-up has been elusive. Yet, hope flows eternal 🙂
68. rrtucci Says:
correction: it’s probably more like 100 billion dollars total, not in just one year
69. Rahul Says:
What’s the latest on the experimental work on Boson Sampling? Can I request a blog post? That work seems tantalizingly interesting in the early optics experiments you posted about a year ago. Have the experimentalists succeeded in scaling those? Are we getting closer to the point where Boson Sampling is solving a problem large enough to be challenging for a classical simulation?
70. rrtucci Says:
NSA metadata reveals that
Feb 6:
9:00 AM
Scott Aaronson reads cover story in TIME magazine which is highly favorable to D-Wave
9:15 AM
Scott plays Angry Birds for 15 minutes
9:30 AM
Scott plays with daughter Lily for 30 minutes using Xbox
Scott uses iphone to call MIT infirmary, complaining of severe abdominal pains
Scott hospitalized at Mass General for severe bleeding ulcer, probably stress induced
71. Scott Says:
Rahul #69: Dude, I posted about Scattershot BosonSampling (which was a spectacular idea that emerged from the experimental groups) just a few months ago. I haven’t heard anything similarly big since then, but if I do hear something, then sure, I’ll blog about it.
(As I keep pointing out, science doesn’t move at blog-speed or news-cycle-speed, no matter how many people want it to! And also, the experimentalists don’t always keep me in the loop! I learned about most of the earlier experiments when the papers appeared on the arXiv, just like everyone else.)
72. Raoul Ohio Says:
You’re kidding? Time Magazine is still being published?
73. rrtucci Says:
74. Rahul Says:
Are D-Wave’s innards really the coldest spot in the Universe? Or is that TIME article starting off with an exaggerated fantasy? Too bad the whole thing’s gated.
75. quax Says:
Rahul, #74 they don’t hold the record, but they run much colder than deep space. (Article at the link has a nice chart that shows this in context).
The hold-over background radiation makes deep space cozy warm in comparison.
76. Rahul Says:
Thanks! But colder than deep space is hardly special these days, right? A lot of these solid state physics experiments routinely get below 1 K?
To make D-Wave sound like *the* coldest spot in the universe is a huge exaggeration. And he makes it sound as if this changes something major in how astronomers view the coldness of the universe.
77. Scott Says:
Gentlemen: You can move this discussion over to today’s post, if you like. 🙂
78. How Much Power Will Quantum Computing Need? | Vanquish Merchant Bank News Says:
[…] Scott Aaronson, a theoretical computer scientist at MIT and a D-Wave critic, seemed bemused by the idea of D-Wave having a power advantage of any sort. Referring to D-Wave’s reliance on a cryogenic cooler he wrote in an email: “It’s amusing chutzpah to take such a gigantic difficulty and then present it as a feature.” He pointed out that D-Wave might need an even more power-hungry cooling system to create lower temperatures that improve its quantum processors’ chances of a “speedup” advantage over classical computing in the future. […]
79. How Much Power Will Quantum Computing Need? | 守心斋 Says:
Fritz London
Frank Field
The title of this little essay says it all: it is a memory after approximately 67 years of a man of whom I stood properly in awe at the time and for whom I have now even greater respect. I should say at the outset that I am writing this completely from memory with no reference to texts or other historical sources. Thus there may be minor errors in dates and even facts, but so be it; I have no desire at my present advanced age to engage in the game of academic scholarship.
I was a student at Duke University in the years 1939-1947, a chemistry major, and by the Fall of 1942 I had done all the undergraduate courses in physical chemistry, and it was time for me to move up to a graduate level course. The next course available to me was Chemistry 265, Chemical Physics, Statistical Theory, in short, statistical mechanics. This course brought me in contact with Professor Fritz London, an eminent theoretical physicist who had been recruited to Duke in 1938 by Paul Gross, the Chairman of the Chemistry Department. Dr. Gross was a knowledgeable, effective, and energetic chairman, with a wide network of scientific acquaintances, and it was doubtless through this network that he learned of the availability of Professor London. Gross offered London a professorship at Duke. London had been the victim of the rampant Hitlerian anti-Semitism in the Europe of the 1930’s, which resulted in his having to move several times from academic post to academic post. According to the legend in the Duke Chemistry Department when I was there, London was in Paris in 1938, perhaps at the Sorbonne, and all Europe was threatened by Nazi expansionism. He was in conversation with a French colleague, possibly about the desirability of his accepting the offer from an obscure university in the southern part of the United States, and in the course of this discussion the matter of the fate of France (and thus of his university position in France) arose. The story has it that the French colleague proclaimed, “Ils résisteront!” (“They will resist!”) London’s reply, based on his own bitter experience with the situation in Europe, was, “Non. Ils ne résisteront pas.” (“No. They will not resist.”) This opinion doubtless influenced his decision to accept the Duke offer.
He arrived at Duke in 1939. The University scientific community, especially the Chemistry community, was overjoyed to have obtained the services of a man with London’s scientific reputation: the Heitler-London theory of covalence, the theory of dispersion forces, and the role of Bose-Einstein condensation in superconductivity and superfluidity. He was assigned a small office just off the lobby of the old Chemistry Building on the West Campus, and he used this for all of the time that I knew him at Duke. I was told that when he started teaching in his first year his class was composed, to a considerable extent, of the Duke Chemistry faculty, delighted to be able to learn startling new things at the feet of the world-renowned master. It must be remembered that in 1939 quantum mechanics was only about 15 years old and was not widely known, except by the real scientific cognoscenti, so learning more about it was very attractive.
London’s office doubled as his classroom. It was longer than wide, and his desk was at the window end with the blackboard at the other end and desks for the students arranged in front of it, but allowing room for London to stand while he lectured. He did most of his work at home, so he mostly arrived in his office just in time for this class. He moved quite rapidly, and generally hurried in just before the lecture was to start. The class was scheduled for the period 11:30 AM-12:20 PM, but this was a nominal schedule. He usually paid little attention to time, and on more than one occasion we students had to attempt to stop his lecturing enough before 1:00 PM for us to get to the dining room for lunch before it closed. In my time the class consisted of about 5-7 students. London lectured rapidly and wrote rapidly, filling the blackboard with equations as he went. There was no text for the course, so we students tried to keep up with him as best we could. This resulted in a set of quite rough notes for us, and it was common practice for the students to make fair copies of their notes at some appropriate time after the class. As I recall, London made sparing use of lecture notes, and most of the very abstruse information that he was presenting came straight out of his head. He was as cavalier about setting limits on his writing as he was about setting limits on the extent of his lectures. The information on the blackboard was retained for continuity until erasures had to be made to enter new information, and he showed a considerable ingenuity in improvising writing space. Writing out onto the margins of the blackboard was a commonplace, and on one memorable occasion the last equations needed to complete a derivation were written on the horizontal shelf of the blackboard, the usual function of which was to hold chalk and erasers.
Things had shaken down a bit by the time I entered the statistical mechanics course in 1942. Professor Gross had been able to populate the Chemistry Department with a number of students who, like me, were eager to study with Professor London. He taught one of two courses in alternate years: statistical mechanics and quantum mechanics. At least, that is the description given in the university catalog; in reality, for us students it was a three-year regimen. One started with one or the other of the courses; it didn’t much matter because the purpose of the first year was for the student to learn to understand the professor’s heavily German-accented English, European notation (figure sevens with crossed tails; figure ones that looked like teepees), and other idiosyncrasies. The next year one took the alternate course with the idea of learning the science, and when that was accomplished one took the first course over again with the hope of understanding the science that the professor had been talking about that first year. All of this was intellectually hard on me because I was only an undergraduate when I started, but it wasn’t a cakewalk for the other students either (and this went for the faculty students of the first year). Indeed, it was a large intellectual quantum jump for all of us, for our previous training gave us little or no preparation for what London was trying to teach us. Thus in my statistical mechanics course, classical mechanics, vector analysis, and elementary statistical concepts were presented in about three weeks, and then on to the real work of the course, the Liouville Theorem, Lagrange multipliers, gamma space and mu space, and then Maxwell-Boltzmann, Bose-Einstein, Fermi-Dirac, and the manifold practical applications of statistical mechanics. It was tough, but exciting.
But for me the real excitement came in the quantum mechanics course. Statistical mechanics is in large part a classical discipline with roots extending deep into the nineteenth century. Ludwig Boltzmann was a towering figure, but since he lived in the years surrounding 1850 it was difficult to personalize him—he was an important scientific figure, but an historical figure. But quantum mechanics was a completely different matter. One can date the beginning of quantum mechanics to 1925 (I here make the distinction that Professor London taught me of differentiating between the quantum theory of Bohr and the quantum mechanics of Heisenberg, Schrödinger, and Dirac), so that in the early 1940’s it was still a new, fascinating, and mysterious scientific discipline. And for me the real marvel of the London course was that London was contemporary with and acquainted with these early giants of quantum mechanics; indeed, he was one of them. Thus he was able to personalize them to the extent that they became real people for us students.
I shall give two examples of this. He started the course with a discussion of the ultra-violet catastrophe and Max Planck’s solution to the problem, and then he moved on to the Bohr theory of quantized orbits of the electrons. From there we learned of the limitations of this theory and the attempt to extend it with the Correspondence Principle. Of course this attempt met with but limited success, and the theoretical physics community was temporarily stumped. Enter Werner Heisenberg! Following a new scientific philosophy that was developing at this time, namely, that theories should be based solely on physically observable quantities, Heisenberg set out to develop a theory of atomic spectra that would involve the two indices that characterize a spectral line; that is, the quantum numbers characterizing the two states involved in the transition. London asserted that Heisenberg ab initio developed an algorithm that accomplished this, and London led us through the details of the development. I deeply regret that my class notes on this development have been lost, because I have never seen it expounded in any of the quantum mechanics texts that I have examined over the years. Heisenberg expanded his two-index array into a general theory of atomic mechanics, and it was later pointed out by one of the eminent mathematicians of the time (Hilbert?) that these two index arrays were in fact matrices, and Heisenberg’s procedure became known as matrix mechanics. For me it was really stunning to be exposed at close hand to Heisenberg’s monumental intellectual accomplishment, and it would never have happened without a teacher of London’s experience and stature.
The second example of personalizing the quantum science comes from London’s presentation of Dirac’s development of relativistic quantum mechanics. This involves the use of the Einstein relation between the mass of an object and its velocity, and Dirac used this variable mass in the Schrödinger equation. More regrets on my part, for over the years I have completely forgotten the details of London’s presentation, but I do remember the aspect of the matter that is of importance for this story. That is, the relation between mass and velocity contains a square root function, which, of course, has two roots. The consequence of this in relativistic quantum mechanics is that there is a set of energy levels for an electron in an atomic system corresponding to each root. The theory is quite clear that no distinction can be made with regard to the validity of each set of levels, which had the startling implication that negative energy levels for the electron exist. In explaining this London went on to say that Dirac was not overly upset about this, for he felt that there would be no probability of a transition between the two sets of states. However, he nevertheless calculated this probability and he found
(and here I wish that I could reproduce London’s look and his German-accented voice) that, to his horror, the probability was finite. Thus we had the prediction of the existence of the positron and the other anti-particles. I don’t have any idea about how London knew of Dirac’s feelings in the matter, but the core of the story vividly represents the reaction that a scientific worker of the time could have had.
Fritz London was obviously an intellectual giant, at least the equal of others that I have met in the course of my scientific career. Although I did no theoretical physics work myself, the learning he gave me proved to be very valuable. It hardly needs to be said that he played a very important role in the process whereby Duke University grew from a fledgling college to a major intellectual and scientific center.
29 April 2009
Frank Field obtained his B.Sc. and Ph.D. degrees at Duke University in 1943 and 1948. After various technical positions in the Standard Oil Co. (New Jersey) and its successors from 1952 to 1970, he became the Camille and Henry Dreyfus Professor at the Rockefeller University, New York, from 1970 to 1988. Upon his retirement in 1988
he moved to Oak Ridge, Tennessee, and later to Durham, North Carolina, where he passed away in 2013. |
ddde7d5ff73d6eea | Quantum mechanics
Quantum mechanics, science dealing with the behaviour of matter and light on the atomic and subatomic scale. It attempts to describe and account for the properties of molecules and atoms and their constituents—electrons, protons, neutrons, and other more esoteric particles such as quarks and gluons. These properties include the interactions of the particles with one another and with electromagnetic radiation (i.e., light, X-rays, and gamma rays).
The behaviour of matter and radiation on the atomic scale often seems peculiar, and the consequences of quantum theory are accordingly difficult to understand and to believe. Its concepts frequently conflict with common-sense notions derived from observations of the everyday world. There is no reason, however, why the behaviour of the atomic world should conform to that of the familiar, large-scale world. It is important to realize that quantum mechanics is a branch of physics and that the business of physics is to describe and account for the way the world—on both the large and the small scale—actually is and not how one imagines it or would like it to be.
The study of quantum mechanics is rewarding for several reasons. First, it illustrates the essential methodology of physics. Second, it has been enormously successful in giving correct results in practically every situation to which it has been applied. There is, however, an intriguing paradox. In spite of the overwhelming practical success of quantum mechanics, the foundations of the subject contain unresolved problems—in particular, problems concerning the nature of measurement. An essential feature of quantum mechanics is that it is generally impossible, even in principle, to measure a system without disturbing it; the detailed nature of this disturbance and the exact point at which it occurs are obscure and controversial. Thus, quantum mechanics attracted some of the ablest scientists of the 20th century, and they erected what is perhaps the finest intellectual edifice of the period.
Historical basis of quantum theory
Basic considerations
At a fundamental level, both radiation and matter have characteristics of particles and waves. The gradual recognition by scientists that radiation has particle-like properties and that matter has wavelike properties provided the impetus for the development of quantum mechanics. Influenced by Newton, most physicists of the 18th century believed that light consisted of particles, which they called corpuscles. From about 1800, evidence began to accumulate for a wave theory of light. At about this time Thomas Young showed that, if monochromatic light passes through a pair of slits, the two emerging beams interfere, so that a fringe pattern of alternately bright and dark bands appears on a screen. The bands are readily explained by a wave theory of light. According to the theory, a bright band is produced when the crests (and troughs) of the waves from the two slits arrive together at the screen; a dark band is produced when the crest of one wave arrives at the same time as the trough of the other, and the effects of the two light beams cancel. Beginning in 1815, a series of experiments by Augustin-Jean Fresnel of France and others showed that, when a parallel beam of light passes through a single slit, the emerging beam is no longer parallel but starts to diverge; this phenomenon is known as diffraction. Given the wavelength of the light and the geometry of the apparatus (i.e., the separation and widths of the slits and the distance from the slits to the screen), one can use the wave theory to calculate the expected pattern in each case; the theory agrees precisely with the experimental data.
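Given the wavelength of the light and the geometry of the apparatus, the expected pattern is easy to compute. The short Python sketch below uses illustrative values (550 nm light, slits 0.25 mm apart, a screen 1 m away, and a 0.1 mm slit width) rather than any particular historical experiment:

import math

lam, d, D = 550e-9, 0.25e-3, 1.0      # assumed wavelength, slit separation, screen distance (m)
b = 0.1e-3                            # assumed slit width (m)

fringe_spacing = lam * D / d          # distance between adjacent bright bands on the screen
theta0 = math.degrees(math.asin(lam / b))   # direction of the first single-slit diffraction minimum
print(f"two-slit fringe spacing   ~ {fringe_spacing*1e3:.1f} mm")
print(f"first diffraction minimum ~ {theta0:.2f} degrees from the forward direction")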
Early developments
Planck’s radiation law
By the end of the 19th century, physicists almost universally accepted the wave theory of light. However, though the ideas of classical physics explain interference and diffraction phenomena relating to the propagation of light, they do not account for the absorption and emission of light. All bodies radiate electromagnetic energy as heat; in fact, a body emits radiation at all wavelengths. The energy radiated at different wavelengths is a maximum at a wavelength that depends on the temperature of the body; the hotter the body, the shorter the wavelength for maximum radiation. Attempts to calculate the energy distribution for the radiation from a blackbody using classical ideas were unsuccessful. (A blackbody is a hypothetical ideal body or surface that absorbs and reemits all radiant energy falling on it.) One formula, proposed by Wilhelm Wien of Germany, did not agree with observations at long wavelengths, and another, proposed by Lord Rayleigh (John William Strutt) of England, disagreed with those at short wavelengths.
In 1900 the German theoretical physicist Max Planck made a bold suggestion. He assumed that the radiation energy is emitted, not continuously, but rather in discrete packets called quanta. The energy E of the quantum is related to the frequency ν by E = hν. The quantity h, now known as Planck’s constant, is a universal constant with the approximate value of 6.62607 × 10⁻³⁴ joule∙second. Planck showed that the calculated energy spectrum then agreed with observation over the entire wavelength range.
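As a rough numerical check of these statements, the sketch below evaluates E = hν for a visible-light frequency and locates the peak of the Planck blackbody spectrum at a few temperatures; the temperatures, the frequency, and the wavelength grid are illustrative choices, and the constants are rounded SI values:

import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23          # rounded SI constants

def planck(lam, T):
    """Planck spectral radiance for a blackbody at temperature T."""
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

lam = np.linspace(50e-9, 5e-6, 20000)            # 50 nm to 5 micrometres
for T in (3000, 4500, 6000):
    peak = lam[np.argmax(planck(lam, T))]
    print(f"T = {T} K: radiation peaks near {peak*1e9:.0f} nm")   # hotter body -> shorter wavelength

nu = 5e14                                        # a green-light frequency, Hz
print(f"one quantum at nu = 5e14 Hz carries E = h*nu = {h*nu:.2e} J (~{h*nu/1.602e-19:.2f} eV)")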
Einstein and the photoelectric effect
In 1905 Einstein extended Planck’s hypothesis to explain the photoelectric effect, which is the emission of electrons by a metal surface when it is irradiated by light or more-energetic photons. The kinetic energy of the emitted electrons depends on the frequency ν of the radiation, not on its intensity; for a given metal, there is a threshold frequency ν0 below which no electrons are emitted. Furthermore, emission takes place as soon as the light shines on the surface; there is no detectable delay. Einstein showed that these results can be explained by two assumptions: (1) that light is composed of corpuscles or photons, the energy of which is given by Planck’s relationship, and (2) that an atom in the metal can absorb either a whole photon or nothing. Part of the energy of the absorbed photon frees an electron, which requires a fixed energy W, known as the work function of the metal; the rest is converted into the kinetic energy meu²/2 of the emitted electron (me is the mass of the electron and u is its velocity). Thus, the energy relation is
hν = W + meu²/2. If ν is less than ν0, where hν0 = W, no electrons are emitted. Not all the experimental results mentioned above were known in 1905, but all Einstein’s predictions have been verified since.
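A minimal numerical illustration of the energy relation, assuming a work function of 2.3 eV for the metal (an arbitrary illustrative value) and rounded constants:

h, me, eV = 6.626e-34, 9.109e-31, 1.602e-19

W = 2.3 * eV                     # assumed work function of the metal
nu0 = W / h                      # threshold frequency, from h*nu0 = W
nu = 1.5 * nu0                   # irradiate with light above the threshold
KE = h * nu - W                  # h*nu = W + me*u**2/2
u = (2 * KE / me) ** 0.5
print(f"threshold frequency nu0 = {nu0:.2e} Hz")
print(f"at nu = 1.5*nu0: kinetic energy = {KE/eV:.2f} eV, speed u = {u:.2e} m/s")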
Bohr’s theory of the atom
A major contribution to the subject was made by Niels Bohr of Denmark, who applied the quantum hypothesis to atomic spectra in 1913. The spectra of light emitted by gaseous atoms had been studied extensively since the mid-19th century. It was found that radiation from gaseous atoms at low pressure consists of a set of discrete wavelengths. This is quite unlike the radiation from a solid, which is distributed over a continuous range of wavelengths. The set of discrete wavelengths from gaseous atoms is known as a line spectrum, because the radiation (light) emitted consists of a series of sharp lines. The wavelengths of the lines are characteristic of the element and may form extremely complex patterns. The simplest spectra are those of atomic hydrogen and the alkali atoms (e.g., lithium, sodium, and potassium). For hydrogen, the wavelengths λ are given by the empirical formula
1/λ = R(1/m² − 1/n²), where m and n are positive integers with n > m and R, known as the Rydberg constant, has the value 1.097373157 × 10⁷ per metre. For a given value of m, the lines for varying n form a series. The lines for m = 1, the Lyman series, lie in the ultraviolet part of the spectrum; those for m = 2, the Balmer series, lie in the visible spectrum; and those for m = 3, the Paschen series, lie in the infrared.
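The empirical formula is simple to evaluate; the sketch below uses the Rydberg constant quoted above and prints the first few lines of each series:

R = 1.097373157e7                          # Rydberg constant, per metre

def lam(m, n):
    """Wavelength from 1/lambda = R*(1/m**2 - 1/n**2), with n > m."""
    return 1.0 / (R * (1.0 / m**2 - 1.0 / n**2))

for m, name in ((1, "Lyman"), (2, "Balmer"), (3, "Paschen")):
    lines = ", ".join(f"{lam(m, n)*1e9:.1f}" for n in range(m + 1, m + 4))
    print(f"{name} series, first three lines (nm): {lines}")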
Bohr started with a model suggested by the New Zealand-born British physicist Ernest Rutherford. The model was based on the experiments of Hans Geiger and Ernest Marsden, who in 1909 bombarded gold atoms with massive, fast-moving alpha particles; when some of these particles were deflected backward, Rutherford concluded that the atom has a massive, charged nucleus. In Rutherford’s model, the atom resembles a miniature solar system with the nucleus acting as the Sun and the electrons as the circulating planets. Bohr made three assumptions. First, he postulated that, in contrast to classical mechanics, where an infinite number of orbits is possible, an electron can be in only one of a discrete set of orbits, which he termed stationary states. Second, he postulated that the only orbits allowed are those for which the angular momentum of the electron is a whole number n times ℏ (ℏ = h/2π). Third, Bohr assumed that Newton’s laws of motion, so successful in calculating the paths of the planets around the Sun, also applied to electrons orbiting the nucleus. The force on the electron (the analogue of the gravitational force between the Sun and a planet) is the electrostatic attraction between the positively charged nucleus and the negatively charged electron. With these simple assumptions, he showed that the energy of the orbit has the form
En = −E0/n², where E0 is a constant that may be expressed by a combination of the known constants e, me, and ℏ. While in a stationary state, the atom does not give off energy as light; however, when an electron makes a transition from a state with energy En to one with lower energy Em, a quantum of energy is radiated with frequency ν, given by the equation
hν = En − Em. Inserting the expression for En into this equation and using the relation λν = c, where c is the speed of light, Bohr derived the formula for the wavelengths of the lines in the hydrogen spectrum, with the correct value of the Rydberg constant.
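The same wavelengths follow from the Bohr expression. The sketch below evaluates the standard Bohr energy scale mee⁴/(8ε0²h²), which plays the role of E0 above, computes the n = 3 to n = 2 transition, and recovers the Rydberg constant; the constants are rounded SI values:

me, e, eps0, h, c, eV = 9.109e-31, 1.602e-19, 8.854e-12, 6.626e-34, 2.998e8, 1.602e-19

E0 = me * e**4 / (8 * eps0**2 * h**2)        # Bohr energy scale, about 13.6 eV
print(f"E0 = {E0/eV:.2f} eV")

def E(n):                                     # E_n = -E0 / n**2
    return -E0 / n**2

nu = (E(3) - E(2)) / h                        # h*nu = E_3 - E_2, the first Balmer line
print(f"lambda(3 -> 2) = {c/nu*1e9:.1f} nm")  # ~656 nm, as given by the empirical formula
print(f"Rydberg constant E0/(h*c) = {E0/(h*c):.4e} per metre")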
Bohr’s theory was a brilliant step forward. Its two most important features have survived in present-day quantum mechanics. They are (1) the existence of stationary, nonradiating states and (2) the relationship of radiation frequency to the energy difference between the initial and final states in a transition. Prior to Bohr, physicists had thought that the radiation frequency would be the same as the electron’s frequency of rotation in an orbit.
Scattering of X-rays
Soon scientists were faced with the fact that another form of radiation, X-rays, also exhibits both wave and particle properties. Max von Laue of Germany had shown in 1912 that crystals can be used as three-dimensional diffraction gratings for X-rays; his technique constituted the fundamental evidence for the wavelike nature of X-rays. The atoms of a crystal, which are arranged in a regular lattice, scatter the X-rays. For certain directions of scattering, all the crests of the X-rays coincide. (The scattered X-rays are said to be in phase and to give constructive interference.) For these directions, the scattered X-ray beam is very intense. Clearly, this phenomenon demonstrates wave behaviour. In fact, given the interatomic distances in the crystal and the directions of constructive interference, the wavelength of the waves can be calculated.
In 1922 the American physicist Arthur Holly Compton showed that X-rays scatter from electrons as if they are particles. Compton performed a series of experiments on the scattering of monochromatic, high-energy X-rays by graphite. He found that part of the scattered radiation had the same wavelength λ0 as the incident X-rays but that there was an additional component with a longer wavelength λ. To interpret his results, Compton regarded the X-ray photon as a particle that collides and bounces off an electron in the graphite target as though the photon and the electron were a pair of (dissimilar) billiard balls. Application of the laws of conservation of energy and momentum to the collision leads to a specific relation between the amount of energy transferred to the electron and the angle of scattering. For X-rays scattered through an angle θ, the wavelengths λ and λ0 are related by the equation
λ − λ0 = (h/mec)(1 − cos θ). The experimental correctness of Compton’s formula is direct evidence for the corpuscular behaviour of radiation.
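A short numerical illustration of the Compton relation, taking an incident wavelength of 0.071 nm (an illustrative hard-X-ray value) and rounded constants:

import numpy as np

h, me, c = 6.626e-34, 9.109e-31, 2.998e8
lam_C = h / (me * c)                          # Compton wavelength of the electron, ~2.43 pm
lam0 = 7.1e-11                                # assumed incident X-ray wavelength

for theta in (45, 90, 135):
    shift = lam_C * (1 - np.cos(np.radians(theta)))
    print(f"theta = {theta:3d} deg: shift = {shift*1e12:.2f} pm, "
          f"scattered wavelength = {(lam0 + shift)*1e12:.2f} pm")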
Broglie’s wave hypothesis
Faced with evidence that electromagnetic radiation has both particle and wave characteristics, Louis-Victor de Broglie of France suggested a great unifying hypothesis in 1924. Broglie proposed that matter has wave as well as particle properties. He suggested that material particles can behave as waves and that their wavelength λ is related to the linear momentum p of the particle by λ = h/p.
In 1927 Clinton Davisson and Lester Germer of the United States confirmed Broglie’s hypothesis for electrons. Using a crystal of nickel, they diffracted a beam of monoenergetic electrons and showed that the wavelength of the waves is related to the momentum of the electrons by the Broglie equation. Since Davisson and Germer’s investigation, similar experiments have been performed with atoms, molecules, neutrons, protons, and many other particles. All behave like waves with the same wavelength-momentum relationship.
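The Broglie relation is easy to evaluate. The sketch below gives the non-relativistic wavelength of electrons accelerated through a few voltages; 54 V is of the order used in the Davisson-Germer experiment, and the other values are arbitrary:

h, me, eV = 6.626e-34, 9.109e-31, 1.602e-19

def de_broglie(E_kin):
    """lambda = h/p with p = sqrt(2*me*E_kin); non-relativistic electrons assumed."""
    return h / (2 * me * E_kin) ** 0.5

for volts in (54, 150, 10000):
    print(f"{volts:6d} V electrons: lambda = {de_broglie(volts * eV)*1e9:.4f} nm")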
Basic concepts and methods
Bohr’s theory, which assumed that electrons moved in circular orbits, was extended by the German physicist Arnold Sommerfeld and others to include elliptic orbits and other refinements. Attempts were made to apply the theory to more complicated systems than the hydrogen atom. However, the ad hoc mixture of classical and quantum ideas made the theory and calculations increasingly unsatisfactory. Then, in the 12 months started in July 1925, a period of creativity without parallel in the history of physics, there appeared a series of papers by German scientists that set the subject on a firm conceptual foundation. The papers took two approaches: (1) matrix mechanics, proposed by Werner Heisenberg, Max Born, and Pascual Jordan, and (2) wave mechanics, put forward by Erwin Schrödinger. The protagonists were not always polite to each other. Heisenberg found the physical ideas of Schrödinger’s theory “disgusting,” and Schrödinger was “discouraged and repelled” by the lack of visualization in Heisenberg’s method. However, Schrödinger, not allowing his emotions to interfere with his scientific endeavours, showed that, in spite of apparent dissimilarities, the two theories are equivalent mathematically. The present discussion follows Schrödinger’s wave mechanics because it is less abstract and easier to understand than Heisenberg’s matrix mechanics.
Schrödinger’s wave mechanics
Schrödinger expressed Broglie’s hypothesis concerning the wave behaviour of matter in a mathematical form that is adaptable to a variety of physical problems without additional arbitrary assumptions. He was guided by a mathematical formulation of optics, in which the straight-line propagation of light rays can be derived from wave motion when the wavelength is small compared to the dimensions of the apparatus employed. In the same way, Schrödinger set out to find a wave equation for matter that would give particle-like propagation when the wavelength becomes comparatively small. According to classical mechanics, if a particle of mass me is subjected to a force such that its potential energy is V(xyz) at position xyz, then the sum of V(xyz) and the kinetic energy p²/2me is equal to a constant, the total energy E of the particle. Thus,
p²/2me + V(xyz) = E.
It is assumed that the particle is bound—i.e., confined by the potential to a certain region in space because its energy E is insufficient for it to escape. Since the potential varies with position, two other quantities do also: the momentum and, hence, by extension from the Broglie relation, the wavelength of the wave. Postulating a wave function Ψ(xyz) that varies with position, Schrödinger replaced p in the above energy equation with a differential operator that embodied the Broglie relation. He then showed that Ψ satisfies the partial differential equation
−(ℏ²/2me)(∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z²) + V(xyz)Ψ = EΨ
This is the (time-independent) Schrödinger wave equation, which established quantum mechanics in a widely applicable form. An important advantage of Schrödinger’s theory is that no further arbitrary quantum conditions need be postulated. The required quantum results follow from certain reasonable restrictions placed on the wave function—for example, that it should not become infinitely large at large distances from the centre of the potential.
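One way to see how discrete values of E arise from such restrictions is to solve the equation numerically. The sketch below discretizes the one-dimensional time-independent equation for a harmonic potential, in dimensionless units (ℏ = m = ω = 1), forcing the wave function to vanish at the edges of a finite grid; the grid size and the choice of potential are illustrative assumptions:

import numpy as np

# Solve -(1/2) psi'' + V(x) psi = E psi by finite differences on a grid.
N, L = 1200, 12.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                                        # harmonic potential; exact levels are 0.5, 1.5, 2.5, ...

H = (np.diag(1.0/dx**2 + V)                           # diagonal: kinetic + potential terms
     + np.diag(np.full(N - 1, -0.5/dx**2), 1)         # off-diagonals: kinetic coupling of neighbours
     + np.diag(np.full(N - 1, -0.5/dx**2), -1))

E = np.linalg.eigvalsh(H)                             # psi is forced to zero beyond the grid edges
print("lowest computed levels:", np.round(E[:5], 4))  # close to 0.5, 1.5, 2.5, 3.5, 4.5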
Schrödinger applied his equation to the hydrogen atom, for which the potential function, given by classical electrostatics, is proportional to −e²/r, where −e is the charge on the electron. The nucleus (a proton of charge e) is situated at the origin, and r is the distance from the origin to the position of the electron. Schrödinger solved the equation for this particular potential with straightforward, though not elementary, mathematics. Only certain discrete values of E lead to acceptable functions Ψ. These functions are characterized by a trio of integers n, l, m, termed quantum numbers. The values of E depend only on the integers n (1, 2, 3, etc.) and are identical with those given by the Bohr theory. The quantum numbers l and m are related to the angular momentum of the electron; √(l(l + 1))ℏ is the magnitude of the angular momentum, and mℏ is its component along some physical direction.
The square of the wave function, Ψ², has a physical interpretation. Schrödinger originally supposed that the electron was spread out in space and that its density at point x, y, z was given by the value of Ψ² at that point. Almost immediately Born proposed what is now the accepted interpretation—namely, that Ψ² gives the probability of finding the electron at xyz. The distinction between the two interpretations is important. If Ψ² is small at a particular position, the original interpretation implies that a small fraction of an electron will always be detected there. In Born’s interpretation, nothing will be detected there most of the time, but, when something is observed, it will be a whole electron. Thus, the concept of the electron as a point particle moving in a well-defined path around the nucleus is replaced in wave mechanics by clouds that describe the probable locations of electrons in different states.
Electron spin and antiparticles
In 1928 the English physicist Paul A.M. Dirac produced a wave equation for the electron that combined relativity with quantum mechanics. Schrödinger’s wave equation does not satisfy the requirements of the special theory of relativity because it is based on a nonrelativistic expression for the kinetic energy (p²/2me). Dirac showed that an electron has an additional quantum number ms. Unlike the first three quantum numbers, ms is not a whole integer and can have only the values +1/2 and −1/2. It corresponds to an additional form of angular momentum ascribed to a spinning motion. (The angular momentum mentioned above is due to the orbital motion of the electron, not its spin.) The concept of spin angular momentum was introduced in 1925 by Samuel A. Goudsmit and George E. Uhlenbeck, two graduate students at the University of Leiden, Neth., to explain the magnetic moment measurements made by Otto Stern and Walther Gerlach of Germany several years earlier. The magnetic moment of a particle is closely related to its angular momentum; if the angular momentum is zero, so is the magnetic moment. Yet Stern and Gerlach had observed a magnetic moment for electrons in silver atoms, which were known to have zero orbital angular momentum. Goudsmit and Uhlenbeck proposed that the observed magnetic moment was attributable to spin angular momentum.
The electron-spin hypothesis not only provided an explanation for the observed magnetic moment but also accounted for many other effects in atomic spectroscopy, including changes in spectral lines in the presence of a magnetic field (Zeeman effect), doublet lines in alkali spectra, and fine structure (close doublets and triplets) in the hydrogen spectrum.
The Dirac equation also predicted additional states of the electron that had not yet been observed. Experimental confirmation was provided in 1932 by the discovery of the positron by the American physicist Carl David Anderson. Every particle described by the Dirac equation has to have a corresponding antiparticle, which differs only in charge. The positron is just such an antiparticle of the negatively charged electron, having the same mass as the latter but a positive charge.
Identical particles and multielectron atoms
Because electrons are identical to (i.e., indistinguishable from) each other, the wave function of an atom with more than one electron must satisfy special conditions. The problem of identical particles does not arise in classical physics, where the objects are large-scale and can always be distinguished, at least in principle. There is no way, however, to differentiate two electrons in the same atom, and the form of the wave function must reflect this fact. The overall wave function Ψ of a system of identical particles depends on the coordinates of all the particles. If the coordinates of two of the particles are interchanged, the wave function must remain unaltered or, at most, undergo a change of sign; the change of sign is permitted because it is Ψ² that occurs in the physical interpretation of the wave function. If the sign of Ψ remains unchanged, the wave function is said to be symmetric with respect to interchange; if the sign changes, the function is antisymmetric.
The symmetry of the wave function for identical particles is closely related to the spin of the particles. In quantum field theory (see below Quantum electrodynamics), it can be shown that particles with half-integral spin (1/2, 3/2, etc.) have antisymmetric wave functions. They are called fermions after the Italian-born physicist Enrico Fermi. Examples of fermions are electrons, protons, and neutrons, all of which have spin 1/2. Particles with zero or integral spin (e.g., mesons, photons) have symmetric wave functions and are called bosons after the Indian mathematician and physicist Satyendra Nath Bose, who first applied the ideas of symmetry to photons in 1924–25.
The requirement of antisymmetric wave functions for fermions leads to a fundamental result, known as the exclusion principle, first proposed in 1925 by the Austrian physicist Wolfgang Pauli. The exclusion principle states that two fermions in the same system cannot be in the same quantum state. If they were, interchanging the two sets of coordinates would not change the wave function at all, which contradicts the result that the wave function must change sign. Thus, two electrons in the same atom cannot have an identical set of values for the four quantum numbers n, l, m, ms. The exclusion principle forms the basis of many properties of matter, including the periodic classification of the elements, the nature of chemical bonds, and the behaviour of electrons in solids; the last determines in turn whether a solid is a metal, an insulator, or a semiconductor (see atom; matter).
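A small numerical illustration of antisymmetry and the exclusion principle, using two made-up Gaussian orbitals on a one-dimensional grid:

import numpy as np

x = np.linspace(-5, 5, 200)
phi_a = np.exp(-(x - 1.0)**2)                 # two illustrative one-particle orbitals
phi_b = np.exp(-(x + 1.0)**2)

def antisym(pa, pb):
    """Antisymmetric two-particle wave function Psi(x1, x2) = pa(x1)pb(x2) - pb(x1)pa(x2)."""
    return np.outer(pa, pb) - np.outer(pb, pa)

Psi = antisym(phi_a, phi_b)
print("changes sign when the particles are interchanged:", np.allclose(Psi.T, -Psi))
print("vanishes when both particles occupy one orbital:", np.allclose(antisym(phi_a, phi_a), 0))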
The Schrödinger equation cannot be solved precisely for atoms with more than one electron. The principles of the calculation are well understood, but the problems are complicated by the number of particles and the variety of forces involved. The forces include the electrostatic forces between the nucleus and the electrons and between the electrons themselves, as well as weaker magnetic forces arising from the spin and orbital motions of the electrons. Despite these difficulties, approximation methods introduced by the English physicist Douglas R. Hartree, the Russian physicist Vladimir Fock, and others in the 1920s and 1930s have achieved considerable success. Such schemes start by assuming that each electron moves independently in an average electric field due to the nucleus and the other electrons; i.e., correlations between the positions of the electrons are ignored. Each electron has its own wave function, called an orbital. The overall wave function for all the electrons in the atom satisfies the exclusion principle. Corrections to the calculated energies are then made, which depend on the strengths of the electron-electron correlations and the magnetic forces.
Time-dependent Schrödinger equation
At the same time that Schrödinger proposed his time-independent equation to describe the stationary states, he also proposed a time-dependent equation to describe how a system changes from one state to another. By replacing the energy E in Schrödinger’s equation with a time-derivative operator, he generalized his wave equation to determine the time variation of the wave function as well as its spatial variation. The time-dependent Schrödinger equation reads
iℏ ∂Ψ/∂t = −(ℏ²/2me)(∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z²) + V(xyz)Ψ. The quantity i is the square root of −1. The function Ψ varies with time t as well as with position xyz. For a system with constant energy, E, Ψ has the form
Ψ = ψ(xyz)exp(−iEt/ℏ), where exp stands for the exponential function, and the time-dependent Schrödinger equation reduces to the time-independent form.
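That the time-dependent phase factor has no effect on observable densities is easy to verify; in the sketch below the spatial function, the energy, and the units (ℏ = 1) are arbitrary illustrative choices:

import numpy as np

hbar, E = 1.0, 1.7                                  # illustrative units and energy
x = np.linspace(0.0, 1.0, 50)
psi = np.sqrt(2.0) * np.sin(np.pi * x)              # some real stationary-state spatial function

for t in (0.0, 0.3, 2.0):
    Psi = psi * np.exp(-1j * E * t / hbar)          # Psi(x, t) = psi(x) exp(-iEt/hbar)
    drift = np.max(np.abs(np.abs(Psi)**2 - psi**2))
    print(f"t = {t}: largest change in |Psi|**2 = {drift:.1e}")   # ~0: the density never changes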
The probability of a transition between one atomic stationary state and some other state can be calculated with the aid of the time-dependent Schrödinger equation. For example, an atom may change spontaneously from one state to another state with less energy, emitting the difference in energy as a photon with a frequency given by the Bohr relation. If electromagnetic radiation is applied to a set of atoms and if the frequency of the radiation matches the energy difference between two stationary states, transitions can be stimulated. In a stimulated transition, the energy of the atom may increase—i.e., the atom may absorb a photon from the radiation—or the energy of the atom may decrease, with the emission of a photon, which adds to the energy of the radiation. Such stimulated emission processes form the basic mechanism for the operation of lasers. The probability of a transition from one state to another depends on the values of the l, m, ms quantum numbers of the initial and final states. For most values, the transition probability is effectively zero. However, for certain changes in the quantum numbers, summarized as selection rules, there is a finite probability. For example, according to one important selection rule, the l value changes by unity because photons have a spin of 1. The selection rules for radiation relate to the angular momentum properties of the stationary states. The absorbed or emitted photon has its own angular momentum, and the selection rules reflect the conservation of angular momentum between the atoms and the radiation.
The phenomenon of tunneling, which has no counterpart in classical physics, is an important consequence of quantum mechanics. Consider a particle with energy E in the inner region of a one-dimensional potential well V(x), as shown in Figure 1. (A potential well is a potential that has a lower value in a certain region of space than in the neighbouring regions.) In classical mechanics, if E < V0 (the maximum height of the potential barrier), the particle remains in the well forever; if E > V0, the particle escapes. In quantum mechanics, the situation is not so simple. The particle can escape even if its energy E is below the height of the barrier V0, although the probability of escape is small unless E is close to V0. In that case, the particle may tunnel through the potential barrier and emerge with the same energy E.
The phenomenon of tunneling has many important applications. For example, it describes a type of radioactive decay in which a nucleus emits an alpha particle (a helium nucleus). According to the quantum explanation given independently by George Gamow and by Ronald W. Gurney and Edward Condon in 1928, the alpha particle is confined before the decay by a potential of the shape shown in Figure 1. For a given nuclear species, it is possible to measure the energy E of the emitted alpha particle and the average lifetime τ of the nucleus before decay. The lifetime of the nucleus is a measure of the probability of tunneling through the barrier—the shorter the lifetime, the higher the probability. With plausible assumptions about the general form of the potential function, it is possible to calculate a relationship between τ and E that is applicable to all alpha emitters. This theory, which is borne out by experiment, shows that the probability of tunneling, and hence the value of τ, is extremely sensitive to the value of E. For all known alpha-particle emitters, the value of E varies from about 2 to 8 million electron volts, or MeV (1 MeV = 10⁶ electron volts). Thus, the value of E varies only by a factor of 4, whereas the range of τ is from about 10¹¹ years down to about 10⁻⁶ second, a factor of 10²⁴. It would be difficult to account for this sensitivity of τ to the value of E by any theory other than quantum mechanical tunneling.
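The sensitivity to E can be illustrated with a crude Gamow-type estimate of the barrier-penetration probability exp(−2G) for an alpha particle facing a Coulomb barrier. The nuclear radius, daughter charges, and energies used below are illustrative assumptions rather than fitted data, and the constant prefactors of the decay rate are ignored:

import numpy as np

hbar, e, k_C, MeV = 1.055e-34, 1.602e-19, 8.988e9, 1.602e-13
m_alpha = 6.64e-27                                   # alpha-particle mass, kg

def log10_escape_probability(Z_daughter, E_MeV, R=9e-15):
    """log10 of exp(-2G) for tunnelling through a Coulomb barrier starting at radius R."""
    E = E_MeV * MeV
    b = 2 * Z_daughter * k_C * e**2 / E              # outer turning point, where V(b) = E
    r = np.linspace(R, b, 20000)[:-1]
    V = 2 * Z_daughter * k_C * e**2 / r              # Coulomb potential energy seen by the alpha
    G = np.trapz(np.sqrt(np.maximum(2 * m_alpha * (V - E), 0.0)), r) / hbar
    return -2 * G / np.log(10)

for Z, E in ((90, 4.2), (82, 8.8)):                  # a slow and a fast alpha emitter (assumed values)
    print(f"Z_daughter = {Z}, E = {E} MeV: escape probability ~ 10^{log10_escape_probability(Z, E):.0f}")
# A change of a few MeV in E shifts the probability, and hence tau, by tens of orders of magnitude.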
Axiomatic approach
Although the two Schrödinger equations form an important part of quantum mechanics, it is possible to present the subject in a more general way. Dirac gave an elegant exposition of an axiomatic approach based on observables and states in a classic textbook entitled The Principles of Quantum Mechanics. (The book, published in 1930, is still in print.) An observable is anything that can be measured—energy, position, a component of angular momentum, and so forth. Every observable has a set of states, each state being represented by an algebraic function. With each state is associated a number that gives the result of a measurement of the observable. Consider an observable with N states, denoted by ψ1, ψ2, . . . , ψN, and corresponding measurement values a1, a2, . . . , aN. A physical system—e.g., an atom in a particular state—is represented by a wave function Ψ, which can be expressed as a linear combination, or mixture, of the states of the observable. Thus, the Ψ may be written as
Ψ = c1ψ1 + c2ψ2 + ⋯ + cNψN. (10) For a given Ψ, the quantities c1, c2, etc., are a set of numbers that can be calculated. In general, the numbers are complex, but, in the present discussion, they are assumed to be real numbers.
The theory postulates, first, that the result of a measurement must be an a-value—i.e., a1, a2, or a3, etc. No other value is possible. Second, before the measurement is made, the probability of obtaining the value a1 is c1², and that of obtaining the value a2 is c2², and so on. If the value obtained is, say, a5, the theory asserts that after the measurement the state of the system is no longer the original Ψ but has changed to ψ5, the state corresponding to a5.
A number of consequences follow from these assertions. First, the result of a measurement cannot be predicted with certainty. Only the probability of a particular result can be predicted, even though the initial state (represented by the function Ψ) is known exactly. Second, identical measurements made on a large number of identical systems, all in the identical state Ψ, will produce different values for the measurements. This is, of course, quite contrary to classical physics and common sense, which say that the same measurement on the same object in the same state must produce the same result. Moreover, according to the theory, not only does the act of measurement change the state of the system, but it does so in an indeterminate way. Sometimes it changes the state to ψ1, sometimes to ψ2, and so forth.
There is an important exception to the above statements. Suppose that, before the measurement is made, the state Ψ happens to be one of the ψs—say, Ψ = ψ3. Then c3 = 1 and all the other cs are zero. This means that, before the measurement is made, the probability of obtaining the value a3 is unity and the probability of obtaining any other value of a is zero. In other words, in this particular case, the result of the measurement can be predicted with certainty. Moreover, after the measurement is made, the state will be ψ3, the same as it was before. Thus, in this particular case, measurement does not disturb the system. Whatever the initial state of the system, two measurements made in rapid succession (so that the change in the wave function given by the time-dependent Schrödinger equation is negligible) produce the same result.
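These postulates are straightforward to simulate. The sketch below assumes an illustrative observable with three states and real coefficients, draws outcomes with probabilities c1², c2², c3², and then repeats a measurement immediately to show that the second result always agrees with the first:

import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0])                                  # the possible measured values
c = np.array([0.2, 0.5, np.sqrt(1 - 0.2**2 - 0.5**2)])         # Psi = c1*psi1 + c2*psi2 + c3*psi3

def measure(coeffs):
    """Return the index of the observed value and the collapsed coefficients."""
    i = rng.choice(len(coeffs), p=coeffs**2)                   # outcome a_i occurs with probability c_i**2
    collapsed = np.zeros_like(coeffs)
    collapsed[i] = 1.0                                         # the state is now psi_i
    return i, collapsed

print("ten runs on identically prepared systems:", [a[measure(c)[0]] for _ in range(10)])
i, collapsed = measure(c)
j, _ = measure(collapsed)                                      # immediate repetition
print("repeated measurement reproduces the first value:", a[i] == a[j])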
The value of one observable can be determined by a single measurement. The value of two observables for a given system may be known at the same time, provided that the two observables have the same set of state functions ψ1, ψ2, . . . , ψN. In this case, measuring the first observable results in a state function that is one of the ψs. Because this is also a state function of the second observable, the result of measuring the latter can be predicted with certainty. Thus the values of both observables are known. (Although the ψs are the same for the two observables, the two sets of a values are, in general, different.) The two observables can be measured repeatedly in any sequence. After the first measurement, none of the measurements disturbs the system, and a unique pair of values for the two observables is obtained.
Incompatible observables
The measurement of two observables with different sets of state functions is a quite different situation. Measurement of one observable gives a certain result. The state function after the measurement is, as always, one of the states of that observable; however, it is not a state function for the second observable. Measuring the second observable disturbs the system, and the state of the system is no longer one of the states of the first observable. In general, measuring the first observable again does not produce the same result as the first time. To sum up, both quantities cannot be known at the same time, and the two observables are said to be incompatible.
A specific example of this behaviour is the measurement of the component of angular momentum along two mutually perpendicular directions. The Stern-Gerlach experiment mentioned above involved measuring the angular momentum of a silver atom in the ground state. In reconstructing this experiment, a beam of silver atoms is passed between the poles of a magnet. The poles are shaped so that the magnetic field varies greatly in strength over a very small distance (Figure 2). The apparatus determines the ms quantum number, which can be +1/2 or −1/2. No other values are obtained. Thus in this case the observable has only two states—i.e., N = 2. The inhomogeneous magnetic field produces a force on the silver atoms in a direction that depends on the spin state of the atoms. The result is shown schematically in Figure 3. A beam of silver atoms is passed through magnet A. The atoms in the state with ms = +1/2 are deflected upward and emerge as beam 1, while those with ms = −1/2 are deflected downward and emerge as beam 2. If the direction of the magnetic field is the x-axis, the apparatus measures Sx, which is the x-component of spin angular momentum. The atoms in beam 1 have Sx = +ℏ/2 while those in beam 2 have Sx = −ℏ/2. In a classical picture, these two states represent atoms spinning about the direction of the x-axis with opposite senses of rotation.
The y-component of spin angular momentum Sy also can have only the values +ℏ/2 and −ℏ/2; however, the two states of Sy are not the same as for Sx. In fact, each of the states of Sx is an equal mixture of the states for Sy, and conversely. Again, the two Sy states may be pictured as representing atoms with opposite senses of rotation about the y-axis. These classical pictures of quantum states are helpful, but only up to a certain point. For example, quantum theory says that each of the states corresponding to spin about the x-axis is a superposition of the two states with spin about the y-axis. There is no way to visualize this; it has absolutely no classical counterpart. One simply has to accept the result as a consequence of the axioms of the theory. Suppose that, as in Figure 3, the atoms in beam 1 are passed into a second magnet B, which has a magnetic field along the y-axis perpendicular to x. The atoms emerge from B and go in equal numbers through its two output channels. Classical theory says that the two magnets together have measured both the x- and y-components of spin angular momentum and that the atoms in beam 3 have Sx = +ℏ/2, Sy = +ℏ/2, while those in beam 4 have Sx = +ℏ/2, Sy = −ℏ/2. However, classical theory is wrong, because if beam 3 is put through still another magnet C, with its magnetic field along x, the atoms divide equally into beams 5 and 6 instead of emerging as a single beam 5 (as they would if they had Sx = +ℏ/2). Thus, the correct statement is that the beam entering B has Sx = +ℏ/2 and is composed of an equal mixture of the states Sy = +ℏ/2 and Sy = −ℏ/2—i.e., the x-component of angular momentum is known but the y-component is not. Correspondingly, beam 3 leaving B has Sy = +ℏ/2 and is an equal mixture of the states Sx = +ℏ/2 and Sx = −ℏ/2; the y-component of angular momentum is known but the x-component is not. The information about Sx is lost because of the disturbance caused by magnet B in the measurement of Sy.
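These statements can be checked with the standard 2 × 2 matrix representation of spin-1/2 angular momentum; the script below is only an illustration of the algebra, with ℏ set to 1:

import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])             # standard spin-1/2 matrices
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])

_, vx = np.linalg.eigh(Sx)                             # columns are eigenstates, eigenvalues ascending
_, vy = np.linalg.eigh(Sy)
up_x, up_y = vx[:, 1], vy[:, 1]                        # the +hbar/2 states of Sx and Sy

for k, label in ((1, "+hbar/2"), (0, "-hbar/2")):
    p = abs(np.vdot(vy[:, k], up_x))**2                # probability of each Sy outcome in beam 1
    print(f"P(Sy = {label} | Sx = +hbar/2) = {p:.2f}")  # 0.50 and 0.50

print(f"P(Sx = +hbar/2 | Sy = +hbar/2) = {abs(np.vdot(up_x, up_y))**2:.2f}")   # again 0.50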
Heisenberg uncertainty principle
The observables discussed so far have had discrete sets of experimental values. For example, the values of the energy of a bound system are always discrete, and angular momentum components have values that take the form mℏ, where m is either an integer or a half-integer, positive or negative. On the other hand, the position of a particle or the linear momentum of a free particle can take continuous values in both quantum and classical theory. The mathematics of observables with a continuous spectrum of measured values is somewhat more complicated than for the discrete case but presents no problems of principle. An observable with a continuous spectrum of measured values has an infinite number of state functions. The state function Ψ of the system is still regarded as a combination of the state functions of the observable, but the sum in equation (10) must be replaced by an integral.
Measurements can be made of position x of a particle and the x-component of its linear momentum, denoted by px. These two observables are incompatible because they have different state functions. The phenomenon of diffraction noted above illustrates the impossibility of measuring position and momentum simultaneously and precisely. If a parallel monochromatic light beam passes through a slit (Figure 4A), its intensity varies with direction, as shown in Figure 4B. The light has zero intensity in certain directions. Wave theory shows that the first zero occurs at an angle θ0, given by sin θ0 = λ/b, where λ is the wavelength of the light and b is the width of the slit. If the width of the slit is reduced, θ0 increases—i.e., the diffracted light is more spread out. Thus, θ0 measures the spread of the beam.
The experiment can be repeated with a stream of electrons instead of a beam of light. According to Broglie, electrons have wavelike properties; therefore, the beam of electrons emerging from the slit should widen and spread out like a beam of light waves. This has been observed in experiments. If the electrons have velocity u in the forward direction (i.e., the y-direction in Figure 4A), their (linear) momentum is p = meu. Consider px, the component of momentum in the x-direction. After the electrons have passed through the aperture, the spread in their directions results in an uncertainty in px by an amount
Δpx ≈ p sin θ0 = pλ/b, where λ is the wavelength of the electrons and, according to the Broglie formula, equals h/p. Thus, Δpx ≈ h/b. Exactly where an electron passed through the slit is unknown; it is only certain that an electron went through somewhere. Therefore, immediately after an electron goes through, the uncertainty in its x-position is Δx ≈ b/2. Thus, the product of the uncertainties is of the order of ℏ. More exact analysis shows that the product has a lower limit, given by
ΔxΔpx ≥ ℏ/2  (12)
This is the well-known Heisenberg uncertainty principle for position and momentum. It states that there is a limit to the precision with which the position and the momentum of an object can be measured at the same time. Depending on the experimental conditions, either quantity can be measured as precisely as desired (at least in principle), but the more precisely one of the quantities is measured, the less precisely the other is known.
The uncertainty principle is significant only on the atomic scale because of the small value of h in everyday units. If the position of a macroscopic object with a mass of, say, one gram is measured with a precision of 10⁻⁶ metre, the uncertainty principle states that its velocity cannot be measured to better than about 10⁻²⁵ metre per second. Such a limitation is hardly worrisome. However, if an electron is located in an atom about 10⁻¹⁰ metre across, the principle gives a minimum uncertainty in the velocity of about 10⁶ metre per second.
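Both estimates follow directly from the inequality; the sketch below reproduces their orders of magnitude with rounded constants:

hbar, me = 1.055e-34, 9.109e-31

def min_velocity_uncertainty(mass, dx):
    """Smallest velocity uncertainty allowed by dx*dp >= hbar/2."""
    return hbar / (2 * mass * dx)

print(f"1 g object located to 1e-6 m : dv >= {min_velocity_uncertainty(1e-3, 1e-6):.1e} m/s")
print(f"electron located to 1e-10 m : dv >= {min_velocity_uncertainty(me, 1e-10):.1e} m/s")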
The above reasoning leading to the uncertainty principle is based on the wave-particle duality of the electron. When Heisenberg first propounded the principle in 1927 his reasoning was based, however, on the wave-particle duality of the photon. He considered the process of measuring the position of an electron by observing it in a microscope. Diffraction effects due to the wave nature of light result in a blurring of the image; the resulting uncertainty in the position of the electron is approximately equal to the wavelength of the light. To reduce this uncertainty, it is necessary to use light of shorter wavelength—e.g., gamma rays. However, in producing an image of the electron, the gamma-ray photon bounces off the electron, giving the Compton effect (see above Early developments: Scattering of X-rays). As a result of the collision, the electron recoils in a statistically random way. The resulting uncertainty in the momentum of the electron is proportional to the momentum of the photon, which is inversely proportional to the wavelength of the photon. So it is again the case that increased precision in knowledge of the position of the electron is gained only at the expense of decreased precision in knowledge of its momentum. A detailed calculation of the process yields the same result as before (equation [12]). Heisenberg’s reasoning brings out clearly the fact that the smaller the particle being observed, the more significant is the uncertainty principle. When a large body is observed, photons still bounce off it and change its momentum, but, considered as a fraction of the initial momentum of the body, the change is insignificant.
The Schrödinger and Dirac theories give a precise value for the energy of each stationary state, but in reality the states do not have a precise energy. The only exception is in the ground (lowest energy) state. Instead, the energies of the states are spread over a small range. The spread arises from the fact that, because the electron can make a transition to another state, the initial state has a finite lifetime. The transition is a random process, and so different atoms in the same state have different lifetimes. If the mean lifetime is denoted as τ, the theory shows that the energy of the initial state has a spread of energy ΔE, given by
ΔE ≈ ℏ/τ  (13)
This energy spread is manifested in a spread in the frequencies of emitted radiation. Therefore, the spectral lines are not infinitely sharp. (Some experimental factors can also broaden a line, but their effects can be reduced; however, the present effect, known as natural broadening, is fundamental and cannot be reduced.) Equation (13) is another type of Heisenberg uncertainty relation; generally, if a measurement with duration τ is made of the energy in a system, the measurement disturbs the system, causing the energy to be uncertain by an amount ΔE, the magnitude of which is given by the above equation.
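For an excited atomic state with an assumed mean lifetime of 10⁻⁸ second (a typical order of magnitude, not a measured value), the relation gives a natural linewidth of order ten megahertz:

hbar, h, eV = 1.055e-34, 6.626e-34, 1.602e-19

tau = 1e-8                                # assumed mean lifetime of the state, seconds
dE = hbar / tau                           # energy spread of the level
print(f"dE ~ {dE/eV:.1e} eV, i.e. a frequency spread of ~{dE/h/1e6:.0f} MHz")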
Quantum electrodynamics
The application of quantum theory to the interaction between electrons and radiation requires a quantum treatment of Maxwell’s field equations, which are the foundations of electromagnetism, and the relativistic theory of the electron formulated by Dirac (see above Electron spin and antiparticles). The resulting quantum field theory is known as quantum electrodynamics, or QED.
QED accounts for the behaviour and interactions of electrons, positrons, and photons. It deals with processes involving the creation of material particles from electromagnetic energy and with the converse processes in which a material particle and its antiparticle annihilate each other and produce energy. Initially the theory was beset with formidable mathematical difficulties, because the calculated values of quantities such as the charge and mass of the electron proved to be infinite. However, an ingenious set of techniques developed (in the late 1940s) by Hans Bethe, Julian S. Schwinger, Tomonaga Shin’ichirō, Richard P. Feynman, and others dealt systematically with the infinities to obtain finite values of the physical quantities. Their method is known as renormalization. The theory has provided some remarkably accurate predictions.
According to the Dirac theory, two particular states in hydrogen with different quantum numbers have the same energy. QED, however, predicts a small difference in their energies; the difference may be determined by measuring the frequency of the electromagnetic radiation that produces transitions between the two states. This effect was first measured by Willis E. Lamb, Jr., and Robert Retherford in 1947. Its physical origin lies in the interaction of the electron with the random fluctuations in the surrounding electromagnetic field. These fluctuations, which exist even in the absence of an applied field, are a quantum phenomenon. The accuracy of experiment and theory in this area may be gauged by two recent values for the separation of the two states, expressed in terms of the frequency of the radiation that produces the transitions:
Comparison of the experimental and theoretical values for the separation of two states of hydrogen.
An even more spectacular example of the success of QED is provided by the value for μe, the magnetic dipole moment of the free electron. Because the electron is spinning and has electric charge, it behaves like a tiny magnet, the strength of which is expressed by the value of μe. According to the Dirac theory, μe is exactly equal to μB = eℏ/2me, a quantity known as the Bohr magneton; however, QED predicts that μe = (1 + a)μB, where a is a small number, approximately 1/860. Again, the physical origin of the QED correction is the interaction of the electron with random oscillations in the surrounding electromagnetic field. The best experimental determination of μe involves measuring not the quantity itself but the small correction term μe − μB. This greatly enhances the sensitivity of the experiment. The most recent results for the value of a are
Comparison of the experimental and theoretical values of the magnetic dipole moment.
Since a itself represents a small correction term, the magnetic dipole moment of the electron is measured with an accuracy of about one part in 10¹¹. One of the most precisely determined quantities in physics, the magnetic dipole moment of the electron can be calculated correctly from quantum theory to within about one part in 10¹⁰.
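The leading term of the QED correction is a ≈ α/2π, where α is the fine-structure constant; higher-order terms are needed for the full accuracy quoted above. A quick evaluation with rounded constants:

import math

e, hbar, me = 1.602e-19, 1.055e-34, 9.109e-31
alpha = 1 / 137.036                               # fine-structure constant

muB = e * hbar / (2 * me)                         # Bohr magneton, the Dirac value of mu_e
a = alpha / (2 * math.pi)                         # leading (one-loop) QED correction only
print(f"mu_B = {muB:.3e} J/T")
print(f"a ~ alpha/(2*pi) = {a:.6f}, i.e. roughly 1/{1/a:.0f}")
print(f"mu_e ~ (1 + a)*mu_B = {(1 + a)*muB:.4e} J/T")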
79e574a85d85e130 |
Paradox seemingly violating relativity
1. Sep 29, 2008 #1
I thought of a paradox that I'm not sure I can resolve for myself.
You have a 1-dim potential well of length L. You measure a particle to be within [tex]x=L/2\pm \sigma[/tex] with equal probability. Assume sigma is very tiny. After a short time [tex]\Delta t[/tex] the wavefunction has evolved to be non-zero even at [tex]x=L/3[/tex], say (with tiny probability). But if a measurement of the position of the particle actually yielded [tex]x=L/3\pm \sigma'[/tex], as unlikely it may be, wouldn't relativity be violated, in that the particle would have had to travel with velocity [tex] \frac{L/6}{\Delta t} [/tex], which for short enough [tex] \Delta t [/tex] might in fact be larger than the velocity of light?
My guess would be that this is making too much of non-relativistic quantum mechanics and that it would all work out if instead of speaking about velocity v one talked about the momentum p, which might become as large as necessary to satisfy the uncertainty relation, without making the velocity larger than c (in the same way as the momentum of a constantly accelerated particle in relativity can become as large as necessary without the velocity increasing very much).
Is this the resolution? If yes, is it possible to make a more convincing argument for that?
If no, what is the resolution?
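A small numerical aside on why the non-relativistic equation behaves like this: the modulus of the free-particle Schrödinger propagator does not depend on x at all, so amplitude appears at any distance after an arbitrarily short time. The times and distances below are only illustrative, and the constants are rounded:

[code]
import numpy as np

hbar, me, c = 1.055e-34, 9.109e-31, 2.998e8

def K(x, t):
    """Free-particle Schrodinger propagator K(x, t; 0, 0) in one dimension."""
    return np.sqrt(me / (2j * np.pi * hbar * t)) * np.exp(1j * me * x**2 / (2 * hbar * t))

dt = 1e-15                           # a very short time
for x in (0.0, 1e-6, 1.0):           # reaching x = 1 m in dt would need x/dt ~ 1e15 m/s >> c
    print(f"|K({x} m, {dt} s)|^2 = {abs(K(x, dt))**2:.3e}")
# |K|^2 is the same at every x: the equation spreads amplitude with no light-cone limit,
# which is the non-relativistic artefact discussed in the replies below.
[/code]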
3. Sep 29, 2008 #2
I would assume you are working at the microscopic scale where you must follow QM/HUP rules, not relativity. That would give you the freedom to consider “backwards time” in applying relativity, as in Feynman Diagrams, as long as any paradoxes are resolved or cancelled out before any result can reach the macro level.
4. Sep 29, 2008 #3
Vanadium 50
Why do you assume that the time you call [tex]\Delta t[/tex] can be made arbitrarily small?
5. Sep 30, 2008 #4
Having measured the position the first time, the subsequent evolution of the wave is not the original.
6. Sep 30, 2008 #5
Having once measured the position, the original evolution of the wave is replaced by another that is dependent upon the results of the measurement.
7. Sep 30, 2008 #6
Hans de Vries
Your guess is right. I presume you are talking about the Schrödinger equation, which
indeed violates special relativity. It's a good enough approximation in many cases,
but insufficient in others. For instance: gold would have a silver metallic color
according to the Schrödinger equation. The relativistic Dirac equation predicts the right color.
Regards, Hans
8. Sep 30, 2008 #7
You are entirely right. This is simply because you use a non-relativistic Hamiltonian here; or in other words, the Green function of the Schrödinger equation is not limited to within the light cone. In quantum field theory, this is not so: the Green functions remain within the light cone.
However, it is not the Schrödinger equation itself (hbar/i d/dt psi = H psi) which is the culprit, but rather the form of H, which is derived from non-relativistic mechanics.
The classical diffusion equation has the same problem.
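To make the point about superluminal spreading concrete, here is a minimal numerical sketch (an editorial illustration, not part of the original thread). It uses the standard analytic spreading law for a free Gaussian wave packet under the non-relativistic Schrödinger equation; the packet width, elapsed time, and choice of an electron mass are arbitrary.

```python
from math import erfc, sqrt

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg (arbitrary choice of particle)
c = 2.99792458e8         # speed of light, m/s

sigma0 = 1e-12           # initial packet width sigma (m), "very tiny"
dt = 1e-21               # a short elapsed time (s)

# Width of a free Gaussian packet after time dt (standard analytic result
# for the free-particle Schroedinger equation):
sigma_t = sigma0 * sqrt(1.0 + (hbar * dt / (2.0 * m * sigma0**2))**2)

# Probability of finding the particle farther than c*dt from its starting
# point, i.e. outside the light cone (two-sided Gaussian tail):
p_outside = erfc(c * dt / (sqrt(2.0) * sigma_t))

print(f"light-cone radius c*dt = {c * dt:.2e} m, packet width = {sigma_t:.2e} m")
print(f"probability outside the light cone: {p_outside:.2f}")
```

With these (arbitrary) numbers most of the probability already lies outside the light cone, which is exactly the non-relativistic pathology discussed above; the cure is the relativistic treatment mentioned in the replies.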
9. Sep 30, 2008 #8
@RandallB I think I understand what you mean, if you say that certain processes, seemingly violating known laws, may arise which then don't contribute in the end to what is effectively observed (path integral formalism). But I don't think that the act of finding a particle at a place where it cannot arrive by traveling at a velocity smaller than c is one of those processes.
@Vanadium 50 if you have a wavefunction which has a rectangular shape (in the psi-x plane) in this potential well, at t=0, then generally it will have a non-zero value within this well for any t>0 (of course there might be nodes, but only a finite amount of them, so this does not disturb the general line of the argument).
@Phrak I agree that a measurement changes the initial wavefunction. But what I imagined is that at t=0 you happen to know that the particle is located in the interval [tex] x=L/2\pm \sigma [/tex] and using the Schrödinger equation you know how it will evolve. So you can find out at any time afterwards how the wave function looks (no intermediate measurement allowed, of course).
@Hans de Vries Wow, I hadn't heard of the discrepancy you mentioned about the shine of metal. Very impressive.
@vanesh I see how the classical diffusion equation has the same problem.
I guess I will have to study QFT soon :-)
Thanks a lot everyone!
10. Oct 3, 2008 #9
"You measure a particle to be within x = L/2 +/- sigma with equal probability."
How does one do this?
|
c52b5c89dc555f15 | Psychology Wiki
Many-minds interpretation
The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence, and later (in 1981) explicitly called a Many-(or multi-)consciousness Interpretation. The name many-minds interpretation was first used by David Albert and B. Loewer in their 1988 work Interpreting the Many Worlds Interpretation.
The central problems
1. Unitary evolution by the Schrödinger equation, and
2. State vector reduction (collapse) of the quantum state upon measurement.
In the introduction to his paper, The Problem Of Conscious Observation In Quantum Mechanical Description (June, 2000) H. D. Zeh offered an empirical basis for connecting the processes involved in (2) with conscious observation:
"John von Neumann seems to have first clearly pointed out the conceptual difficulties that arise when one attempts to formulate the physical process underlying subjective observation within quantum theory. He emphasized the latter’s incompatibility with a psycho-physical parallelism, the traditional way of reducing the act of observation to a physical process. Based on the assumption of a physical reality in space and time, one either assumes a coupling (causal relationship — one-way or bidirectional) of matter and mind, or disregards the whole problem by retreating to pure behaviorism. However, even this may remain problematic when one attempts to describe classical behavior in quantum mechanical terms. Neither position can be upheld without fundamental modifications in a consistent quantum mechanical description of the physical world."
The Many-worlds Interpretation
Main article: Many-worlds interpretation
Hugh Everett described a way out of this problem by suggesting that the universe is in fact indeterminate as a whole. That is, if you were to measure the spin of a particle and find it to be "up", in fact there are two "yous" after the measurement, one who measured the spin up, the other spin down. This relative state formulation, where all states (sets of measures) can only be measured relative to other such states, avoids a number of problems in quantum theory, including the original duality - no collapse takes place, the indeterminacy simply grows (or moves) to a larger system. Effectively by looking at the system in question, you take on its indeterminacy.
Everett claims that the universe has a quantum state, which he called the universal wavefunction, that always evolves according to the Schrödinger equation or some relativistic equivalent; now the measurement problem suggests the universal wavefunction will be in a superposition corresponding to many different definite macroscopic realms (`macrorealms'); that one can recover the subjective appearance of a definite macrorealm by postulating that all the various definite macrorealms are actual---`we just happen to be in one rather than the others' in the sense that "we" are in all of them, but they are mutually unobservable.
Continuous infinity of minds
The idea of many minds was suggested early on by Zeh in 1995. He argues that in a decohering no-collapse universe one can avoid the necessity of macrorealms by introducing a new psycho-physical parallelism, in which individual minds supervene on each non-interfering component in the physical state. Zeh indeed suggests that, given decoherence, this is the most natural interpretation of quantum mechanics.
The main difference between `many minds' and `many worlds' interpretations then lies in the definition of the preferred quantity. The `many minds' interpretations suggests that to solve the measurement problem, there is no need to secure a definite macrorealm: the only thing that's required is appearance of such. A bit more precisely: the idea is that the preferred quantity is whatever physical quantity, defined on brains (or brains and parts of their environments), has definite-valued states (eigenstates) that underpin such appearances, i.e. underpin the states of belief in, or sensory experience of, the familiar macroscopic realm.
In its original version (related to decoherence), there is no process of selection. The process of quantum decoherence explains in terms of the Schrödinger equation how certain components of the universal wave function become irreversibly dynamically independent of one another (separate worlds - even though there is but one quantum world that does NOT split). These components may (each) contain definite quantum states of observers, while the total quantum state may not. These observer states may then be assumed to correspond to definite states of awareness (minds), just as in a classical description of observation. States of different observers are consistently entangled with one another, thus warranting objective results of measurements.
However, Albert and Loewer suggest that the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own. The mind selects one of these identities to be its non-random reality, while the universe itself is unaffected. The process for selection of a single state remains unexplained. This is particularly problematic because it is not clear how different observers would thus end up agreeing on measurements, which happens all the time here in the real world. There is assumed to be a sort of feedback between the mental process that leads to selection and the universal wavefunction, thereby affecting other mental states as a matter of course. In order to make the system work, the "mind" must be separate from the body, an old duality of philosophy to replace the new one of quantum mechanics.
In general this interpretation has received little attention, largely for this last reason.
Objections
Objections that apply to the many-worlds interpretation also apply to the many-minds interpretation. On the surface both of these theories expressly violate Occam's razor; proponents counter that in fact these solutions minimize entities by simplifying the rules that would be required to describe the universe.
Another serious objection is that workers in no collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behavior of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behavior.
In his ‘What is it like to be Schrödinger's cat?’ (2000), Peter J. Lewis argues that the many minds interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions.
In general, the many minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises you to favor choices in such situations in proportion to the probability that they will bring good or bad results to your various successors. But in a life-or-death case like getting into the box with Schrödinger’s cat, you will only have one successor, since one of the outcomes will ensure your death. So it seems that the many minds interpretation advises you to get in the box with the cat, since it is certain that your only successor will emerge unharmed. Compare: Quantum suicide and Quantum immortality
See also
External links
|
3e56496c1be347ff | Bohr’s shell model
In 1913 Bohr proposed his quantized shell model of the atom (see Bohr atomic model) to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.
The Bohr atom: The electron travels in circular orbits around the nucleus. The orbits have quantized sizes and energies. Energy is emitted from the atom when the electron jumps from one orbit to another closer to the nucleus. Shown here is the first Balmer transition, in which an electron jumps from orbit n = 3 to orbit n = 2, producing a photon of red light with an energy of 1.89 eV and a wavelength of 656 nanometres. (Encyclopædia Britannica, Inc.)
Bohr’s starting point was to realize that classical mechanics by itself could never explain the atom’s stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants—namely, the charges and the masses of the electron and the nucleus—cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (Latin for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck’s hypothesis, however, the radiation can be emitted only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck’s formula correctly describes radiation from heated bodies. Planck’s constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck’s constant can be written as h = 6.6 × 10−34 joule∙seconds.
In 1905 Einstein extended Planck's hypothesis by proposing that the radiation itself can carry energy only in quanta. According to Einstein, the energy (E) of the quantum is related to the frequency (ν) of the light by Planck's constant in the formula E = hν. Using Planck's constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized—i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n. In Bohr's model, the radius aₙ of the orbit n is given by the formula aₙ = n²h²ε₀/(πme²), where ε₀ is the electric constant, m is the mass of the electron, and e is the charge of the electron. As Bohr had noticed, the radius of the n = 1 orbit is approximately the same size as an atom.
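A quick numerical check of the orbit-radius formula (an illustrative addition, using rounded CODATA constants):

```python
import math

h = 6.62607015e-34       # Planck's constant, J*s
eps0 = 8.8541878128e-12  # electric constant, F/m
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C

def bohr_radius(n):
    """Radius of the n-th Bohr orbit: a_n = n^2 h^2 eps0 / (pi m_e e^2)."""
    return n**2 * h**2 * eps0 / (math.pi * m_e * e**2)

# As noted above, the n = 1 orbit comes out near the known size of an atom.
for n in (1, 2, 3):
    print(f"a_{n} = {bohr_radius(n):.3e} m")   # a_1 is about 5.29e-11 m (0.53 angstrom)
```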
With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits.
Bohr’s model accounts for the stability of atoms because the electron cannot lose more energy than it has in the smallest orbit, the one with n = 1. The model also explains the Balmer formula for the spectral lines of hydrogen. The light energy is the difference in energies between the two orbits in the Bohr formula. Using Einstein’s formula to deduce the frequency of the light, Bohr not only explained the form of the Balmer formula but also explained accurately the value of the constant of proportionality R.
The usefulness of Bohr’s theory extends beyond the hydrogen atom. Bohr himself noted that the formula also applies to the singly ionized helium atom, which, like hydrogen, has a single electron. The nucleus of the helium atom has twice the charge of the hydrogen nucleus, however. In Bohr’s formula the charge of the electron is raised to the fourth power. Two of those powers stem from the charge on the nucleus; the other two come from the charge on the electron itself. Bohr modified his formula for the hydrogen atom to fit the helium atom by doubling the charge on the nucleus. Moseley applied Bohr’s formula with an arbitrary atomic charge Z to explain the K- and L-series X-ray spectra of heavier atoms. German physicists James Franck and Gustav Hertz confirmed the existence of quantum states in atoms in experiments reported in 1914. They made atoms absorb energy by bombarding them with electrons. The atoms would only absorb discrete amounts of energy from the electron beam. When the energy of an electron was below the threshold for producing an excited state, the atom would not absorb any energy.
Bohr’s theory had major drawbacks, however. Except for the spectra of X-rays in the K and L series, it could not explain properties of atoms having more than one electron. The binding energy of the helium atom, which has two electrons, was not understood until the development of quantum mechanics. Several features of the spectrum were inexplicable even in the hydrogen atom. High-resolution spectroscopy shows that the individual spectral lines of hydrogen are divided into several closely spaced fine lines. In a magnetic field the lines split even farther apart. German physicist Arnold Sommerfeld modified Bohr’s theory by quantizing the shapes and orientations of orbits to introduce additional energy levels corresponding to the fine spectral lines.
Energy levels of the hydrogen atom, according to Bohr's model and quantum mechanics using the Schrödinger equation and the Dirac equation. (Encyclopædia Britannica, Inc.)
The quantization of the orientation of the angular momentum vector was confirmed in an experiment in 1922 by other German physicists, Otto Stern and Walther Gerlach. Their experiment took advantage of the magnetism associated with angular momentum; an atom with angular momentum has a magnetic moment like a compass needle that is aligned along the same axis. The researchers passed a beam of silver atoms through a magnetic field, one that would deflect the atoms to one side or another according to the orientation of their magnetic moments. In their experiment Stern and Gerlach found only two deflections, not the continuous distribution of deflections that would have been seen if the magnetic moment had been oriented in any direction. Thus, it was determined that the magnetic moment and the angular momentum of an atom can have only two orientations. The discrete orientations of the orbits explain some of the magnetic field effects—namely, the so-called normal Zeeman effect, which is the splitting of a spectral line into three separate subsidiary lines. These lines correspond to quantum jumps in which the angular momentum along the magnetic field is increased by one unit, decreased by one unit, or left unchanged.
Spectra in magnetic fields displayed additional splittings that showed that the description of the electrons in atoms was still incomplete. In 1925 Samuel Abraham Goudsmit and George Eugene Uhlenbeck, two graduate students in physics at the University of Leiden in the Netherlands, added a quantum number to account for the division of some spectral lines into more subsidiary lines than can be explained with the original quantum numbers. Goudsmit and Uhlenbeck postulated that an electron has an internal spinning motion and that the corresponding angular momentum is one-half of the orbital angular momentum quantum. Independently, Austrian-born physicist Wolfgang Pauli also suggested adding a two-valued quantum number for electrons, but for different reasons. He needed this additional quantum number to formulate his exclusion principle, which serves as the atomic basis of the periodic table and the chemical behaviour of the elements. According to the Pauli exclusion principle, one electron at most can occupy an orbit, taking into account all the quantum numbers. Pauli was led to this principle by the observation that an alkali metal atom in a magnetic field has a number of orbits in the shell equal to the number of electrons that must be added to make the next noble gas. These numbers are twice the number of orbits available if the angular momentum and its orientation are considered alone.
In spite of these modifications, by the early 1920s Bohr’s model seemed to be a dead end. It could not explain the number of fine spectral lines and many of the frequency shifts associated with the Zeeman effect. Most daunting, however, was its inability to explain the rich spectra of multielectron atoms. In fact, efforts to generalize the model to multielectron atoms had proved futile, and physicists despaired of ever understanding them.
|
0a3b4105416ebc48 |
The formula h/lambda: is this for the photon only?
1. Feb 25, 2006 #1
The formula [tex]\frac{h}{\lambda}[/tex]:
is this for the photon only? Or can it be applied to relativistic electrons too?
3. Feb 25, 2006 #2
It applies to particles with zero rest mass. Hence it won't apply to relativistic electrons.
4. Feb 26, 2006 #3
So for relativistic electrons, if I wanted its speed, I'd use .5mv^2?
5. Feb 26, 2006 #4
1/2 m v^2 only works for non-relativistic speeds; the energy for a relativistic particle is different. See here for more details.
6. Feb 26, 2006 #5
The same question was asked by de Broglie. And actually, it turned out that it does apply.
7. Feb 26, 2006 #6
Yes, but UrbanXrises' original formula was either a typo or assumed that c=1. With c=1 this formula is, in fact, only good for massless particles. DeBroglie's relationship involves the speed, which is less than c.
8. Feb 26, 2006 #7
No, it comes from:
[tex]E = pc = \frac{hc}{\lambda}[/tex]
where c's cancel, and de Broglie's equation relates momentum and wavelength.
9. Feb 26, 2006 #8
:redface: I was thinking of the energy equation. Sorry! (Ahem!)
Even though I got my c's wrong, the argument still holds... E = pc only holds for massless particles, which was what I was trying to say.
Last edited: Feb 26, 2006
10. Feb 26, 2006 #9
so [tex]p=\frac{h}{\lambda}[/tex] is for massless particles
but what about [tex]E=fh[/tex]?
is this equation for massless particles too?
11. Feb 26, 2006 #10
No. This equation is good for anything. Basically this equation simply expresses the quantizability of energy.
12. Feb 27, 2006 #11
No, it applies to all particles! That's the backbone of the Schrödinger equation!
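To make the thread's conclusion concrete, here is a minimal sketch (an editorial addition) of the de Broglie wavelength of a relativistic electron, using the relativistic momentum p = γmv; the speed 0.9c is an arbitrary example.

```python
import math

h = 6.62607015e-34      # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s

def de_broglie_wavelength(v):
    """lambda = h / p, with the relativistic momentum p = gamma * m * v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return h / (gamma * m_e * v)

v = 0.9 * c
print(f"lambda = {de_broglie_wavelength(v):.3e} m")   # about 1.2e-12 m
```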
|
028ef8884111d023 | University has become an arms race: We must give students an alternative
Kristian Niemietz
THESE are not great times to graduate from university. Against the backdrop of a declining graduate premium (the difference between the earnings of graduates and non-graduates), university fees have shot up, and both the graduate unemployment rate and the share of graduates who have ended up in non-graduate jobs have increased (from 37 per cent in 2001 to 47 per cent today, according to the Office for National Statistics). Taken together, this means that university education is now considerably more costly for students, but it “buys” less in extra earnings and in facilitating graduates’ entry into the jobs market.
Does this mean Britain just has “too many” graduates? The UK has seen a large increase in the number of students in higher education over a relatively short period – from 1.9m in 2000-1 to 2.5m in 2011-12, according to the Higher Education Statistics Agency. In a sense, there are too many graduates. But the reason is not that young people are making the “wrong” choices. There is a more subtle mechanism at work.
Broadly speaking, higher education has always served two distinct purposes. The most obvious is that it allows students to acquire skills. But independent of this, degrees also have a signalling function. They show employers that the degree holder possesses, or is likely to possess, a set of (unobservable) characteristics which employers tend to value.
If an employer chooses an applicant with a physics degree for a job that requires no knowledge of physics whatsoever, we see this signalling function at work. The employer may not care about positrons or the Schrödinger equation, but they do care about the unobservable qualities (like a capacity for hard work) that have enabled the applicant to acquire such knowledge.
The rise in the number of graduates affects these two functions in different ways. As far as skill acquisition is concerned, more people graduating (from high standard courses) is a good thing. For those with a rare skill, an increase in the number of people possessing similar skills may be uncomfortable, since it increases competitive pressures. But these people also benefit from the fact that other skills have also become more widely available. Skills are complementary and mutually-reinforcing, so there are gains to be had from being part of a highly-skilled workforce, even if it means that no single “skill holder” is indispensable.
As far as the signalling function is concerned, however, an increase in the number of graduates is not unequivocally positive. This is because a university degree can have the properties of what economists call a “positional good”. Very simply, this means a good that people acquire in order to stand out from the crowd – to differentiate themselves from others. But a positional good loses its purpose, of course, if most members of that crowd also begin to acquire it. That is why the quest for positional goods can turn into an unproductive arms race.
As long as only a few people graduate, a job applicant with a university degree stands out from the crowd. But when most other applicants also have degrees, the degree loses that function. The signalling now works the other way round: not having a degree becomes a disadvantage.
Some people will feel compelled to go to university. They will not do so because they really want to, but because they think that, in order to avoid conveying that negative signal, they simply have to. As a result, too many young people end up doing a job they could have done without a degree. But worse, they’re saddled with student debt, and have suffered the opportunity cost of entering the job market several years later. These are the dynamics of an arms race.
In order to find a way out, we have to give those who demand university education primarily for its signalling effect a possibility to convey an equivalent signal at a much lower cost. In a lot of other countries, high-quality apprenticeships fulfil such a function, and there is no reason why it should not work that way here. But in order to get there, we have to make it a lot more attractive for companies to provide vocational training. This means we have to stop treating apprenticeships as mini-jobs. They are not. They are an alternative form of education. An hourly minimum wage of £2.68 for apprentices may not sound like much. But historically, apprentices used to pay for the training they received, which made apprenticeships much more attractive to employers.
An expansion in vocational education would give more people the chance to acquire the type of skills they actually want. As a nice side effect, it would also increase competitive pressures on universities. As anyone who has wrestled with a university bureaucracy knows, some could do with a good dose of that.
|
faf47b72d623fb5f | Eigenvalues and eigenvectors
From Wikipedia, the free encyclopedia
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that does not change its direction when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in the field F, known as the eigenvalue associated with the eigenvector v.
If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left hand side and a scaling of the column vector on the right hand side in the equation Av = λv.
There is a correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations.[1][2]
Geometrically an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction that is stretched by the transformation and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.[3]
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper", "inherent"; "own", "individual", "special"; "specific", "peculiar", or "characteristic".[4] Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation T(v) = λv.
In this shear mapping the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it doesn't change direction, and since its length is unchanged, its eigenvalue is 1.
• The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T.[7][8]
• If the set of eigenvectors of T form a basis of the domain of T, then this basis is called an eigenbasis.
Eigenvalues and eigenvectors of matrices[edit]
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations can be represented using matrices,[1][2] which is especially common in numerical and computational applications.[25]
In this case λ = −1/20.
where, for each row,
Equation (1) can be stated equivalently as (A − λI)v = 0 (Equation (2)), where I is the n by n identity matrix.
Eigenvalues and the characteristic polynomial[edit]
Equation (2) has a non-zero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation det(A − λI) = 0 (Equation (3)); the left-hand side, viewed as a polynomial in λ, is the characteristic polynomial of A.
The fundamental theorem of algebra implies that the characteristic polynomial of an n by n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ) (Equation (4)), where the numbers λ1, λ2, ..., λn, which may not all be distinct, are the roots of the polynomial and are precisely the eigenvalues of A.
Taking the determinant of (M − λI), the characteristic polynomial of M is det(M − λI) = λ² − 4λ + 3 = (1 − λ)(3 − λ).
Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of M. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation Mv = λv. In this example, the eigenvectors are any non-zero scalar multiples of vλ=1 = [1 −1]T and vλ=3 = [1 1]T.
If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have non-zero imaginary parts. The entries of the corresponding eigenvectors therefore may also have non-zero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues are complex algebraic numbers.
Algebraic multiplicity[edit]
Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)^k divides that polynomial evenly.[8][26][27]
Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas Equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, det(A − λI) = (λ1 − λ)^μA(λ1) (λ2 − λ)^μA(λ2) ⋯ (λd − λ)^μA(λd).
If d = n then the right hand side is the product of n linear terms and this is the same as Equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as 1 ≤ μA(λi) ≤ n, and the algebraic multiplicities sum to n: μA(λ1) + μA(λ2) + ⋯ + μA(λd) = n.
If μA(λi) = 1, then λi is said to be a simple eigenvalue.[27] If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.
Eigenspaces, geometric multiplicity, and the eigenbasis for matrices[edit]
Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy Equation (2), E = {v : (A − λI)v = 0}. On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any non-zero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ.[7][8] In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of ℂn.
The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) by the rank-nullity theorem: γA(λ) = n − rank(A − λI).
The condition that γA(λ) ≤ μA(λ) can be proven by considering a particular eigenvalue ξ of A and diagonalizing the first γA(ξ) columns of A with respect to ξ's eigenvectors, described in a later section. The resulting similar matrix B is block upper triangular, with its top left block being the diagonal matrix ξI of dimension γA(ξ). As a result, the characteristic polynomial of B will have a factor of (ξ − λ)^γA(ξ). The other factors of the characteristic polynomial of B are not known, so the algebraic multiplicity of ξ as an eigenvalue of B is no less than the geometric multiplicity of ξ as an eigenvalue of A. The last element of the proof is the property that similar matrices have the same characteristic polynomial.
Suppose A has d ≤ n distinct eigenvalues λ1, λ2, ..., λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A, γA = γA(λ1) + γA(λ2) + ⋯ + γA(λd),
is the dimension of the union of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA = n, then
• The union of the eigenspaces of all of A's eigenvalues is the entire vector space ℂn
• A basis of ℂn can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis
• Any vector in ℂn can be written as a linear combination of eigenvectors of A
Additional properties of eigenvalues[edit]
Let A be an arbitrary n by n matrix of complex numbers with eigenvalues λ1, λ2, ..., λn. Each eigenvalue appears μA(λi) times in this list, where μA(λi) is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues (several of them are checked numerically in the sketch after this list):
• The determinant of A is the product of all its eigenvalues, det(A) = λ1λ2⋯λn.
• The eigenvalues of the kth power of A, i.e. the eigenvalues of A^k, for any positive integer k, are λ1^k, λ2^k, ..., λn^k.
• The matrix A is invertible if and only if every eigenvalue is nonzero.
• If A is invertible, then the eigenvalues of A−1 are 1/λ1, 1/λ2, ..., 1/λn and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
• If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
• If A is unitary, every eigenvalue has absolute value |λi| = 1.
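A minimal numerical check of several of these properties (an illustrative addition; the 4 by 4 test matrix is random and otherwise arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # arbitrary real test matrix
lam, V = np.linalg.eig(A)         # eigenvalues and right eigenvectors

# The determinant of A is the product of all its eigenvalues.
print(np.allclose(np.linalg.det(A), np.prod(lam)))      # True

# If A v = lambda v, then (A @ A) v = lambda^2 v and inv(A) v = (1/lambda) v.
v, l = V[:, 0], lam[0]
print(np.allclose(A @ A @ v, l**2 * v))                 # True
print(np.allclose(np.linalg.inv(A) @ v, v / l))         # True

# A real symmetric (hence Hermitian) matrix has only real eigenvalues.
H = A + A.T
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))      # True
```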
Left and right eigenvectors[edit]
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n by n matrix A in the defining equation, Equation (1), Av = λv.
The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A. In this formulation, the defining equation is uA = κu,
where κ is a scalar and u is a 1 by n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the conjugate transpose of this equation gives A*u* = κ*u*.
Comparing this equation to Equation (1), the left eigenvectors of A are the conjugate transpose of the right eigenvectors of A*. The eigenvalues of the left eigenvectors are the solution of the characteristic polynomial |A* − κ*I|=0. Because the identity matrix is Hermitian and |M*| = |M|* for a square matrix M, the eigenvalues of the left eigenvectors of A are the complex conjugates of the eigenvalues of the right eigenvectors of A. Recall that if A is a real matrix, all of its complex eigenvalues appear in complex conjugate pairs. Therefore, the eigenvalues of the left and right eigenvectors of a real matrix are the same. Similarly, if A is a real matrix, all of its complex eigenvectors also appear in complex conjugate pairs. Therefore, the left eigenvectors simplify to the transpose of the right eigenvectors of AT if A is real.
Diagonalization and the eigendecomposition[edit]
or by instead left multiplying both sides by Q−1,
Variational characterization[edit]
Main article: Min-max theorem
Matrix examples[edit]
Two-dimensional matrix example[edit]
The transformation matrix A = [[2, 1], [1, 2]] preserves the direction of vectors parallel to vλ=1 = [1 −1]T (in purple) and vλ=3 = [1 1]T (in blue). The vectors in red are not parallel to either eigenvector, so their directions are changed by the transformation.
Consider the matrix A = [[2, 1], [1, 2]].
Taking the determinant to find the characteristic polynomial of A, det(A − λI) = (2 − λ)(2 − λ) − 1 = λ² − 4λ + 3 = (λ − 1)(λ − 3), so the eigenvalues of A are λ = 1 and λ = 3.
For λ = 1, Equation (2) becomes (A − I)v = 0, that is, [[1, 1], [1, 1]]v = 0.
Any non-zero vector with v1 = −v2 solves this equation. Therefore, any non-zero scalar multiple of vλ=1 = [1 −1]T is an eigenvector of A associated with λ = 1.
For λ = 3, Equation (2) becomes (A − 3I)v = 0, that is, [[−1, 1], [1, −1]]v = 0.
Any non-zero vector with v1 = v2 solves this equation. Therefore, any non-zero scalar multiple of vλ=3 = [1 1]T is an eigenvector of A associated with λ = 3.
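A quick numerical check of this example (an illustrative addition) with NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial det(A - lambda*I):
print(np.poly(A))     # [ 1. -4.  3.], i.e. lambda^2 - 4*lambda + 3

lam, V = np.linalg.eig(A)
print(lam)            # eigenvalues 3 and 1 (order is not guaranteed)
print(V)              # columns are unit eigenvectors, one proportional to
                      # [1, 1] and the other to [1, -1]
```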
Three-dimensional matrix example[edit]
Consider the matrix
The characteristic polynomial of A is
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors and , or any non-zero multiple thereof.
Three-dimensional matrix example with complex eigenvalues[edit]
Consider the cyclic permutation matrix
Its characteristic polynomial is 1 − λ³, whose roots are λ1 = 1, λ2 = −1/2 + (√3/2)i, and λ3 = −1/2 − (√3/2)i, where i = √−1 is the imaginary unit.
For the real eigenvalue λ1 = 1, any vector with three equal non-zero entries is an eigenvector. For example,
For the complex conjugate pair of imaginary eigenvalues, note that
Therefore, the other two eigenvectors of A are complex and are and with eigenvalues λ2 and λ3, respectively. Note that the two complex eigenvectors also appear in a complex conjugate pair,
Diagonal matrix example[edit]
The characteristic polynomial of A is
Each diagonal element corresponds to an eigenvector whose only non-zero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,
respectively, as well as scalar multiples of these vectors.
Triangular matrix example[edit]
Consider the lower triangular matrix,
The characteristic polynomial of A is
These eigenvalues correspond to the eigenvectors,
respectively, as well as scalar multiples of these vectors.
Matrix with repeated eigenvalues example[edit]
As in the previous example, the lower triangular matrix
On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0 1 -1 1]T and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0 0 0 1]T. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
Eigenvalues and eigenfunctions of differential operators[edit]
Main article: Eigenfunction
Derivative operator example[edit]
Consider the derivative operator d/dt, with eigenvalue equation df(t)/dt = λf(t).
Its solution, the exponential function f(t) = f(0)e^(λt),
is the eigenfunction of the derivative operator. Note that in this case the eigenfunction is itself a function of its associated eigenvalue. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.
The main eigenfunction article gives other examples.
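A one-line symbolic check of this eigenvalue equation (an illustrative addition, using SymPy):

```python
import sympy as sp

t, lam = sp.symbols('t lam')
f = sp.exp(lam * t)      # candidate eigenfunction of the operator d/dt

# d/dt f(t) - lam*f(t) simplifies to zero, so f is an eigenfunction of the
# derivative operator with eigenvalue lam.
print(sp.simplify(sp.diff(f, t) - lam * f))   # 0
```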
General definition[edit]
We say that a non-zero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that T(v) = λv. (5)
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. Note that T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[33]
Eigenspaces, geometric multiplicity, and the eigenbasis[edit]
Given an eigenvalue λ, consider the set E = {v ∈ V : T(v) = λv}, which is the union of the zero vector with the set of all eigenvectors of T associated with λ.
By definition of a linear transformation, T(x + y) = T(x) + T(y) and T(αx) = αT(x)
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then T(u + v) = λ(u + v) and T(αv) = λ(αv).
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[8][34][35] If that subspace has dimension 1, it is sometimes called an eigenline.[36]
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[8][27] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
Zero vector as an eigenvector[edit]
Consider again the eigenvalue equation, Equation (5). Define an eigenvalue to be any scalar λ ∈ K such that there exists a non-zero vector vV satisfying Equation (5). It is important that this version of the definition of an eigenvalue specify that the vector be non-zero, otherwise by this definition the zero vector would allow any scalar in K to be an eigenvalue. Define an eigenvector v associated with the eigenvalue λ to be any vector that, given λ, satisfies Equation (5). Given the eigenvalue, the zero vector is among the vectors that satisfy Equation (5), so the zero vector is included among the eigenvectors by this alternate definition.
Spectral theory[edit]
Main article: Spectral theory
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
Associative algebras and representation theory[edit]
Dynamic equations[edit]
The simplest difference equations have the form
Main article: Eigenvalue algorithm
It turns out that any polynomial of degree n is the characteristic polynomial of some companion matrix of order n. Therefore, by the Abel-Ruffini theorem there is no general explicit algebraic formula for the roots of a polynomial of degree 5 or more, and so, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula and must therefore be computed by approximate numerical methods.
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the QR algorithm in 1961. [39] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[39]
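For illustration, here is the simplest iterative eigenvalue method, power iteration (a sketch added for concreteness; it is neither the QR algorithm nor the Lanczos algorithm mentioned above, and it converges only when one eigenvalue dominates in magnitude):

```python
import numpy as np

def power_iteration(A, num_iters=500):
    """Approximate the dominant eigenvalue/eigenvector pair of A by
    repeatedly applying A to a vector and renormalizing."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    lam = v @ (A @ v) / (v @ v)   # Rayleigh quotient
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)   # about 3.0, the eigenvalue of largest magnitude
print(v)     # about [0.707, 0.707], i.e. proportional to [1, 1]
```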
we can find its eigenvectors by solving the equation , that is
This matrix equation is equivalent to two linear equations
that is
Both equations reduce to the single linear equation . Therefore, any vector of the form , for any non-zero real number , is an eigenvector of with eigenvalue .
The matrix above has another eigenvalue . A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of , that is, any vector of the form , for any non-zero real number .
Eigenvalues of geometric transformations[edit]
[Table: eigenvalues of geometric transformations of the plane: equal scaling (homothety), unequal scaling (vertical shrink and horizontal stretch of a unit square), rotation by 50 degrees, horizontal shear mapping, and hyperbolic rotation. For each transformation the table compares an illustration, the characteristic polynomial, the eigenvalues, their algebraic and geometric multiplicities, and the eigenvectors; for equal scaling, all non-zero vectors are eigenvectors.]
Note that the characteristic equation for a rotation by angle θ is the quadratic λ² − 2λ cos θ + 1 = 0, with discriminant D = 4(cos²θ − 1) = −4 sin²θ, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are the complex numbers cos θ ± i sin θ = e^(±iθ); and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
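A quick numerical confirmation (an illustrative addition; the 50 degree angle matches the rotation mentioned in the table above):

```python
import numpy as np

theta = np.deg2rad(50.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

lam = np.linalg.eigvals(R)
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])

# The eigenvalues are cos(theta) +/- i*sin(theta) = e^(+/- i*theta).
print(np.allclose(sorted(lam, key=lambda z: z.imag),
                  sorted(expected, key=lambda z: z.imag)))   # True
```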
Schrödinger equation[edit]
Molecular orbitals[edit]
Geology and glaciology[edit]
Principal component analysis[edit]
Vibration analysis[edit]
Mode Shape of a Tuning Fork at Eigenfrequency 440.09 Hz
Main article: Vibration
where is the eigenvalue and is the (imaginary) angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of alone. Furthermore, damped vibration, governed by
leads to a so-called quadratic eigenvalue problem,
Eigenfaces as examples of eigenvectors
Main article: Eigenface
Tensor of moment of inertia[edit]
Stress tensor[edit]
Basic reproduction number[edit]
See also[edit]
1. ^ a b Herstein (1964, pp. 228,229)
2. ^ a b Nering (1970, p. 38)
3. ^ Burden & Faires (1993, p. 401)
4. ^ Betteridge (1965)
5. ^ Press (2007, p. 536)
6. ^ Wolfram Research, Inc. (2010) Eigenvector. Accessed on 2016-04-01.
7. ^ a b Anton (1987, pp. 305,307)
8. ^ a b c d e Nering (1970, p. 107)
9. ^ Note:
10. ^ See Hawkins 1975, §2
14. ^ See Kline 1972, p. 673
15. ^ See Kline 1972, pp. 715–716
16. ^ See Kline 1972, pp. 706–707
17. ^ See Kline 1972, p. 1063
18. ^ See:
19. ^ See Aldrich 2006
24. ^ University of Michigan Mathematics (2016) Math Course Catalogue. Accessed on 2016-03-27.
25. ^ Press (2007, pp. 38)
26. ^ Fraleigh (1976, p. 358)
27. ^ a b c Golub & Van Loan (1996, p. 316)
28. ^ a b Beauregard & Fraleigh (1973, p. 307)
29. ^ Herstein (1964, p. 272)
30. ^ Nering (1970, pp. 115–116)
31. ^ Herstein (1964, p. 290)
32. ^ Nering (1970, p. 116)
34. ^ Shilov 1977, p. 109
35. ^ Lemma for the eigenspace
36. ^ Schaum's Easy Outline of Linear Algebra, p. 111
43. ^ Stereo32 software
External links[edit]
Demonstration applets[edit] |
1c7c24d36c1bd7b7 | Notes to Causal Determinism
1. Some philosophers are misled on this point by the fact that some now-defunct presentations of Special Relativity theory seem to be grounded on an ontology of events. But Special Relativity does not need to be so presented, nor were the “events” used anything like common sense events.
2. The talk here of prediction is intuitive but sloppy. What we should say is: none of the states of the world before t = 0, conjoined with the laws of CM, entailed the appearance of the space invader at t = 0.
3. To create a cylindrical, spatially finite version of 2-D Newtonian space-time, one draws two vertical lines (e.g. x = 0 and x = a), cuts and throws away everything to the left of one and to the right of the other, and identifies the two lines, thus making space finite and closed. In this space-time setting, to power the space invader we can't use the non-collision mechanism of Gerver and Xia; see Earman (1986), p. 45.
4. Tachyons are hypothesized faster-than-light particles; there is no experimental basis for them, so ruling them out is no sin.
5. In GTR, a model system consists of a point manifold M on which a metric tensor gab is defined, as well as a stress-energy tensor Tab (which can be everywhere zero), jointly satisfying Einstein's field equations. The mathematical property of general covariance possessed by Einstein's equations entails that from one valid model system <M, g ab, Tab>, we can produce another model in which T ab and gab have been altered by the diffeomorphism h* : <M, h*gab, h*Tab>. This new model is also a valid solution of Einstein's equations, but it describes a model in which the location of the metrical structures and material fields of Tab and gab have been given different locations on the manifold. (They have been "shifted around", one might say, on the space-time manifold.) It is easy to construct a diffeomorphism h* that shifts the locations of T ab and gab only after some global time-slice t = 0 (at least, in models that admit such slices -- i.e., the GTR equivalent of our familiar “state of the world at time t = 0.”).
6. Quantum mechanics describes physical systems by means of states that are, notoriously, in some sense incomplete, a charge leveled against the theory by Einstein, Podolsky and Rosen in their famous 1935 essay “Can the Quantum Mechanical Description of Reality Be Considered Complete?”. This is most commonly illustrated by appeal to the Heisenberg Uncertainty Principle (in one of its forms): if the position of a particle is specified precisely, then its momentum must be unknown (i.e., described by a state such that the probabilities of the particle having a certain momentum are spread out over a wide range of possible values), and vice-versa. Quantum mechanics as normally interpreted says that these states are as complete a description as one can possibly get. A hidden variable theory however postulates the existence of determinate values for system-variables such as position and momentum, despite their being “hidden” from quantum mechanics itself. Einstein believed that ultimately we would find such a theory; David Bohm did precisely that, in 1952 (see below, main article).
7. There are theorems proving that for certain kinds of Hamiltonians (including most of those that may be considered physically realistic), the evolution of the wavefunction under the Schrödinger equation is deterministic. The theorems have other antecedent conditions that may be considered restrictive, however, and see J. Norton, "A Quantum Mechanical Supertask," Foundations of Physics, 29 (1999), 1265-1302 for a case in which determinism breaks down. [I thank an anonymous referee for drawing my attention to these points.]
8. In the 1980s the physicists Ghirardi, Rimini and Weber developed a revised version of QM that incorporates a physically well-defined collapse mechanism. Their theory solves certain interpretive problems in QM but has remaining difficulties; it is an indeterministic theory.
9. Here I am not referring to the time directions (toward-the-past, toward-the-future), which are certainly legitimate enough in physics and do sometimes play important roles. Rather, I am referring to our intuitive ontological division of history into the past, the present, and the future. “The present” in particular is not to be found in any physical theory's description of the world. And special relativity theory undermines the traditional conception of a non-observer-relative present (see Callender 2000).
Copyright © 2010 by
Carl Hoefer <>
|
71cf7227c027e707 | An off-line question from someone at Seed:
Fundamentally, what is the difference between chemistry and physics?
There are a bunch of different ways to try to explain the dividing lines between disciplines. My take on this particular question is that there’s a whole hierarchy of (sub)fields, based on what level of abstraction you work at. The question really has to do with what you consider the fundamental building block of the systems you study.
At the most fundamental level, you have particle physics and high-energy nuclear physics, which sees everything in terms of quarks and leptons, which are put together to form mesons and hadrons, including the protons and neutrons that we’re used to.
The next level up would be low-energy nuclear physics, which deals with protons and neutrons as the essential building blocks, and looks at how they’re put together to make nuclei. They don’t discard the quark model of nucleons, but it would be calculationally intractable to deal with the individual quarks, so they treat protons and neutrons as given (more or less), and look at how they are arranged.
Next up is atomic physics, which takes nuclei and electrons as given, and looks at how they’re put together to form atoms. We don’t really worry that much about how the protons and neutrons are arranged in the nucleus, save for where that affects how the electrons are arranged (in things like the hyperfine structure of atoms, which depends on the nuclear spin).
Next is where you start to make the transition between physics and chemistry. This is the level of the overlapping fields of molecular physics and physical chemistry, which takes atoms as the essential particles and looks at how they fit together to make simple molecules. They don’t worry about the nuclei at all, really, and only a little bit about the electrons. It’s a tricky division to make, but if I had to make a stab at defining the essential difference between molecular physics and small-molecule chemistry, I would say that the physics side is mostly concerned with how small molecules are put together and how they stay together, while chemists are more interested in how small molecules react with each other and swap pieces back and forth.
The next level is what most people think of when you say “chemistry,” which is dealing with complex molecules. Here, the fundamental entities are groups of atoms– hydroxyl this and ester that and sulfide and azide and all the rest. They look at how small molecules are put together to form large ones.
From here, you’ve got two different branches, but the same scale-based hierarchy continues. If you consider big molecules as your basic units, and look at how they combine in small numbers to make more complicated structures, then you’re getting into biochemistry. The next level is cell biology, then you get into the study of whole organisms, and eventually into neuroscience and psychology and on into things that aren’t really science any more.
On the other branch, if you pack enough molecules of the same type together, it goes back to being physics, in the condensed matter/ solid state regime. There, you treat huge numbers of nuclei and electrons in a statistical sort of way– you take the bulk structure as a given, and ask how the electrons are, on average, distributed through the system. The fundamental units in this case are collections of vast numbers of atoms and electrons.
And, of course, if you go to a large enough condensed matter system, it just becomes mechanics, in which you treat huge agglomerations of atoms and molecules as solid objects, and look at their motion in response to bulk forces from other huge agglomerations of atoms. When the solid objects become big enough, it becomes astronomy, and then cosmology.
That’s my personal Grand Unified Theory of the sciences, anyway.
1. #1 Elia Diodati
December 3, 2007
“They don’t worry about the nuclei at all, really, and only a little bit about the electrons.”
Oh come on, give us a little more credit than that. The average chemist may cringe at the mention of a Born-Oppenheimer wavefunction, but rest assured physical chemists are very familiar with that kind of thing. And nowadays, quite a few physical chemists are also familiar with non-adiabatic corrections to the Born-Oppenheimer approximation. There’s been a lot of work in recent years on conical intersections and the effects of Berry’s phase in reaction dynamics.
To turn the tables, I bet a lot of physicists would run away from composing wavefunctions out of Slater determinants, and the ones who wouldn’t would be more likely to sneer at the thought and rather work with fancy Grassmann fields anyway.
2. #2 Elia Diodati
December 3, 2007
Also, a great many chemists are familiar with spectroscopic techniques such as NMR, EPR, Raman scattering and the like as tools for extracting information about molecular structures. It is rather disingenuous to think that chemists could use such techniques without at least a decent background in AMO physics.
3. #3 Clark
December 3, 2007
That’s essentially what I’ve been saying on this topic for years, so you are obviously correct!
4. #4 KevinC
December 3, 2007
Don’t worry about the electrons? That is about all I worried about in organic chemistry: where are the electrons going and what are they doing?
5. #5 NJ
December 3, 2007
And, of course, if you go to a large enough condensed matter system, it just becomes mechanics… When the solid objects become big enough, it becomes astronomy, and then cosmology.
Ya left out the geology step in the middle there, Chad.
6. #6 Jonathan Vos Post
December 3, 2007
I mostly agree with #1 on BO being at the Physics/Chemistry boundary, B.O. also being associated with the classical statement that Physics sparks but Chemistry stinks.
As wikipedia begins:
In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator T_n is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian H_e the nuclear positions enter as parameters. The electron-nucleus interactions are not removed and the electrons still “feel” the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped nuclei approximation.)
In the second step of the BO approximation the nuclear kinetic energy T_n (containing partial derivatives with respect to the components of R) is reintroduced and the Schrödinger equation for the nuclear motion… is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.
The exceptions to Born-Oppenheimer are themselves interesting, including some genuine Chemistry on surfaces, and some molecular biology.
7. #7 Chad Orzel
December 3, 2007
I’m not claiming that organic chemists don’t know anything about nuclear structure or atomic spectroscopy– that would be foolish. The claim is that their primary concern is with things at a higher level of abstraction.
It’s the same with my own field of atomic physics. There are plenty of people working in atomic physics who have an excellent understanding of nuclear physics. I’ve worked with several of them. Their primary concern as atomic physicists though, is with the arrangement of the electrons in the atoms, and not with how the protons and neutrons are arranged in the nucleus, except insofar as that affects things like the hyperfine structure and internuclear interactions.
Atomic physicists should certainly know something about nuclear physics, in the same way that nuclear physicists should know something about spectroscopy, and particle physicists should know something about band structure. But the study of atomic physics can be defined, roughly, as being that part of physics that cares primarily about the study of individual atoms and their behavior.
NJ: Ya left out the geology step in the middle there, Chad.
Good point.
It’s mechanics, then geology, then planetary science, then astronomy, then cosmology.
8. #8 Jonathan Vos Post
December 3, 2007
The Phys/Chem boundary is not a straight line, but either fuzzy or fractal.
Where, Chad, do you put exotic atoms (i.e. hadrons other than protons or neutrons in the nucleus), muonium, positronium? And where do they go in the Periodic Table? The recently detected molecule dipositronium belongs to which discipline?
Without details, I hereby invoke Saint Linus Pauling for defining Chemistry.
9. #9 Uncle Al
December 3, 2007
A physicist hands you a paper describing the answer. A chemist hands you a vial containing the answer. (A social advocate demands legislation condemning the question.)
When did anthracene ever have so much fun in a short reaction sequence?
Phys. Rev. 134 A1416-A1424 (1964) says it should be a room temp supercon (when doped). Easy to make via ADMET. Easy to derivatize as a lyotropic liquid crystal for spinning into miles of supercon wire. Shouldn’t somebody do the chemistry and find out?
10. #10 Ben M
December 3, 2007
Jonathan@8: rather than adding epicycles to the model, we can set aside anything hard-to-categorize and call it “interdisciplinary”. Sic transit astroparticle physics, cosmochemistry, plasma physics, exotic atoms, biogeochemistry, and so on.
11. #11 milkshake
December 3, 2007
In my field, organic chemistry, most people usually do not have to calculate anything. It is not that they are lazy; it is extremely difficult to model things accurately, and in most cases qualitative explanations are sufficient to get the work done. There is a computational branch, people working on ab initio calculations of simplified systems, but it is a very hard thing and not very useful – real-life problems get out of hand quickly. Just taking into account the influence of a solvent is non-trivial. And even in the “ab initio” calculations lots of things are introduced by hand; the MO basis sets used for the calculations are chosen on empirical considerations, and so on.
With spectroscopic methods: most chemists don’t build the instruments, they just learn how to use them. The spectroscopists have to understand what’s going on inside and, if they are really good, how to program a new pulse sequence for NMR. These people are just a few, and their position is more like doing experimental physics as a specialised service for chemists.
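To make the “introduced by hand” point concrete, here is a minimal ab initio sketch, assuming the PySCF package (the package, the H2 geometry and the STO-3G basis are all illustrative choices, not anything prescribed above); the basis set is exactly the kind of empirically chosen input being described:

```python
from pyscf import gto, scf

# H2 near its equilibrium bond length (0.74 Angstrom), with a deliberately
# small, empirically parametrised basis set chosen by hand.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = scf.RHF(mol)       # restricted Hartree-Fock, the simplest ab initio level
energy = mf.kernel()    # converged total energy in hartree
print("RHF/STO-3G energy of H2:", energy)
```

Changing the basis keyword (say to "cc-pvdz") changes the answer; which basis is “good enough” is settled by experience rather than derived from first principles.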
12. #12 HennepinCountyLawyer
December 3, 2007
My daughter asked me for help with her high school chemistry. I looked at the book chapter–it was about what used to be called “valences” but when I was in high school were “oxidation numbers” but apparently are “valences” again–and very early on they started talking about quantum numbers. Well, I had heard of them because I read a lot of nontechnical books about physics, but they’re certainly not anything they taught us in high school chemistry OR physics.
If the public schools are dumbing down the curriculum as so many people are complaining, why do my kids’ textbooks always make me feel so stupid?
13. #13 Drekab
December 3, 2007
I wanted to mention geology as well, but I’d put it a little lower down, with chemistry splitting to biology on one side and geology on the other. Looking at it this way, physics is a huge field; it touches on just about every other science. Usually I describe other sciences in this scale-based way, and physics as the study of energy. But it’s been a while since I was ‘almost’ a physicist.
14. #14 Cry for me
December 3, 2007
So, hypothetically speaking: if a theorist/computationalist (with chemistry degrees) were trying to get employed at a SLAC to teach and pursue research in molecular physics and quantum optics, with an eye towards single-molecule spectroscopy and photochemistry, should they apply to physics or chemistry departments? The ‘hierarchy’ is killing ‘em.
15. #15 Chad Orzel
December 3, 2007
Hypothetically, I would say that if you would be comfortable teaching physics (especially intro physics), you should go ahead and apply to both. It doubles your hypothetical options.
If you wouldn’t be comfortable teaching physics classes, hypothetically, then only apply to chemistry departments.
16. #16 Caledonian
December 3, 2007
I would say that chemistry is a specialization in a narrow subset of physics, emphasizing empirically-gained knowledge about the properties of matter.
17. #17 Dr. Free-Ride
December 3, 2007
18. #18 chezjake
December 3, 2007
Just having fun here. ;-)
Offhand, I’d say that, with the exception of fluid mechanics, physics tends to be a dry science. Chemistry is the wet science; biology is the wet and messy science.
19. #19 Grad
December 3, 2007
Caledonian: ah yes, don’t forget to preach from your lofty physics vantage to all us lesser scientists. You’d do Lord Kelvin proud.
20. #20 Waterdog
December 3, 2007
Re: Caledonian’s comment. My high school physics and chemistry teachers (both taught AP, but I dropped out of the AP chem half-way through and switched to regular chem) had an ongoing rivalry. There were two related articles in one issue of the school newspaper, written by each of them: The Top Ten Reasons Chemistry is Better Than Physics; The Top Ten Reasons Physics is Better Than Chemistry.
One item from each list stands out, though I can’t recall the exact phrasing. The chem teacher said that chemistry was the central science, and therefore physics was just a branch of chemistry; the phys teacher said that physics was the fundamental science, and therefore chemistry was just an example of physics. They were (are) both great teachers, and it was a great article.
Anyway, it’s true that chemistry historically began as an empirical science with no real understanding of an underlying explanatory framework, though it did make predictions based on laws, that is, generalizations of observed regularities: the categorization of certain types of substances (acids and bases and their reactions with each other and with other substances), Mendeleev’s version of the Periodic Table, which predicted undiscovered elements, and so on.
It’s easy to say now that the underlying theory can be explained in terms of physics, but that doesn’t mean that physicists can retroactively take credit for the huge body of empirical data and laws in chemistry’s history and the modern work being churned out, just because some physical discoveries have turned out to be relevant to our understanding of how chemical reactions work. The connections between different disciplines give us hope that we’re on the right track, putting together a consistent view of the universe, but we wouldn’t have that without people working at all levels of scale. It’s just like paleontology and evolutionary biology. Both fed into each other, but paleontology hasn’t been subsumed into the other science.
21. #21 milkshake
December 3, 2007
There was a story about young Teller visiting a chemistry group in Chicago just when they were trying to figure out the best isolation technique for obtaining the minuscule traces of plutonium from irradiated material. As they were testing various precipitation techniques to see if Pu would concentrate in the precipitate, Teller exasperated them by claiming that he could calculate any conceivable chemistry problem from first principles…
22. #22 Elia Diodati
December 3, 2007
I happen to be enrolled in a chemical physics PhD program and I am still trying to figure out the difference between chemical physics and physical chemistry.
I think Dr. Free-Ride has the best answer so far: the difference is cultural, not scientific. Both chemistry and physics have progressed so much since 1926 with the advent of quantum theory that it is really, really hard to draw a line in the sand today. It is therefore unfortunate (and unproductive) that the vast majority of physics and chemistry are still taught from the trenches of 19th century knowledge.
Some recent advances (in addition to spectroscopy and non-adiabatic dynamics that I’ve mentioned above) that I think muddy the sand further are the techniques of quantum chemistry (ab initio wavefunction-based methods), density functional theory, atoms-in-molecules theory (a la Bader) based on quantum mechanics with open boundary conditions, and terahertz spectroscopy.
It seems that when it comes to DFT, there is a *very* interesting twist on how differently chemists and physicists apply it. DFT practitioners seem to be divided between localized Gaussian-basis methods and plane-wave methods, which seem to fall along partisan lines of chemistry vs. physics. Just ask a DFT practitioner about RKS/B3LYP/6-31+G** or Bethe-Salpeter equations; chances are either one will provide you with about 5 seconds of blank stares.
And then there are the “conceptual DFT” people, who associate strongly with the seminal book of Parr and Yang (1995) and the review article by Geerlings, De Proft and Langenaeker (Chem. Rev. 2003, 103(5), 1793–1874), and who try to axiomatize and make mathematically rigorous the chemical concepts of electronegativity (electron donating/withdrawing tendencies of atoms), chemical hardness, and the like.
I’d be happy to elaborate, if anyone else is interested.
23. #23 Elia Diodati
December 3, 2007
Another cultural note: I’ve noticed that what passes for physical chemistry in the US is often considered the turf of atomic and molecular physicists in Europe. I’m thinking of spectroscopy and computational quantum mechanics in particular. Not sure what to make of that.
24. #24 Elia Diodati
December 4, 2007
In reference to #20, the earliest quote to that effect that I know of can be attributed to Dirac, which goes like “the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be solvable.”
If Teller could really predict precipitation reactions from first principles, then we should make copies of his brain as soon as possible, and the entire community of computational chemists should promptly start looking for new careers.
25. #25 David Marjanović, OM
December 4, 2007
“Chemie ist das, was knallt und stinkt, Physik ist das, was nie gelingt.”
“Chemistry is that which bangs and stinks, physics is that which one can never succeed [in getting it to work].” German proverb referring to school experiments.
26. #26 David Marjanović
December 4, 2007
Over here (University of Vienna), spectroscopy counts as analytical chemistry…
27. #27 Caledonian
December 4, 2007
Ah, the delicate ego of the insecure.
Physics encompasses the study of all properties of matter; chemistry is necessarily a specialization within it. But the impracticality (and in many cases current impossibility) of deriving chemical predictions from first principles is what makes chemistry so distinct as a discipline unto itself. As opposed to, say, optics. Optics is also a subfield within the larger category of physics, but is simple enough that it never needed to reach the level of specialization required for chemistry.
Just as aerodynamics should theoretically be derivable from quantum mechanics, and in reality includes lots and lots of empirically-derived findings that we can’t derive from the basic physics, so too lots of chemistry cannot be derived from the basic physics in practice.
The real issue is that we’re comparing a lesser category with a greater one, which is like comparing apples and roast turkeys.
28. #28 Barry
December 4, 2007
# 26 | Caledonian | December 4, 2007 8:58 AM
Another: “Caledonian: ah yes, don’t forget to preach from your lofty physics vantage to all us less scientists. You’d do Lord Kelvin proud.”
Caledonian: “Ah, the delicate ego of the insecure.”
Do I smell a physicist?
Caledonian: “Physics encompasses the study of all properties of matter; chemistry is necessarily a specialization within it.”
That’s what physicists say; that’s not what they do. They still manage to leave 90-odd percent of the study of ‘the properties of matter’ to other fields. It’s almost as if they *couldn’t* deal with such things.
However, I’m sure that Caledonian will assure us all that it’s really the case that physicists *could*, but don’t want to lower themselves.
29. #29 Drebin
August 15, 2008
To your comment on the Born-Oppenheimer wavefunction: a very intelligent physicist will moan at the mention of the “Lobry de Bruyn–Alberda van Ekenstein transformation” in carbohydrate chemistry. LOL.
I love physics, maybe not as much as I love chemistry, but there are no hard distinctions between the objectives of both sciences in understanding matter.
Let both physicists and chemists team up and bully the biologists. xD
30. #30 Jonathan Vos Post
August 15, 2008
“What’s the Difference Between Physics and Chemistry?”
Much Chemistry (because of history, convenience, cost, and us as “made of meat”) is in aqueous solution, which limits the temperature range. Again, I say “much” rather than all, as colliding molecular beams for Femtochemistry indicate. But Physics happily spans a range from nanokelvin to Planck temperature. Too hot for molecules? Hell, we like it too hot for atoms, too hot for protons, too hot to distinguish one force from another.
Also different: the iconic Science Fiction authors. Chemistry has Isaac Asimov and Harry Stubbs (Hal Clement). Physics has Dave Brin, Greg Benford, Catherine Asaro, Stan Schmidt, and Greg Egan. I’ve left out many other related names, as well as the concentration of Astronomy experts in Science Fiction, Engineering (harkening back to the Hugo Gernsback radio-hacker era, Arthur C. Clarke, et al.), and Biology (including the brilliantly self-taught such as Greg Bear).
Phase space formulation
From Wikipedia, the free encyclopedia
The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as algebraic deformation theory (cf. Kontsevich quantization formula) and noncommutative geometry.
Phase space distribution
The phase space distribution f(x,p) of a quantum state is a quasiprobability distribution. In the phase space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.[7]
There are several different ways to represent the distribution, all interrelated.[8][9] The most noteworthy is the Wigner representation, W(x,p), discovered first.[4] Other representations (in approximately descending order of prevalence in the literature) include the Glauber-Sudarshan P,[10][11] Husimi Q,[12] Kirkwood-Rihaczek, Mehta, Rivier, and Born-Jordan representations.[13][14] These alternatives are most useful when the Hamiltonian takes a particular form, such as normal order for the Glauber–Sudarshan P-representation. Since the Wigner representation is the most common, this article will usually stick to it, unless otherwise specified.
If Â(x,p) is an operator representing an observable, it may be mapped to phase space as A(x, p) through the Wigner transform. Conversely, this operator may be recovered via the Weyl transform.
The expectation value of the observable with respect to the phase space distribution is[2][15]

⟨Â⟩ = ∫ A(x, p) W(x, p) dx dp.
A point of caution, however: despite the similarity in appearance, W(x,p) is not a genuine joint probability distribution, because regions under it do not represent mutually exclusive states, as required in the third axiom of probability theory. Moreover, it can, in general, take negative values even for pure states, with the unique exception of (optionally squeezed) coherent states, in violation of the first axiom.
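The negativity just mentioned is easy to see numerically. Below is a minimal sketch, assuming a 1-D harmonic oscillator with m = ω = ħ = 1 and evaluating the defining integral W(x, p) = (1/πħ) ∫ ψ*(x+y) ψ(x−y) e^{2ipy/ħ} dy by direct quadrature; the ground state (a coherent state) gives a non-negative Wigner function, while the first excited state dips below zero:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-6, 6, 241)
p = np.linspace(-6, 6, 241)
y = np.linspace(-6, 6, 961)                   # integration variable
dy = y[1] - y[0]
phase = np.exp(2j * np.outer(p, y) / hbar)    # e^{2ipy/hbar}, shape (len(p), len(y))

def wigner(psi):
    """W(x,p) = (1/(pi*hbar)) * integral dy  psi*(x+y) psi(x-y) exp(2ipy/hbar)."""
    W = np.empty((len(x), len(p)))
    for i, xi in enumerate(x):
        f = np.conj(psi(xi + y)) * psi(xi - y)
        W[i, :] = np.real(phase @ f) * dy / (np.pi * hbar)
    return W

# Harmonic-oscillator eigenstates (m = omega = hbar = 1 is an assumption of this sketch).
psi0 = lambda q: np.pi**-0.25 * np.exp(-q**2 / 2)                     # ground state
psi1 = lambda q: np.pi**-0.25 * np.sqrt(2.0) * q * np.exp(-q**2 / 2)  # first excited state

W0, W1 = wigner(psi0), wigner(psi1)
dx, dp = x[1] - x[0], p[1] - p[0]
print("normalisations:", W0.sum() * dx * dp, W1.sum() * dx * dp)  # both close to 1
print("minima:", W0.min(), W1.min())  # W0 non-negative (up to quadrature error); W1 reaches about -1/pi
```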
An alternative phase space approach to quantum mechanics seeks to define a wave function (not just a quasiprobability density) on phase space, typically by means of the Segal–Bargmann transform. To be compatible with the uncertainty principle, the phase space wave function cannot be an arbitrary function, or else it could be localized into an arbitrarily small region of phase space. Rather, the Segal–Bargmann transform is a holomorphic function of x + ip. There is a quasiprobability density associated to the phase space wave function; it is the Husimi Q representation of the position wave function.
Star product
The differential definition of the star product is

f ⋆ g = f · exp[(iħ/2)(←∂_x →∂_p − ←∂_p →∂_x)] · g,

where each arrowed derivative acts only on the factor to its left or to its right, respectively.
The energy eigenstate distributions are known as stargenstates, ⋆-genstates, stargenfunctions, or ⋆-genfunctions, and the associated energies are known as stargenvalues or ⋆-genvalues. These are solved for, analogously to the time-independent Schrödinger equation, by the ⋆-genvalue equation,[17][18]

H(x, p) ⋆ W(x, p) = W(x, p) ⋆ H(x, p) = E · W(x, p).
Time evolution
The time evolution of the phase space distribution is given by a quantum modification of Liouville flow, with the Poisson bracket replaced by the Moyal bracket:

∂f/∂t = {{H, f}} ≡ (H ⋆ f − f ⋆ H)/(iħ),

or, for the Wigner function in particular,

∂W/∂t = {{H, W}}.
This yields a concise illustration of the correspondence principle: this equation manifestly reduces to the classical Liouville equation in the limit ħ → 0. In the quantum extension of the flow, however, the density of points in phase space is not conserved; the probability fluid appears "diffusive" and compressible.[2] The concept of quantum trajectory is therefore a delicate issue here. (Given the restrictions placed by the uncertainty principle on localization, Niels Bohr vigorously denied the physical existence of such trajectories on the microscopic scale. By means of formal phase-space trajectories, the time evolution problem of the Wigner function can be rigorously solved using the path-integral method[20] and the method of quantum characteristics,[21] although there are practical obstacles in both cases.)
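For polynomial observables the star-product series terminates, so the ħ → 0 statement above can be checked symbolically. A small sketch (the truncation order and the example observables are arbitrary choices):

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def star(f, g, order=6):
    """Moyal star product, expanded order by order in hbar;
    the series terminates for polynomial f, g."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1)**k
                     * sp.diff(sp.diff(f, x, n - k), p, k)
                     * sp.diff(sp.diff(g, p, n - k), x, k))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

def moyal(f, g):
    return sp.simplify((star(f, g) - star(g, f)) / (sp.I * hbar))

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

H = p**2 / 2 + x**2 / 2              # harmonic oscillator
f = x**2 * p                         # some polynomial observable
print(sp.simplify(moyal(H, f) - poisson(H, f)))  # 0: for quadratic H the two brackets coincide
print(sp.expand(moyal(x**3, p**3)))  # Poisson bracket 9*x**2*p**2 plus an O(hbar**2) correction
```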
Simple harmonic oscillator
The Wigner quasiprobability distribution Fn(u) for the simple harmonic oscillator with a) n = 0, b) n = 1, and c) n = 5.
The -genvalue equation for the static Wigner function then reads
Consider first the imaginary part of the -genvalue equation.
Free particle angular momentum
Morse potential
The Wigner function time-evolution of the Morse potential U(x) = 20(1 − e−0.16x)2 in atomic units (a.u.). The solid lines represent level set of the Hamiltonian H(x, p) = p2/2 + U(x).
Quantum tunneling
The Wigner function for tunneling through the potential barrier U(x) = 8e−0.25x2 in atomic units (a.u.). The solid lines represent the level set of the Hamiltonian H(x, p) = p2/2 + U(x).
Quartic potential
The Wigner function time evolution for the potential U(x) = 0.1x4 in atomic units (a.u.). The solid lines represent the level set of the Hamiltonian H(x, p) = p2/2 + U(x).
Schrödinger cat state
Wigner function of two interfering coherent states evolving through the SHO Hamiltonian. The corresponding momentum and coordinate projections are plotted to the right and under the phase space plot.
1. ^ a b c d H.J. Groenewold, "On the Principles of elementary quantum mechanics", Physica,12 (1946) pp. 405–460. doi:10.1016/S0031-8914(46)80059-4
2. ^ a b c d e J.E. Moyal, "Quantum mechanics as a statistical theory", Proceedings of the Cambridge Philosophical Society, 45 (1949) pp. 99–124. doi:10.1017/S0305004100000487
3. ^ H.Weyl, "Quantenmechanik und Gruppentheorie", Zeitschrift für Physik, 46 (1927) pp. 1–46, doi:10.1007/BF02055756
4. ^ a b E.P. Wigner, "On the quantum correction for thermodynamic equilibrium", Phys. Rev. 40 (June 1932) 749–759. doi:10.1103/PhysRev.40.749
5. ^ S. T. Ali, M. Engliš, "Quantization Methods: A Guide for Physicists and Analysts." Rev.Math.Phys., 17 (2005) pp. 391-490. doi:10.1142/S0129055X05002376
6. ^ a b Curtright, T. L.; Zachos, C. K. (2012). "Quantum Mechanics in Phase Space". Asia Pacific Physics Newsletter. 01: 37. arXiv:1104.5269. doi:10.1142/S2251158X12000069.
8. ^ Cohen, L. (1966). "Generalized Phase-Space Distribution Functions". Journal of Mathematical Physics. 7 (5): 781–781. Bibcode:1966JMP.....7..781C. doi:10.1063/1.1931206.
12. ^ Kôdi Husimi (1940). "Some Formal Properties of the Density Matrix", Proc. Phys. Math. Soc. Jpn. 22: 264–314.
16. ^ G. Baker, “Formulation of Quantum Mechanics Based on the Quasi-probability Distribution Induced on Phase Space,” Physical Review, 109 (1958) pp.2198–2206. doi:10.1103/PhysRev.109.2198
17. ^ Fairlie, D. B. (1964). "The formulation of quantum mechanics in terms of phase space functions". Mathematical Proceedings of the Cambridge Philosophical Society. 60 (3): 581. Bibcode:1964PCPS...60..581F. doi:10.1017/S0305004100038068.
18. ^ a b Curtright, T.; Fairlie, D.; Zachos, C. (1998). "Features of time-independent Wigner functions". Physical Review D. 58 (2). arXiv:hep-th/9711183. Bibcode:1998PhRvD..58b5002C. doi:10.1103/PhysRevD.58.025002.
20. ^ M. S. Marinov, A new type of phase-space path integral, Phys. Lett. A 153, 5 (1991).
21. ^ M. I. Krivoruchenko, A. Faessler, Weyl's symbols of Heisenberg operators of canonical coordinates and momenta as quantum characteristics, J. Math. Phys. 48, 052107 (2007) doi:10.1063/1.2735816.
22. ^ Curtright, T.L. Time-dependent Wigner Functions
This is a well known problem (see for instance this survey) in the area of “quantum chaos” or “quantum unique ergodicity”; I am attracted to it both for its simplicity of statement (which I will get to eventually), and also because it focuses on one of the key weaknesses in our current understanding of the Laplacian, namely that it is difficult with the tools we know to distinguish between eigenfunctions (exact solutions to -\Delta u_k = \lambda_k u_k) and quasimodes (approximate solutions to the same equation), unless one is willing to work with generic energy levels rather than specific energy levels.
The Bunimovich stadium \Omega is the name given to any planar domain consisting of a rectangle bounded at both ends by semicircles. Thus the stadium has two flat edges (which are traditionally drawn horizontally) and two round edges, as this picture from Wikipedia shows:
Bunimovich stadium - Wikipedia
Despite the simple nature of this domain, the stadium enjoys some interesting classical and quantum dynamics. The classical dynamics, or billiard dynamics on \Omega is ergodic (as shown by Bunimovich) but not uniquely ergodic. In more detail: we say the dynamics is ergodic because a billiard ball with randomly chosen initial position and velocity (as depicted above) will, over time, be uniformly distributed across the billiard (as well as in the energy surface of the phase space of the billiard). On the other hand, we say that the dynamics is not uniquely ergodic because there do exist some exceptional choices of initial position and velocity for which one does not have uniform distribution, namely the vertical trajectories in which the billiard reflects orthogonally off of the two flat edges indefinitely.
Rather than working with (classical) individual trajectories, one can also work with (classical) invariant ensembles – probability distributions in phase space which are invariant under the billiard dynamics. Ergodicity then says that (at a fixed energy) there are no invariant absolutely continuous ensembles other than the obvious one, namely the probability distribution with uniformly distributed position and velocity direction. On the other hand, unique ergodicity would say the same thing but dropping the “absolutely continuous” – but each vertical bouncing ball mode creates a singular invariant ensemble along that mode, so the stadium is not uniquely ergodic.
Now from physical considerations we expect the quantum dynamics of a system to have similar qualitative properties as the classical dynamics; this can be made precise in many cases by the mathematical theories of semi-classical analysis and microlocal analysis. The quantum analogue of the dynamics of classical ensembles is the dynamics of the Schrödinger equation i\hbar \partial_t \psi + \frac{\hbar^2}{2m} \Delta \psi = 0, where we impose Dirichlet boundary conditions (one can also impose Neumann conditions if desired, the problems seem roughly the same). The quantum analogue of an invariant ensemble is a single eigenfunction -\Delta u_k = \lambda_k u_k, which we normalise in the usual L^2 manner, so that \int_\Omega |u_k|^2 = 1. (Due to the compactness of the domain \Omega, the set of eigenvalues \lambda_k of the Laplacian -\Delta is discrete and goes to infinity, though there is some multiplicity arising from the symmetries of the stadium. These eigenvalues are the same eigenvalues that show up in the famous “can you hear the shape of a drum?” problem.) Roughly speaking, quantum ergodicity is then the statement that almost all eigenfunctions are uniformly distributed in physical space (as well as in the energy surface of phase space), whereas quantum unique ergodicity (QUE) is the statement that all eigenfunctions are uniformly distributed. In particular:
• If quantum ergodicity holds, then for any open subset A \subset \Omega we have \int_A |u_k|^2 \to |A|/|\Omega| as \lambda_k \to \infty, provided we exclude a set of exceptional k of density zero.
• If quantum unique ergodicity holds, then we have the same statement as before, except that we do not need to exclude the exceptional set.
(In fact, quantum ergodicity and quantum unique ergodicity say somewhat stronger things than the above two statements, but I would need tools such as pseudodifferential operators to describe these more technical statements, and so I will not do so here.)
Now it turns out that for the stadium, quantum ergodicity is known to be true; this specific result was first obtained by Gérard and Leichtnam, although “classical ergodicity implies quantum ergodicity” results of this type go back to Schnirelman (see also Zelditch and Colin de Verdière). These results are established by microlocal analysis methods, which basically proceed by aggregating all the eigenfunctions together into a single object (e.g. a heat kernel, or some other function of the Laplacian) and then analysing the resulting aggregate semiclassically. It is because of this aggregation that one only gets to control almost all eigenfunctions, rather than all eigenfunctions. Here is a picture of a typical eigenfunction for the stadium (from Douglas Stone’s page):
Typical stadium eigenfunction
In analogy to the above theory, one generally expects classical unique ergodicity should correspond to QUE. For instance, there is the famous (and very difficult) quantum unique ergodicity conjecture of Rudnick and Sarnak, which asserts that QUE holds for all compact manifolds without boundary with negative sectional curvature. This conjecture will not be discussed here (it would warrant an entire post in itself, and I would not be the best placed to write it). Instead, we focus on the Bunimovich stadium. The stadium is clearly not classically uniquely ergodic due to the vertical bouncing ball modes, and so one would conjecture that it is not QUE either. In fact one conjectures the slightly stronger statement:
Indeed, one expects to take A to be a union of vertical bouncing ball trajectories (from Egorov’s theorem (in microlocal analysis, not the one in real analysis), this is almost the only choice). This type of failure of QUE even in the presence of quantum ergodicity has already been observed for some simpler systems, such as the Arnold cat map. Some further discussion of this conjecture can be found here. Here are some pictures from Arnd Bäcker‘s page of some eigenfunctions (displaying just one quarter of the stadium to save space) which seem to exhibit scarring:
Scarring eigenfunctions
Of course, each of these eigenfunctions has a fixed finite energy, and so these numerics do not directly establish the scarring conjecture, which is a statement about the asymptotic limit as the energy becomes infinite.
One reason this conjecture appeals to me (apart from all the gratuitous pretty pictures one can mention while discussing it) is that there is a very plausible physical argument, due to Heller and refined by Zelditch, which indicates the conjecture is almost certainly true. Roughly speaking, it runs as follows. Using the rectangular part of the stadium, it is easy to construct (high-energy) quasimodes of order 0 which scar (concentrate on a proper subset A of \Omega) – roughly speaking, these are solutions u to an approximate eigenfunction equation -\Delta u = (\lambda + O(1)) u for some \lambda. For instance, if the two horizontal edges of the stadium lie on the lines y=0 and y=1, then one can take u(x,y) = \varphi(x) \sin(\pi n y) and \lambda = \pi^2 n^2 for some large integer n and some suitable bump function \varphi. Using the spectral theorem, one expects u to concentrate its energy in the band [\pi^2 n^2 - O(1), \pi^2 n^2 + O(1)]. On the other hand, in two dimensions the Weyl law for distribution of eigenvalues asserts that the eigenvalues have an average spacing comparable to 1. If (and this is the non-rigorous part) this average spacing also holds on a typical band [\pi^2 n^2 - O(1), \pi^2 n^2 + O(1)], this shows that the above quasimode is essentially generated by only O(1) eigenfunctions. Thus, by the pigeonhole principle (or more precisely, Pythagoras’ theorem), at least one of the eigenfunctions must exhibit scarring.
[Update, Mar 28: As Greg Kuperberg pointed out, I oversimplified the above argument. The quasimode is so weak that the eigenfunctions that comprise it could in fact spread out (as per the uncertainty principle) and fill out the whole stadium. However, if one looks in momentum space rather than physical space, the scarring of the quasimode is so strong that it must persist to one of the eigenfunctions, leading to failure of QUE even if this may not quite be detectable purely in the physical space sense described above.]
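The quasimode computation above is elementary enough to check symbolically. In the sketch below a Gaussian profile stands in for the bump function \varphi (the actual construction needs \varphi supported in the rectangular part, but the algebra is identical): applying -\Delta to u(x,y) = \varphi(x) \sin(\pi n y) gives back \pi^2 n^2 u up to an error term -\varphi''(x) \sin(\pi n y), whose size is bounded uniformly in n.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
n = sp.symbols('n', positive=True, integer=True)

phi = sp.exp(-x**2)                   # stand-in for a fixed bump profile
u = phi * sp.sin(sp.pi * n * y)
lam = sp.pi**2 * n**2

residual = sp.simplify(-sp.diff(u, x, 2) - sp.diff(u, y, 2) - lam * u)
print(residual)   # -> -phi''(x)*sin(pi*n*y); the n-dependence enters only through the bounded sine factor
```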
The big gap in this argument is that nobody knows how to take the Weyl law (which is proven by the microlocal analysis approach, i.e. aggregate all the eigenstates together and study the combined object) and localise it to such an extremely sparse set of narrow energy bands. (Using the standard error term in Weyl’s law one can localise to bands of width O(n) around, say, \pi^2 n^2, and by using the ergodicity one can squeeze this down to o(n), but to even get control on a band of width O(n^{1-\epsilon}) would require a heroic effort, analogous to establishing a zero-free region \{ s: \hbox{Re}(s) > 1-\epsilon\} for the Riemann zeta function.) The enemy is somehow that around each energy level \pi^2 n^2, a lot of exotic eigenfunctions spontaneously appear, which manage to dissipate away the bouncing ball quasimodes into a sea of quantum chaos. This is exceedingly unlikely to happen, but we do not seem to have tools available to rule it out.
One indication that the problem is not going to be entirely trivial is that one can show (basically by unique continuation or control theory arguments) that no pure eigenfunction can be solely concentrated within the rectangular portion of the stadium (where all the vertical bouncing ball modes are); a significant portion of the energy must leak out into the two “wings” (or at least into arbitrarily small neighbourhoods of these wings). This was established by Burq and Zworski.
On the other hand, the stadium is a very simple object – it is one of the simplest and most symmetric domains for which we cannot actually compute eigenfunctions or eigenvalues explicitly. It is tempting to just discard all the microlocal analysis and just try to construct eigenfunctions by brute force. But this has proven to be surprisingly difficult; indeed, despite decades of sustained study into the eigenfunctions of Laplacians (given their many applications to PDE, to number theory, to geometry, etc.) we still do not know very much about the shape and size of any specific eigenfunction for a general manifold, although we know plenty about the average-case behaviour (via microlocal analysis) and also know the worst-case behaviour (by Sobolev embedding or restriction theorem type tools). This conjecture is one of the simplest conjectures which would force us to develop a new tool for understanding eigenfunctions, which could then conceivably have a major impact on many areas of analysis.
One might consider modifying the stadium in order to make scarring easier to show, for instance by selecting the dimensions of the stadium appropriately (e.g. obeying a Diophantine condition), or adding a potential or magnetic term to the equation, or perhaps even changing the metric or topology. To have even a single rigorous example of a reasonable geometric operator for which scarring occurs despite the presence of quantum ergodicity would be quite remarkable, as any such result would have to involve a method that can deal with a very rare set of special eigenfunctions in a manner quite different from the generic eigenfunction.
Actually, it is already interesting to see if one can find better quasimodes than the ones listed above which exhibit scarring, i.e. to improve the O(1) error in the spectral bandwidth. My good friend Maciej Zworski has offered a dinner in a good French restaurant for this precise problem, as well as a dinner in a very good French restaurant for the full scarring conjecture. (While I may not know as many three-star restaurants as Maciej, I can certainly offer a nice all-expenses-paid trip to sunny Los Angeles for anyone who achieves a breakthrough on any of the open problems listed here. ;-) ).
Monday, May 28, 2018
Matthew Rapaport said...
Thanks Dr. H. I'm looking forward to it. Of course physicists use the term in a technical way only tangentially connected to beauty as the term is commonly used, and, like the questions "what is goodness?" or "what is truth?", beauty in its common usage is a slippery concept. But as I've noted before, like the other two, beauty is a VALUE, and its slippery nature stems from our very vague recognition of what constitutes those values.
CapitalistImperialistPig said...
The epigraph to Chandrasekhar's Mathematical Theory of Black Holes has quotes from Heisenberg and Bacon on the subject. I don't have the volume at hand, but they go something like this:
Heisenberg: Beauty consists of the proper proportion of the parts to the whole, and to each other.
Bacon: There is no thing of excellent beauty which hath not some strangeness in the proportion.
neo said...
where does loop quantum gravity score on aspects of beauty?
Thomas said...
Every time I read something like "the laws of nature are beautiful", someone wants to sell a new book to the public :-(
Bill said...
A beautiful equation is also one that exhibits the fewest free parameters while explaining the most physics. That's why general relativity is beautiful while the Lagrangian of the Standard Model is ugly as hell. They both work, one by itself and the other by brute force, although I would never compare one with the other.
Looking forward to purchasing your book!
Uncle Al said...
Simple, natural, elegant: The answer exits a printer after a one-hour observation.
... 1) Baryogenesis is post-Big Bang excess matter over antimatter violating conservation laws via selective leakage.
... 2) Sakharov conditions. Vacuum is neither exactly mirror-symmetric nor exactly isotropic toward quarks then hadrons.
... 3) Einstein-Cartan-Kibble-Sciama spacetime torsion chiral dopant.
... 4) Milgrom acceleration and the cosmological constant emerge. Dark matter, SUSY, and M-theory wither.
... 5) Extreme opposite shoes embed within a vacuum left foot with measurably different energies.
... 6) Measure spacetime trace chiral anisotropy: one hour in a microwave spectrometer, 40,000:1 signal to noise, using molecular lollipop enantiomers.
Enantiomeric balls ( short sticks (2-CN group for dipole moment) Divergent rotational spectra.
Enrico said...
I only recognize simplicity of the Occam's razor kind. Make only a few assumptions that can be tested empirically. Naturalness is just modern numerology of the ancient Pythagoreans. (They believed the square root of two is evil.) Elegance is epistemology, because no scientific theory explains itself; theories explain observations. We can make theories that explain themselves or anything except the observable universe. Theoretical physicists are envious of mathematicians because their imaginations can fly without constraint from the physical world. It's flight-of-fantasy envy.
Sabine Hossenfelder said...
Something's wrong with the comment feature at blogger - I'm not getting comment notifications, meaning I basically don't know if anyone submitted a comment until I check the website. If anyone has an idea what's the issue, please let me know. For the rest, may I kindly ask for your patience. I'll be traveling today but will look for a fix once back home.
Space Time said...
To me, as a mathematician, beauty in physics/science is something that is almost impossible to describe and define but easy to tell when you see it. It is also very subjective, and different people may disagree. For instance, general relativity is beautiful, modified gravity is not. Orthodox quantum mechanics is beautiful, Bohmian mechanics is not, and so on.
Uncle Al said...
arxiv:1712.07969 k. How can that be empirically furiously wrong?
XENON1T, 1300 kg active liquid xenon target of total 2000 kg. ZERO net output.
XENONnT, 5200 kg active liquid xenon target of total 7500 kg. 2019 launch.
Zero signal crashes physical theory. Simple, natural, elegant: The math is rigorous but empirically irrelevant. It's a curve fit. It's phlogiston.
Newton becomes special relativity given Maxwell. Lightspeed is unchanged from all reference frames. What rubbish! No, it's true. Baryogenesis happened. Conservation laws are inexact. What rubbish! Is it true?
"There is no reason to look. Physical theory cannot be fundamentally defective." Nothing predicts. Simple, natural, elegant: Look at the answer not at its guesses.
M. J. Glaeser said...
Dr Hossenfelder:
How do the following score in beauty, in your view?
The C*-algebraic version of LQG (see LOST theorem).
The spectral derivation of the standard model (see Connes's work).
Isham and Döring's topos-theoretic formulation of the Kochen-Specker Theorem.
Geroch's Einstein algebras
Lawrence Crowell said...
Beauty is not something we can easily codify. Much of this has to do with the inductive nature of proposing a grand theory. There is no deductive structure to proposing some set of physical axioms or postulates as the most economical and elegant foundation of the universe. Beauty is a relative of “quality” and, as Pirsig wrote in Zen and the Art of Motorcycle Maintenance, is not something that can be codified.
A good theory has a limited number of physical axioms that upon their introduction change how we think and that are then by deductive reasoning able to lead to a large set of expected results. Such theories are often considered to be beautiful, elegant and natural. Simplicity comes with the limited number of postulates, elegance is in the new structures proposed, and naturalness comes with the relative ease with which physical predictions occur.
One thing that can happen of course is that a very beautiful and elegant theory can be wrong. Supersymmetry is an intertwining of the boson and fermion structures of quantum mechanics with the structure of spacetime. It also short-circuits the Coleman-Mandula “no-go” theorem on obstructions to unification of internal symmetries of gauge fields with the external symmetry of gravitation. This is a big problem with Lisi's program, which includes the SL(2,C) of gravitation. Supersymmetry is, though, a framework more than an exact description of nature, and one must “hang” a model of gauge bosons and fermions on it. This has been the case with the minimal supersymmetric standard model (MSSM). Interestingly, the MSSM is on the verge of being falsified, and many thousands of papers on this topic, including those by luminaries such as Gordon Kane, may be completely trashed. The question is whether this is a failure of supersymmetry or of a particular model.
Maybe supersymmetry occurs in ways completely different from what has been thought. This is my thesis. I frankly welcome the prospect that the MSSM is falsified; it clears the decks for small players like me and many others. Whether following beauty has been the downfall or not is not clear to me. Obviously people tried to work light-mass supersymmetric partners into the standard model. The standard model is not considered to be the most elegant theory out there, but it sure works like a top for TeV-scale physics. The MSSM was built because it was what seemed most reasonable at the time, and it had some level of beauty to it. It appears, though, to be headed for the trash heap.
sean s. said...
You may want to check your spam folder; sometimes a setting gets broken and many things go there. I've seen it happen to others.
Travel safely.
sean s.
The Universe said...
Beautiful mathematics is absolutely no substitute for understanding. Dirac was the epitome of that. He had absolutely no understanding of the electron, but didn't care, and he even ignored the likes of Gustav Mie and Charles Galton Darwin. In fact, seeing as his 1962 paper "An extensible model of the electron" depicted the electron as a charged conducting sphere, I'd go so far as to say beauty is dangerous.
As regards comment moderation, I notice that I can't comment using my wordpress id. It's The Universe (Google Account) or nothing. You could always try turning comment moderation off.
John Duffield
Sabine Hossenfelder said...
Hi all,
Regarding the comment issue, turns out it's not a problem with my blog, but a blogger-wide issue that will supposedly be fixed next week or such. (See forum thread.) So rather than switching to a different comment widget (which would remove all existing comments), I'll wait this out. Please be warned that this means for the coming week comments will appear even slower than usual.
milkshake said...
I think the elegance aspect has to do with the sparseness of the description and its predictive power - one can get far more phenomena explained and flowing out without fudging than was put in; and preferably it happens in a way that is non-obvious and startling.
marten said...
My professor in mathematics used to qualify beautiful equations as horny, because such equations are stimulating the faculty's survival.
Space Time said...
John Duffield,
Dirac seems to be an example of exactly the opposite. Beauty was certainly a very important motivation for his work (one can argue it was the only one). And his contribution to physics is undeniable.
Sabine, if you could have interviewed Dirac, would you have had a different view about beauty?
MartinB said...
I think one additional aspect (or maybe you include this with the "surprise element" under elegance) is that beautiful theories use non-intuitive concepts to explain everyday experience.
Even Newton is rather non-intuitive (compared to Aristotle).
GR explaining things falling down by time running slower close to a mass gives a totally weird-seeming explanation.
Explanatory power is also important. I remember that I definitely did not find Maxwell's equations beautiful in any way when I first saw them. I appreciated their beauty only after seeing how you can derive things like em-waves from them and how the different terms in the equations conspire to make em-waves possible. So one other aspect of beauty may be only apparent when you find that the equations are easy to operate with and reveal a rich structure of possible things to derive from them. (Complexity from simplicity.)
Sabine Hossenfelder said...
Space Time,
I don't know what you mean. Would I have had a different view about beauty than Dirac? Presumably. Or a different view than presently? Probably not. Or else, I don't know what you mean.
Uncle Al said...
@The Universe Otto Stern’s measured proton magnetic moment showed the Dirac equation is empirically wrong for composite particles. Nobel Prize.
Stern's value was poor but sufficiently far from Dirac's calculated value. Current proton-antiproton values 1.5 ppb diverge re baryogenesis. One hour in a microwave rotational spectrometer measures overall vacuum chiral anisotropy toward hadrons, falsifying simple, natural, elegant.
.... 2.792847350(9)μ_N proton
... -2.7928473441(42)μ_N antiproton
... DOI:10.1038/nature24048
t h ray said...
Well and compactly said.
Space Time,
" ... general relativity is beautiful, modified gravity is not. The orthodox quantum mechanics is beautiful, Bohmian mechanics is not, and so on."
Huh? You're speaking as a mathematician?
Unknown said...
Mathematics is required, but the way it is done, doing physics just for funds and survival, is wrong. One can go to the scene in "The Man Who Knew Infinity" where Professor Hardy tells S. Ramanujan, probably in the hospital, "I want rigor, Ramanujan," when Ramanujan writes down the problem and just the right solution without the steps. Ramanujan responds well; he gives rigor to his solutions at Prof. Hardy's suggestion. Mathematics can be used purposefully in physics if there is rigor.
Even in engineering and experimental work many do not report error bars, even in high-ranking journals, which can be done only when they do the experiments at least thrice. The rush to publish is the primary cause of this.
Patat Je said...
I think you mean that the collapse postulate is removed, and then splitting is added later. Splitting is metaphorical. Splitting never happens. You don't need to know when or where a universe splits. The Schrödinger equation describes everything.
David Bailey said...
Surely the idea of beauty, as applied to a physical theory, only makes sense if it is assumed that you are looking at the fundamental theory. Unless that is the case, ugly equations are the norm.
For example, I was stunned as a teenager by the gas law equation PV=nRT. Then I learned it is only an approximation, and more accurate but vastly uglier equations do better!
Is physics at the depth needed to find a fundamental theory? Who knows, but I'll bet every generation thinks it is!
Space Time said...
"Huh? You're speaking as a mathematician?"
t h ray, are you surprised that the examples I gave were from physics rather than mathematics? Well, it is hard (in my opinion impossible) to find an example of ugliness in mathematics.
Rogier Brussee said...
I think an important aspect of beauty that is actually a valid guideline for physics is that a physical description should be as free as possible from arbitrary choices that are made by us humans to provide a description. Usually this means being closer to the "Copernican principle" that the world is less centred around you, and that if you make a choice at a point in spacetime (e.g. a frame of reference), there is no trivial way to communicate that choice to the rest of the world. It also usually means symmetry, with the symmetry group acting on the possible choices that provide descriptions. This often makes it more technical to write things down (although this is mostly a matter of what you are used to), but it also hides lots of distracting information that you could use to write down variants that, however, can't be physically relevant, because they depend on an arbitrary choice you made. It is not unlike abstractions in a programming language.
General relativity is more beautiful than field theory + gravitons, because general relativity does not assume a background metric. Of course at every point of spacetime you can _choose_ a frame such that $g_{\mu\nu} = \eta_{\mu\nu}$, but now the description involves a choice. Realising that there is a choice to be made makes life technically harder, but pre-relativity you made the tacit assumption that your "inertial frame of reference" was trivially agreed upon in the rest of the universe. Not making that assumption is i.m.o. the heart of general relativity.
Sections of a U(1) line bundle are better than wave functions, because only phase differences and absolute values at a point make sense. Once you think about it in this way, the operator $\partial_\mu = \frac{\partial}{\partial x^\mu}$ makes no sense anymore: you need a U(1) connection $\nabla$, which in a local trivialisation looks like $\nabla_\mu = \partial_\mu + A_\mu$ with (i times) the vector potential $A$. The curvature is (i times) the Faraday tensor, because EM works in exactly this way on the wave function. Even the Aharonov-Bohm effect now falls out naturally as the holonomy of a flat connection. The gauge group $\psi \to e^{i\phi} \psi$ can be seen actively as a symmetry, but more naturally passively as the result of different choices of trivialisation of the line bundle.
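The covariance described here can be checked in one variable with a short symbolic computation (a sketch; the convention $\nabla = \partial + iA$ with a real $A$, and the restriction to one dimension, are illustrative choices): under $\psi \to e^{i\phi}\psi$ together with $A \to A - \partial\phi$, the covariant derivative of $\psi$ transforms exactly like $\psi$ itself, which is the point of introducing the connection.

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.Function('phi')(x)    # local gauge parameter
psi = sp.Function('psi')(x)    # section in a local trivialisation
A = sp.Function('A')(x)        # vector potential (one component)

def D(psi, A):
    """Covariant derivative nabla = d/dx + i*A in this trivialisation."""
    return sp.diff(psi, x) + sp.I * A * psi

# Change of trivialisation: psi -> exp(i*phi)*psi, A -> A - dphi/dx
lhs = D(sp.exp(sp.I * phi) * psi, A - sp.diff(phi, x))
rhs = sp.exp(sp.I * phi) * D(psi, A)
print(sp.simplify(lhs - rhs))  # -> 0: D(psi) picks up the same phase factor as psi
```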
The Maxwell equations as written today in Heaviside's vector form are more beautiful than the component equations in x, y, z that Maxwell himself wrote down, because the former are manifestly independent of the choice of a frame of reference. Likewise, the index notation used by physicists is computationally useful, but it is also horrible in that it makes it impossible to even say what you mean without making a choice of reference frame, not to mention actively encouraging you not to say what kind of object you are dealing with "because it is clear from the indices", and making people think in terms of components and operations on indices. Weinberg's book on quantum field theory starts by writing down the gamma matrices he uses. It always leaves a lingering feeling of unease about what depends on conventions and what does not. Mathematicians (or at least geometers) take pride in writing down coordinate-independent intrinsic entities whenever possible, and it greatly helps in only writing down expressions that make sense independent of choices, of which there tend to be very few, so it is a great guiding principle.
Saturday, June 30, 2007
Impressions from the Loops '07
The Loops 2007 this year was held at the University in Morelia, Mexico, in a very nice building (see also Stefan's previous post). The auditorium had a stage with the speaker being in the spotlight, the seats were very comfortable to doze off, and the hallways had very picturesque pillars and arches (see photo to the left).
My head is still spinning a bit, trying to process all the information gathered here (well, it might also be the lack of oxygen in the air; it is rather polluted, and the high altitude doesn't help either). It's been my first conference in this community, and I have to say that of all the conferences I've been to this year, the Loops has been the nicest experience. The atmosphere has been very welcoming, open-minded and constructive.
My talk yesterday on 'Phenomenological Quantum Gravity' (slides here) went well (that is to say, I stayed roughly within the time limit and didn't make any completely embarrassing jokes). Though I wasn't really aware that I would be the only one at this conference speaking about DSR. If I had known, I might have extended my summary of that topic (there was a DSR talk scheduled by Florian Girelli, see picture to the right, but he changed the topic on short notice, which I didn't know).
However, after more than a month of traveling, I am sitting here in the inner yard of the hotel and trying to remind myself why I go to conferences:
• Because it's just such a nice experience to arrive in a foreign city where nobody understands your language, without any baggage, after a 36 hour trip, with an 8 hour jetlag, having just figured out that the credit card doesn't work, and the hotel doesn't have a reservation for you - and then to find the conference site with familiar faces, the air filled with words like 'propagator', 'manifold' and 'background independence'.
• Because one meets old and new friends, because there's a conference dinner or reception, and plenty of free coffee and cookies.
• Because the registration fee includes a welcome package that usually features a more or less useful bag, a notebook and a pen, and a significant amount of tourist information. Occasionally there are some surprises to that, e.g. this time the bag was a woven shoulder bag, or one of the previous SUSY conferences featured a squeezable brain...
• Because some people present new and so far unpublished results.
• To see and be seen.
• To inspire and get inspired...
• And of course to blog about it ;-)
There has been a significant amount of braiding at this conference (for program and abstracts see here), with talks by Yidun Wan, Jonathan Hackett, Lee Smolin and Sundance Bilson-Thompson, the latter shown in the picture below with John Swain.
Olaf Dreyer, preparing his talk
Some people from the blogosphere that I've met in person here. Garrett Lisi and Frank Hellmann:
Alejandro Satz (from Reality Conditions) and Yidun Wan (from Road To Unification)
And here is a photo of Carlo Rovelli and Abhay Ashtekar - pen, notebook and coffee included:
Admittedly, I found the phenomenological part of this conference somewhat underrepresented. Indeed, I found myself joking that I am the phenomenology of the conference! Likewise, Moshe Rozali (whom you might also know from comments on this blog) has been the String Theory of the conference. He gave a very interesting talk about the meaning of background independence.
This afternoon, I am chilling out (okay, actually I am writing referee reports that were due about a month ago). Below is a picture of the hotel's inner yard where I am sitting (taken yesterday; right now it is raining). Tomorrow I am flying back to Canada, and I am really looking forward to sleeping in my own bed. A nice weekend to all of you!
[Try clicking on the photos to get a larger resolution.]
Updates: Chanda just sent me a photo she took yesterday evening. I am very pleased about the truly intellectual expression on my face, must be the glasses.
And Garrett took this nice photo.
Friday, June 29, 2007
Philosophia Naturalis Blog Carnival
I have to admit that it took some time until I understood the idea of the "Philosophia Naturalis Blog Carnival", even more so as the actual blog just contains links to posts at different other blogs. The idea of these "Philosophia Naturalis" contributions, hosted by a different blog every month, is to publish a collection of interesting posts on topics in the physical sciences that have appeared over the last month. Thus, they provide a selection of noteworthy reading out there, and help to give you an overview of interesting blogs dealing with physics and related topics.
This month's Philosophia Naturalis #11 is hosted by geologist Chris Rowan at Highly Allochthonous, and I am very proud to see this blog represented by both its contributors - with Bee on Kaluza-Klein, and myself about the Bouncing Neutrons!
If you have time to waste over the weekend, there may be worse ways to spend it than reading some of the posts presented by Chris, so cogently arranged according to the 50 orders of magnitude of characteristic length scales they cover. I've especially enjoyed the writings from astronomy and the earth sciences. Indeed, if you are a wannabe amateur geologist like me, you will probably like Chris' blog anyway, and wonder why you have not chosen a subject where you can make cool field trips to Namibia...
Have a nice weekend!
Wednesday, June 27, 2007
Ever heard of planet Eris?
This is planet Eris, in the outskirts of the Solar System, as seen through the eyes of the Hubble Space Telescope:
Hubble Telescope's Advanced Camera for Surveys has taken this image of Eris in December 2005. The analysis of several such photos yields a diameter of Eris of 2400±100 km, or 1500±60 miles. (Credits: HubbleSite News Release STScI-2006-16, M. Brown)
It's such a faint and blurred blob even in the Hubble Space Telescope because it is quite small, and far away from Earth: At the moment, Eris is close to its aphelion, at a distance of 97 astronomical units - that's 97 times the mean distance of the Earth from the Sun! You can explore its eccentric and highly inclined orbit with this applet from the JPL.
In fact, according to the new definition of the International Astronomical Union from last August, Eris is not a planet, but only a dwarf planet. However, it is larger than Pluto, and it has more mass than Pluto, as was reported in a beautiful short note in Science two weeks ago [Ref 5]! So, for all those who prefer to stick to the old definition of a planet, it may be the true ninth planet - unless some other, even larger guy shows up from out there in the Kuiper belt.
Eris was discovered on images taken in October 2003 by astronomers Michael Brown, Chad Trujillo, and David Rabinowitz [Ref 1]. It was given the provisional name 2003 UB313, or Xena for short. Soon after the discovery, its size could be measured by observations with a radio telescope of the Max-Planck Society [Ref 2] and the Hubble Space Telescope [Ref 3], and it turned out that the new planet is larger than Pluto!
Moreover, there is a small moon in orbit around Eris, which was given the nickname Gabrielle. On September 14, 2006 the International Astronomical Union made official the names Eris for the planet and Dysnomia for its satellite.
A comparison of the sizes of (left to right) Eris, Pluto and Charon, the Moon, and the Earth. Eris' moon Dysnomia is not shown, but it is much smaller than Eris (diameter: 2400±100 km from HST, 3000±400 km from radio observation), Pluto (2300 km), Charon (1200 km), the Moon (3500 km), and the Earth (12800 km). (Credits: Max-Planck Gesellschaft Press Release News/SP/2006(10), Frank Bertoldi)
The discovery of Eris, and of several other large objects in orbits beyond Neptune, spurred the hot debate about what a planet is, which then led to the demotion of Pluto from planet to "dwarf planet" in August 2006. So it is fitting that Eris is the Greek goddess of strife and discord - Dysnomia, her daughter, is the goddess of lawlessness.
But in a time when ancient Greek gods and goddesses are reduced to large balls of rock and gas in space, even the goddess of lawlessness is subject to Newton's universal law of gravitational attraction. And this allows one to determine the mass of Eris in an elementary, elegant, and classical way.
The photo on the left is an image of Eris and its moon Dysnomia, the small spot left of Eris, taken with the Hubble Space Telescope on 30 August 2006. Dysnomia's projected orbit around Eris is superimposed on this photo in the image on the right (Credits: HubbleSite News Release STScI-2007-24, M. Brown)
In a first step, the orbit of Dysnomia around Eris has been determined from several observations with the Hubble Space Telescope and the Keck telescope in Hawaii.
The orbit of Dysnomia, the moon of Eris, as reconstructed from observations using the Keck telescope on 20, 21, 30, and 31 August 2006 and the Hubble Space Telescope on 3 December 2005 and 30 August 2006. Observations are shown as crosses, the predicted positions at the times of observation are shown by circles. The solid circle in the center is 10 times the actual angular size of Eris. (Fig. 1 from Brown and Schaller, Science 316 1585 (2007 June 15), doi:10.1126/science.1139415. Reprinted with permission from AAAS.)
The observation of Dysnomia as shown in this figure reveals an apparent diameter of the orbit of roughly 1.1 arcsec, which, at the distance of 97 astronomical units, or 97 × 150 million km ≈ 1.46·10¹⁰ km, corresponds to a diameter of 1.46·10¹⁰ × 1.1 × 2π / (360 × 3600) km ≈ 78,000 km. A more detailed analysis yields an essentially circular orbit with a semimajor axis of r = 37,500±200 km.
Now, this can be used to deduce the mass M_E of Eris!
Using elementary Newtonian mechanics, we equate the gravitational attraction between Eris and Dysnomia at distance r with the centrifugal force,
G M_E m_D / r² = m_D r ω².
The frequency ω = 2π/T has to be calculated from the orbital period T of Dysnomia. With the equivalence principle at work, the mass of the satellite cancels out, and we can solve for the mass of Eris:
M_E = r³ ω² / G.
Now, the orbital period of Dysnomia has been determined quite precisely to 15.772±0.002 days. This yields ω = 2π/(15.772 d × 24 h/d × 3600 s/h) ≈ 4.61·10⁻⁶ s⁻¹. Newton's constant is 6.673·10⁻¹¹ m³ kg⁻¹ s⁻², and plugging everything together, the mass of Eris comes out as
M_E ≈ 1.7·10²² kg.
For comparison, Pluto has a mass of 1.3·10²² kg, the Moon of 7.35·10²² kg, and the Earth of 5.97·10²⁴ kg. This means the masses of Eris, Pluto, the Moon, and the Earth compare as 1.3 : 1 : 5.6 : 459.
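If you want to redo the arithmetic yourself, here is a minimal Python sketch using the numbers quoted above (the printed values should reproduce the estimates up to rounding):

```python
import math

G = 6.673e-11        # Newton's constant in m^3 kg^-1 s^-2
AU = 1.496e11        # one astronomical unit in meters

# apparent diameter of Dysnomia's orbit (~1.1 arcsec) at a distance of 97 AU
theta = 1.1 * math.pi / (180 * 3600)                # arcsec -> radians
print("orbit diameter ~ %.0f km" % (97 * AU * theta / 1e3))   # ~78,000 km

# mass of Eris from M_E = r^3 * omega^2 / G
r = 3.75e7                                          # semimajor axis in m (37,500 km)
T = 15.772 * 24 * 3600                              # orbital period in s
omega = 2 * math.pi / T
M_E = r**3 * omega**2 / G
print("M_E ~ %.1e kg" % M_E)                        # ~1.7e22 kg
print("Eris/Pluto mass ratio ~ %.1f" % (M_E / 1.3e22))   # ~1.3
```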
And so, indeed, Eris and Pluto are on equal footing, revealed by the goddess of lawlessness and Newton's law!
Here are some "historical" references about Eris:
1. Discovery of a Planetary-sized Object in the Scattered Kuiper Belt, by M.E. Brown, C.A. Trujillo, and D.L. Rabinowitz, The Astrophysical Journal 635 L97-L100 (2005 December 10) [PDF, ADS entry], arXiv:astro-ph/0508633 [see also: Eris webpage by Michael Brown]
2. The trans-neptunian object UB313 is larger than Pluto, by F. Bertoldi, W. Altenhoff, A. Weiss, K.M. Menten and C. Thum, Nature 439 563-564 (2006 February 2) [PDF, ADS entry], doi:10.1038/nature04494 [see also: Press release by the Max-Planck Gesellschaft, comment by Frank Bertoldi]
3. Direct Measurement of the Size of 2003 UB313 from the Hubble Space Telescope, by M.E. Brown, E.L. Schaller, H.G. Roe, D.L. Rabinowitz, and C.A. Trujillo, The Astrophysical Journal 643 L61-L63 (2006 May 20) [PDF, ADS entry] [see also: Press Release by the HubbleSite]
4. Satellites of the Largest Kuiper Belt Objects by M.E. Brown et al., The Astrophysical Journal 639 L43-L46 (2006 March 1) [PDF, ADS entry], arXiv:astro-ph/0510029 [see also: Press Release by the Keck Observatory, News Item by Marcos van Dam, Dysnomia, the moon of Eris - webpage by Michael Brown]
5. The Mass of Dwarf Planet Eris by Michael E. Brown and Emily L. Schaller, Science 316 1585 (2007 June 15), doi:10.1126/science.1139415 [see also: Press Release by the HubbleSite, News item by the Planetary Society, Press Release by the Keck Observatory]
Monday, June 25, 2007
Loops'07 in Morelia
(Credits: Wikipedia)
Sent from a researcher in motion
Saturday, June 23, 2007
The GZK cutoff
The earth is hit by cosmic rays all the time. Those with the highest energies collide with atoms already in the upper atmosphere, and produce a cascade of secondary particles, a so-called cosmic ray shower. These secondary cosmic rays include pions (which quickly decay to produce muons, neutrinos and gamma rays), as well as electrons and positrons produced by muon decay and gamma ray interactions with atmospheric atoms.
Nowadays, the showers can be simulated with appropriate software. The picture below, from Hajo Drescher, illustrates such a cosmic ray shower
Here, the primary particle was a proton with an energy of 10¹⁹ eV; the colors indicate blue: electrons/positrons, cyan: photons, red: neutrons, orange: protons, gray: mesons, green: muons. (Unfortunately, one can't see the colors very clearly; you can decompose the shower into its colors on the website. The incoming proton is the line from the upper left; the other upgoing line is cyan, a photon.) If you have Quicktime installed, you can also look at this very illustrative movie, which shows the particles cascading down on earth. The above figure has been created using the software SENECA (downloadable here); the competitor is AIRES, which has a somewhat more impressive advertisement movie (the exact differences between both codes elude me).
The number of particles reaching the earth's surface is related to the energy of the cosmic ray that struck the upper atmosphere. Cosmic rays with energies beyond 10¹⁴ eV are studied with large "air shower" arrays of detectors distributed over many square kilometers that sample the particles produced, e.g. at HiRes in Utah, AGASA in Japan and Pierre Auger in Argentina; the latter has a very nice homepage summarizing the mysteries that still need to be solved.
Energies over 10¹⁴ eV sound extremely large. In comparison, the collision energy that the LHC will reach is 10¹³ eV. However, one has to keep in mind that in cosmic ray events the quoted energy is typically that of the incoming particle in the earth's rest frame, and not the actual collision energy in the center of mass frame (the LHC collides two beams head on, so the lab frame is identical to the center of mass frame).
To give you an example, the energy in the center of mass frame of an incoming proton with an already extremely high (and rare) energy of 10¹⁷ eV hitting a proton at rest is roughly the square root of 10¹⁷ eV times the proton rest mass, 10⁹ eV, which is approximately 10¹³ eV and comparable to LHC energies. However, one has to keep in mind that cosmic ray events, despite their potentially large energy, are far less under control and come with higher uncertainties than collider experiments. Most of the air showers are believed to be created by protons. Since the incoming directions are evenly distributed (and inside our galaxy no mechanism is known to accelerate them to these high energies), the protons' origin is most likely not in our galaxy. That means the protons must have travelled at least roughly 50 Mpc [1] before they reach earth.
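As a quick cross-check of the center of mass estimate above, here is a small sketch using the standard fixed-target relation √s ≈ √(2 E m_p), valid in units with c = 1 when E is much larger than the proton rest energy:

```python
import math

m_p = 0.938e9      # proton rest energy in eV
E_lab = 1e17       # cosmic-ray proton energy in eV, target proton at rest

# invariant collision energy for a fixed-target collision with E_lab >> m_p
sqrt_s = math.sqrt(2 * E_lab * m_p)
print("sqrt(s) ~ %.1e eV (about %.0f TeV)" % (sqrt_s, sqrt_s / 1e12))
# roughly 1.4e13 eV, i.e. about 14 TeV, comparable to the LHC
```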
Now, if the incoming proton's energy increases further, then eventually it will not only react with our atmosphere, but also with the photons in the cosmic microwave background (CMB). That is, for protons with sufficiently high energies, the universe stops being transparent. The protons start to scatter off the photons of the microwave background, lose energy, and can't reach earth any more. The first reaction that can take place with increasing energy is photo-pion production, which happens at a center of mass energy of roughly 200 MeV. This pion production is extremely well measured in earth's laboratories, where photons are scattered on nuclei at rest. If one sets the energy of the photon to be that corresponding to the CMB temperature (3 K is approximately 2.5·10⁻⁴ eV), one finds that the proton needs an energy of roughly 10²¹ eV to cross the threshold for pion production. (It is roughly (200 MeV)² divided by the photon's energy.)
The figure to the left (credits go to Stefan) shows the cross-section for photon-proton scattering in the laboratory (proton at rest); the blue dots are data from the particle data booklet. The red line indicates the initial threshold for the process to take place, and the orange lines mark the delta resonances where the cross-section has peaks.
However, what one actually wants to know is when the mean free path of the protons drops below typically 50 Mpc. To get a better result than the above estimate, one has to take into account that the CMB contains a small percentage of photons with energies larger than the temperature, the distribution being given by the Planck spectrum. Thus, the mean free path of the protons drops significantly already at a somewhat smaller energy than the above 10²¹ eV, because the proton has a chance to hit the more energetic photons.
My husband, as usual, has made a lot of effort to answer my question from yesterday and produced the figure to the right, which very nicely illustrates that indeed roughly 10% of the photons have energies five times larger than the background temperature.
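This fraction is easy to check numerically; here is a small sketch that integrates the Planck photon-number spectrum (the number of photons per unit x = E/kT is proportional to x²/(eˣ - 1)):

```python
import numpy as np

# photon number density per unit x = E/(kT) in a blackbody: n(x) ~ x^2 / (exp(x) - 1)
x = np.linspace(1e-6, 50.0, 200000)
dx = x[1] - x[0]
n = x**2 / np.expm1(x)

total = (n * dx).sum()
above = (n[x > 5.0] * dx).sum()
print("fraction of CMB photons with E > 5 kT: %.1f%%" % (100 * above / total))
# prints roughly 10%
```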
The exact calculation for the cut-off has been done by Stecker (Effect of Photomeson Production by the Universal Radiation Field on High-Energy Cosmic Rays), and one finds the drop to happen at roughly 6·10¹⁹ eV. At this energy, one thus expects a cut-off in the cosmic ray spectrum, because the protons sourcing the showers can no longer reach earth. First pointed out by Greisen [2], Zatsepin and Kuzmin, this is referred to as the GZK cutoff. To summarize, this prediction relies on
a) The initial particle of the shower being a proton from outside our galaxy
b) The total cross-section of protons with photons, and
c) The assumption that the cross-section (a Lorentz scalar itself) can be boosted from the earth laboratory frame (proton at rest) into the rest frame of the CMB, where the photon has only the thermal energy.
Now, AGASA has claimed to have observed cosmic ray events with energies above this cut-off (see e.g. Has the GZK suppression been discovered?, by Bahcall and Waxman). This has led to a significant amount of speculation about how this could be explained. One of the explanations, for example, is that a violation or deformation of Lorentz invariance might be the cause of a shift in this threshold, which has been argued to be a signature of quantum gravity (see e.g. Alfaro and Palma, Loop quantum gravity corrections and cosmic ray decays, hep-th/0208193).
I have explained previously that I find these explanations implausible - as mentioned above, the energy in the center of mass frame is somewhere around a GeV, so could somebody please explain to me why on earth (pun intended) you'd expect quantum gravitational effects in that energy range?
One should also keep in mind that HiRes, on the other hand, has observed the cutoff where it is supposed to be. Their new data analysis again confirms the cutoff. The recent paper is here, and last month there was a brief article in Physics Today, "Fluorescence Telescopes Observe the Predicted Ultrahigh-Energy Cutoff of the Cosmic-Ray Spectrum" by Bertram Schwarzschild.
It is expected that Pierre Auger will present first results at the 30th International Cosmic Ray Conference, which will take place in Merida, Yucatan, Mexico from July 3 - 11, 2007. Hopefully, the situation will be clarified then.
So stay tuned...
I too am on my way to Mexico, to another conference, the Loops 2007.
[1]: Mpc means megaparsec. Mega is 10⁶, and one parsec is approximately 3.26 light years.
[2]: Kenneth I. Greisen, Cornell professor emeritus of physics and a pioneer in the study of cosmic rays, died March 17, 2007, at age 89.
Thursday, June 21, 2007
Random Sampling: Scientific American, October 1960
it's all "rocket science":
Monday, June 18, 2007
Save Money on Health Insurance
Saturday, June 16, 2007
Trains and Airplanes
Taking the plane is usually a fast, and often quite convenient, way of travelling. But sometimes thunderstorms interfere with the flight plan, and then it can happen that all flights but one from Munich to Frankfurt are cancelled. And if you have bad luck, you are sitting at the airport and wait and wait ... only to hear that you have to either spend the night there, or take a train which, had you taken it earlier, would have brought you to your destination hours ago... That's what happened to Bee yesterday - and that's why I'll meet her this morning at the train station, instead of at the airport yesterday evening.
Meanwhile her cellphone ran out of battery, so here is the last mail (rough translation):
From: sabine[@]
To: scherer[@]********.de
Subject: still in munich
I missed the onward flight to Frankfurt and was rebooked onto the next flight. Right now there's a really impressive thunderstorm outside; one can't see anything except loads of water running down the window and the occasional lightning. Unfortunately, it seems our aircraft was struck by lightning during touchdown, and it's not yet clear whether it can take off again [it turned out later it had to go out of service].... *argh* there was just an announcement that the airport is temporarily closed. Will try to find an outlet, I am running out of battery... say hello to the blogosphere ;-)
To make sure I haven't forgotten what she looks like after all these delays, she has sent me a recent photo, taken at the wine and cheese reception of the Warsaw conference - one of the more important events at every conference:
Thanks to Akin Wingerter for the photo - and I am off to the train station.
Friday, June 15, 2007
Positively Crazy
Some inspiration for your next seminar:
"better get used to staying up all night" - yo, man.
Wednesday, June 13, 2007
I meant to tell you something about the String Pheno, but the slides are still not online. So instead, just a lovely photo from the old part of Frascati, the town where it took place. I am on my way back to Italy, this time to Trieste, where I will attend the workshop From Quantum to Emergent Gravity: Theory and Phenomenology.
Tuesday, June 12, 2007
Blog Life
My husband just sent me an email, reporting that a former colleague pointed out this very interesting article at
which is part of a monthly column:
You see, we are in good company :-) I am very flattered by such honor and find the text remarkably accurate (well, I didn't know how to spell 'Landolt-Börnstein').
The mentioned Aero chocolate is here, and the 'humorous take on the recent debate over the status of string theory' is here. The 'lengthy and thoughtful posts' as well as the inspiration series you find in the sidebar.
Monday, June 11, 2007
Can you spot a fake smile? Take the test.
"Scientists distinguish between genuine and fake smiles by using a coding system called the Facial Action Coding System (FACS), which was devised by Professor Paul Ekman of the University of California and Dr Wallace V. Friesen of the University of Kentucky."
Especially recommendable when sitting in the afternoon session while the sun outside is shining brightly...
Why Are There Always So Many Other Things To Do?
Distractions, Like Butterflies Are Buzzing 'Round My Head...
~ Paul McCartney, Distractions
Saturday, June 09, 2007
Femme fatale, post mortale
Oh well, I promised you photos from Europe, didn't I? Here's what you look like if you hang above a door frame for too long ;-)
Seen somewhere in Rome around here, close by the Piazza del Quirinale. Still think you like buildings where you can feel the presence of the ancestors?
Friday, June 08, 2007
Hello from Warsaw!
Thursday, June 07, 2007
The Early Extra Dimensions
I have been fascinated by the idea of extra dimensions (XDs) since long before I finished high school. I, as apparently many other theoretical physicists, was a science fiction fan then. Besides randomly reading everything labeled with 'SF', I made my way through a considerable part of the Perry Rhodan series, which I loved for picking up political topics and projecting them into the future [1]. Admittedly, the technical details somehow escaped me, especially when it came to time travel and hyperspace.
This is just to say that the topics of hyperspace and XDs have inspired generations of physicists. And whoever it was who first did the calculation that shows string theory needs extra dimensions to make sense, it must have been one of the most exciting moments I can imagine for a theoretical physicist.
But XDs have come a long way, and were around long before string theory. People sometimes ask me why my talks never mention the earlier works on the topic. The reason is that the theories with XDs proposed in the 1920s by Theodor Kaluza and Oskar Klein are in their idea different from the 'modern' XDs. Yet this usually takes too much time to clarify in a talk, so I rather skip it. However, since you - and yes, I mean YOU who are just raising your eyebrows - are of course the most attentive reader there is, I want to elaborate somewhat on these 'early' XDs, since I noticed very few people actually read the original works by Kaluza and Klein.
General Relativity
The first mention of adding another dimension to the three space-like dimensions that we experience every day goes, to my knowledge, back to Nordström in 1913 [2]. He, however, did not yet use General Relativity (GR) to build his theory upon. Since we know today that the gravitational potential is not a scalar field, but is described by the curvature of space-time, let us skip to the next attempt, which uses GR as we know it today.
GR couples the metric tensor (g) to a source term of matter fields, whose characteristics are encoded in the stress-energy tensor of the matter. All kinds of energy and matter result in such a source term, and hence cause the metric to deviate from flat space. This theory does not say anything about the origin of the source terms. The matter and its properties have to be described by another theory - for example by electrodynamics. Electrodynamics, on the other hand, has a similar problem. The source for the electromagnetic field (charged particles) is not described by Maxwell's equations [5]. They need to be completed by further equations, e.g. the Dirac equation.
At the beginning of the last century, physicists had just understood gravity as a geometrical effect instead of a field in Minkowski space, so it was only natural to try the same for other fields as well, with the obvious next choice being the electric field. The idea of the early XDs is plain and simple. Einstein's field equations are a set of non-linear differential equations for the metric tensor. They are built up of the Ricci tensor (two indices), which is a contraction over the full curvature tensor (four indices), and the curvature scalar - a further contraction of the Ricci tensor. Such a contraction is basically a sum over two indices. The indices on these tensors label space-time directions - that is, in the standard case of GR with three space and one time dimension, they run from 1 to 4 (or, depending on taste, sometimes from 0 to 3).
Now if one had an additional dimension, two things would happen with Einstein's field equations. First, one has more equations, because there are more free indices. Since the Ricci tensor and the metric are symmetric, the number of independent equations is D(D+1)/2, where D is the total number of dimensions - for D = 5 that is 15 equations instead of the usual 10. The second thing that happens is that the equations with indices belonging to the 'usual' directions acquire additional terms, since the sum runs over the additional indices as well. The trick is then to separate the usual part (sum from 1 to 4) from the additional part (sum over the extra dimension), shift the additional part to the other side of the equations, and read it as a source term. In such a way, one obtains a source term even if the higher dimensional field equations were source free.
Kaluza and Klein
The result is that components of the higher-dimensional metric tensor appear as source terms for the four-dimensional sub-sector that we observe. The first such approach was Theodor Kaluza's, whose ansatz uses one additional dimension. In the remaining entries of the metric tensor (those with one index being a 5) he put the electromagnetic potential, with a coupling constant alpha (since the metric tensor is dimensionless but the electromagnetic potential isn't).
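Schematically, and with conventions (in particular the placement of the constant α) that are my own guess rather than necessarily Kaluza's original ones, the ansatz looks like

$$g_{MN} = \begin{pmatrix} g_{\mu\nu} & \alpha A_\mu \\ \alpha A_\nu & g_{55} \end{pmatrix},$$

with the four-dimensional metric $g_{\mu\nu}$ in the upper left block and the electromagnetic potential $A_\mu$ in the mixed components.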
(Here, the large Latin indices run over all dimensions, the small Greek indices over the usual four dimensions). Kaluza apparently sent a draft of his paper to Einstein in 1919, to ask for his opinion. It got published with a delay of two years [3].
Kaluza derived the higher dimensional field equations in the linear approximation. Generically, all the components of the metric tensor will be functions of all coordinates, including the additional one. This however is in conflict with what we observe. Kaluza therefore added what he called the 'cylinder condition' that set derivatives with respect to the additional coordinates to zero. In the linear approximation, he then found the ansatz to reproduce GR plus electrodynamics.
However, the use of this linear approximation is not necessary, as was shown by Oskar Klein five years later [4]. Klein used a different ansatz for the metric which has an additional quadratic term:
(Sorry, the coupling constant is missing - my fault, not Oskar's.) And he assumed the additional coordinate is compactified on a circle. Then one can expand all components in a Fourier series, and the zero mode will fulfill Kaluza's cylinder condition; that is, it is independent of the fifth coordinate. However, if you compare both ansätze [7], Klein's and Kaluza's, you will notice that Klein set the g_55 component constant to one. This is an additional constraint that generally will not be fulfilled. In fact, the additional entry behaves like a scalar field and describes something like the radius of the XD. At that time, however, people had little use for additional scalar fields.
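For reference, Klein's ansatz can be sketched as the line element (with a coupling constant α put back in by hand; the conventions are mine and may differ from the original):

$$ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu + \left(dx^5 + \alpha A_\mu\, dx^\mu\right)^2,$$

which gives $g_{55} = 1$, $g_{\mu 5} = \alpha A_\mu$, and the quadratic term $\alpha^2 A_\mu A_\nu$ in the four-dimensional block mentioned above.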
Klein's derivation is simply one of the most beautiful calculations I know. One just writes down the higher dimensional field equations, parametrizes the metric tensor according to Klein's ansatz, decomposes the equations - and what comes out is GR in four dimensions (in the Lagrangian formulation as well as the field equations), plus the free Maxwell equations.
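Schematically, and up to sign and normalization conventions, the result of the decomposition can be written as

$$R^{(5)} = R^{(4)} - \frac{\alpha^2}{4}\, F_{\mu\nu} F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,$$

so the five-dimensional Einstein-Hilbert action splits into the four-dimensional one plus the Maxwell action.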
(Here, the superscripts (4) and (5) refer to the four- and five-dimensional parts of the curvature/metric.) Further, the geodesic equation gets an additional term which is just the Lorentz force term, and thus describes a charged particle moving in a curved space with an electromagnetic field.
In the course of this derivation, one is led to identify the momentum in the direction of the fifth coordinate with the ratio of charge over mass (q/m). It can be shown that this quantity is conserved, as it should be. Klein concluded that this charge is quantized in discrete steps (this is a geometrical quantization), the first example of the Kaluza-Klein tower.
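In modern language the quantization is just the Fourier decomposition on the circle; a sketch, assuming a compactification radius R and units with ħ = 1:

$$\psi(x^\mu, x^5) = \sum_{n \in \mathbb{Z}} \psi_n(x^\mu)\, e^{\,i n x^5 / R} \quad\Rightarrow\quad p_5 = \frac{n}{R},$$

so the momentum in the fifth direction, and with it the charge-to-mass ratio it is identified with above, comes only in discrete steps, one step per mode of the Kaluza-Klein tower.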
Extensions and Problems
To understand the excitement this derivation must have caused, one has to keep in mind that this was 30 years before Yang and Mills, and the understanding of gauge theory was not at today's level. With today's knowledge, the argumentation appears somewhat trivial. One adds an additional dimension with U(1) symmetry, the compactified dimension. The resulting theory then has to show this symmetry, which we know belongs to electrodynamics. From this point of view, it is only natural to extend the Kaluza-Klein (KK) approach to other gauge symmetries, i.e. non-abelian groups. This was done in 1968 [6].
One has to note, however, that for non-abelian groups the curvature of the additional dimensions will not vanish; thus flat space is no longer a solution to the field equations. However, it turns out that the number of additional dimensions one needs for the gauge symmetries of the Standard Model, U(1)xSU(2)xSU(3), is 1+2+4=7 [10]. Thus, together with our usual four dimensions, the total number of dimensions is D=11. Now exponentiate this finding by the fact that 11 is the favourite number of those working on supergravity, and you'll understand why KK was traded as a hot candidate for unification.
But there are several problems with the traditional KK approach. First, meanwhile the age of quantum field theory had begun, and all these considerations were purely classical and unquantized. Even more importantly, there are no fermions in this description - note that we have only talked about the free Maxwell equations. The reason is easy to see: fermions are spin 1/2 fields, and unlike vector bosons one can not just write them into the metric tensor. One can of course add additional source terms, but this makes the idea somewhat less appealing [8]. The high hope had been to explain all matter and fields from a purely geometric approach.
If one thinks more about the fermions, one notices another problem: right- and left-handed fermions belong to different electroweak representations, a feature that is hard to include in a geometrical interpretation. Furthermore, there is the problem of stabilization of the compact extra dimensions (their sizes should not, or only negligibly, depend on the time-like coordinate), and the problem of singularity formation from GR persists in this approach. However, if I consider what landscape of problems other theories suffer from, it makes me wonder why the KK approach was so suddenly given up in the early '70s. A big part of the reason might simply have been that the quark model got established, and it was the dawn of the particle-physics era.
The 'modern' extra dimensions differ from the KK approach in not attempting to explain the other standard model fields as components of the metric. Instead, fermionic and gauge fields are additional fields that are coupled to the metric. They are allowed to propagate into the extra dimensions, but are not themselves geometrical objects. Most features of the KK approach remain, most notably the geometrical quantization of the momenta into the extra dimensions and thus the KK tower of excitations. The problems of stabilization, singularities and quantization also remain (for higher dimensional quantum field theories the coupling constants become dimensionful). However, for me this 'modern' approach is considerably less appealing, as one has lost the possibility to describe gauge symmetries and standard model charges as arising from the same principle as GR.
But obviously, the largest problem with the KK approaches was - and still is - that it is not clear whether they are just a mathematical possibility or indeed a description of reality. As Oskar Klein put it in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."
"Whether these indications of possibilities are built on reality has of course to be decided by the future." [9]
[1] And if you read the wikipedia entry on Perry Rhodan you find a use for the word 'Zeitgeist'...
[2] G. Nordström, "Zur Theorie der Gravitation vom Standpunkt des Relativitätsprinzips" ("On the Theory of Gravitation from the Standpoint of the Relativity Principle"), Annalen der Physik, vol. 347, Issue 13, pp. 533-554 (1913); G. Nordström, "Über die Möglichkeit, das elektromagnetische Feld und das Gravitationsfeld zu vereinigen" ("On the Possibility of Unifying the Electromagnetic Field and the Gravitational Field"), Physik. Zs. 15, 504-506 (1914) [Abstract]
[3] T. Kaluza, "Zum Unitätsproblem der Physik" ("On the Problem of Unity in Physics"), Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) (1921) 966.
[4] O. Klein, "Quantentheorie und fünfdimensionale Relativitätstheorie" ("Quantum Theory and Five-Dimensional Relativity Theory"), Z. Phys. 37, 895 (1926).
[5] Unlike what the Wikipedia entry states, the Lorentz force law can not be derived from the Maxwell equations without further assumptions (like a Lagrangian for the coupled sources). E.g. Maxwell's equations are perfectly consistent with a static superposition of two negatively charged objects; it's just that we know the charged particles would repel, and the configuration can't be static.
[6] R. Kerner, "Generalization of the Kaluza-Klein theory for an arbitrary non-abelian gauge group", Ann. Inst. Henri Poincare, 9, 143-152 (1968)
[7] Contrary to widespread belief, the plural of the German word 'Ansatz' is not 'Ansatzes' but 'Ansätze' (pronounced 'unsetze'). 'Ansatz' could be roughly translated as 'a good point to start', or a preparation. E.g. the pre-stage for yeast dough is called 'Ansatz'...
[8] Which finally brings us to the topic on which I lost two years during my Ph.D. time, namely the question whether one can build up the metric tensor from spin 1/2 fields. I only learned considerably later that most of this approach had been worked out in the mid-1980s, see e.g. hep-th/0307109 and references therein.
[9] He indeed writes it has to be decided 'by' the future not 'in' the future. Quotation from Ref. [4]
Further Literature
Hello From Rome!
On the weekend, I flew to Rome to attend the String Pheno 2007. Meanwhile, my baggage decided to take a vacation in Palermo. It arrived yesterday evening, with four days' delay. I've been wearing the same clothes since the weekend, but this morning I saw myself faced with an incredible selection! A second pair of jeans! Two T-shirts! A dress!
However, despite these inconveniences, I have had a very pleasant stay so far, since it turned out that Amara Graps (you might know her from the blogosphere) lives nearby. She was kind enough to lend me some clothes, and yesterday we spent a very nice afternoon in Rome. Since I am currently sitting in the mentioned conference (and should at least pretend to listen), let me instead show you a photo.
Okay, now I have to prepare my talk... later more...
Wednesday, June 06, 2007
arXiv User Survey
Just a quick link: in case you ever wanted to express your opinion about the arXiv, take their poll (from June 4-8)
It will take approximately 20 minutes, and has plenty of comment options to complain about the new arXiv listing (or the eternal bug in the search field if you search for a tag containing the word 'not').
Other points that I find worth mentioning: the arXiv should allow comments on papers, and a ranking (different from times cited). Comments would be helpful to avoid the increasing amount of 'reply-to-reply-to-reply-to's; a ranking I would find a good idea because it has become almost impossible to find a good review or lecture notes if one doesn't know the author (and lecture notes don't usually become top-cites).
Monday, June 04, 2007
The Teddy Factory
A somewhat belated 'Hello' from Germany!
What is new here? Well, the whole country is upset about the G8 summit in a city that nobody had ever heard of before. Border security has been enhanced, and Lufthansa had all passengers sniffed at by a huge dog. I can also report that Lufthansa meal sizes have increased again, they still serve liquor for free, and my flight was otherwise a classical chicken-or-pasta event. Besides this: congratulations America! When I was here last, in late 2006, newspapers were praising the USA for remembering democracy; now you've successfully regained global-asshole status (I am currently not in the mood to elaborate on global energy scarcity; for an extended version of my opinion, see Global Warming).
Sure sure, America is still faster, bigger, better: Germany still doesn't have penny trays (I consider that to be one of the most important advantages of the USA), they still don't know what 'cash back' is, and shops are still closed when I finally find the time to go there.
Something completely different: since all-my-mother's-children have moved out and the cat died, the house is getting populated by an ever-increasing number of handmade Teddy bears.
[Figure: Representative sample of the German population
-- Can you find out which item does not fit in?]
Friday, June 01, 2007
Bouncing Neutrons in the Gravitational Field
I remember a moment of excitement and puzzlement early on in my first class in quantum mechanics, when our professor announced that now, he would discuss the "freier Fall" in quantum mechanics. I was excited, because it seemed great to me to transfer such an elementary situation as the free fall of a stone into the realm of quantum mechanics, and puzzled, because I knew that the gravitational potential is so extremely weak that it can be safely ignored on scales where quantum mechanics comes into play - at least, in most cases. Alas, that lecture was quite a disappointment, because of an ambiguity of the German wording "freier Fall": it can mean both the free "fall", and the free "case" (as in Wittgenstein's famous dictum "Die Welt ist alles, was der Fall ist.") - and what we learned in our lecture was just about the free case, the quite boring plane wave motion of a quantum particle subject to no potential whatsoever.
What we did not learn in our class was that, even back at that time, there had been several clever experiments with neutrons which demonstrate the influence of the gravitational potential on the phase of the neutron wave function using interferometers. Neutrons, of course, are ideal particles to perform such experiments, since they have no electric charge and are not subject to the influence of the ubiquitous electromagnetic fields.
But only over the last few years have new experiments been realised that directly show the quantisation of the vertical "free fall" motion of neutrons in the gravitational field of the Earth. I had heard about them some time ago in connection with their possible role in the detection of non-Newtonian forces, or modifications of Newtonian gravity at short distances. Then, earlier this year, I heard a talk by one of the experimenters at Frankfurt University, and I was quite fascinated when I followed the papers describing the experiments.
The essential point of these experiments is the following: If you prepare a beam of very slow neutrons - with velocities of about 10 m/s - you can make them hop over a reflecting plane much like you can skip a pebble over the surface of a lake. Then, you can observe that the vertical part of the motion of the neutrons - with velocities smaller than 5 cm/s - is quantised. In fact, one can detect the quantum states of neutrons in the gravitational field of the Earth! Let me explain in more detail...
Free Fall in Classical Mechanics ...
In order to better understand the experiment, let's go back one step and consider the very simple motion of an elastic ball which is dropped on the ground. If the ground is plane and reflecting, and the ball is ideally elastic such that there is no dissipation of energy, the ball will jump back to the height of where it was dropped from, fall down again, jump back, fall, and so on. The height of the ball over ground as a function of time is shown as the blue curve in the left of this figure: it is simply a sequence of nice parabolas.
We can now ask: What is the probability to find the bouncing ball at a certain height above the floor? For example, we could make a movie of the bouncing ball, take a still at some random time, and check the distribution of the height of the ball if we repeat this for many random stills. The result of this random sampling of the bouncing motion of the ball is the probability distribution shown in red on the right-hand side of the figure. The probability to find the ball at a certain height in this idealised, "stationary" situation, where the elastic ball is bouncing forever, is highest at the upper turning point of the motion, and lowest at the bottom, where the ball is reflected.
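Here is a minimal Python sketch of exactly this random-sampling procedure (the drop height and the number of stills are arbitrary choices):

```python
import numpy as np

g, h0 = 9.81, 1.0                    # gravitational acceleration and drop height
t_half = np.sqrt(2 * h0 / g)         # time to fall from the top to the floor

# take "stills" at random times within one bounce (t = 0 is the top of the arc)
t = np.random.uniform(-t_half, t_half, 1000000)
z = h0 - 0.5 * g * t**2              # height of the ball at each still

counts, edges = np.histogram(z, bins=10, range=(0.0, h0), density=True)
for lo, p in zip(edges[:-1], counts):
    print("height %.1f - %.1f: p ~ %.2f" % (lo, lo + h0 / 10, p))
# the density rises toward the top like 1/sqrt(h0 - z): the ball spends
# most of its time near the upper turning point, and least near the floor
```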
... and in Quantum Mechanics
So much for classical mechanics, as we know it from everyday life. In quantum mechanics, unfortunately, there is no longer such a thing as the path of a particle, with position and velocity as well-defined quantities at any instant in time. However, it still makes sense to speak of stationary states, and of the probability distribution to find a particle at a certain position. In quantum mechanics, it is the wave function which provides us with this probability distribution by calculating its square. And the law of nature determining the wave function is encoded in the famous Schrödinger equation. The Schrödinger equation for a stationary state is an "eigenvalue equation", whose solution yields, at the same time, the wave function and the value of the energy of the corresponding state. For the motion of a particle in a linear potential - such as the potential energy mgx of a particle with mass m at height x above ground in the gravitational field with acceleration g at the surface of the Earth - it reads
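In standard notation, and with the wave function required to vanish at the reflecting floor (a boundary condition added here for completeness):

$$-\frac{\hbar^2}{2m}\,\frac{d^2\psi(x)}{dx^2} + m g x\, \psi(x) = E\, \psi(x), \qquad \psi(0) = 0.$$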
In some cases, there are so-called "exact solutions" to the Schrödinger equation - wave functions that are given by certain functions one can look up in thick compendia, or at MathWorld. These functions usually are some beautiful beasts out of the zoo of so-called "special functions". Such is the case for the motion of a particle in a linear potential, where the solution of the Schrödinger equation is given by the Airy function Ai(x). Interestingly, this function first showed up in physics when the British astronomer George Airy applied the wave theory of light to the phenomenon of the rainbow...
Quantum States of Particles in the Gravitational Field
As a result of solving the Schrödinger equation, there is a stationary state with a minimal energy - the ground state - and a series of excited states with higher energies. Here is how the wave function of the second excited state of a particle in the gravitational field looks like as a function of the height above ground:
The wave function, shown on the left in magenta, oscillates through two nodes, and goes to zero exponentially above the classical limiting height, which corresponds to the upper turning point of the parabola of a classical particle with the same energy. For neutrons in this state, this height is 32.4 µm above the plane. The green curve on the right shows the probability density corresponding to the wave function. It is quite different from the classical probability density, shown in red. As a characteristic property of a quantum system, there is, besides the two nodes, a certain probability to find the particle above the classical turning point. This is an example of the tunnel effect: there is a chance to find a quantum particle in regions where, by the laws of classical physics, it would not be allowed to be because of insufficient energy.
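The 32.4 µm quoted above can be reproduced in a few lines, since the allowed energies are fixed by the zeros of the Airy function; here is a sketch using scipy (the third Airy zero corresponds to the second excited state):

```python
from scipy.special import ai_zeros

hbar = 1.0546e-34    # Planck's constant / 2 pi, in J s
m = 1.675e-27        # neutron mass in kg
g = 9.81             # gravitational acceleration in m/s^2

eps = (hbar**2 * m * g**2 / 2.0) ** (1.0 / 3.0)   # natural energy scale (~0.6 peV)
zeros = ai_zeros(3)[0]                            # first three zeros of Ai(x), all negative

for n, a in enumerate(zeros, start=1):
    E = -a * eps                  # energy of the n-th gravitational quantum state
    h = E / (m * g)               # classical turning height for that energy
    print("state %d: E ~ %.2f peV, turning height ~ %.1f micrometer"
          % (n, E / 1.602e-19 * 1e12, h * 1e6))
# state 1: ~1.4 peV and ~13.7 micrometer
# state 3: ~3.3 peV and ~32.4 micrometer (the second excited state shown above)
```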
However, going from the ground state to ever higher excited states eventually reproduces the probability distribution of classical physics. This is what is called the correspondence principle, and you can see what it means if you have a look at the wave function for the 60th excited state: Here, the probability distribution derived from the quantum wave function already follows the classical distribution very closely.
So far, we have been talking about theory: the Schrödinger equation and its solutions in the guise of the Airy function. There is no reason at all to doubt the validity of the Schrödinger equation: it has been thoroughly tested in innumerable situations, from the hydrogen atom to solid state physics. However, in all these situations, the interaction of the particles involved is electromagnetic, not gravitational. For this reason, it is extremely interesting to think about ways to check the solution of the Schrödinger equation for particles in the gravitational field. As we have seen before, the best way to do this is to work with neutrons, in order to avoid spurious electromagnetic effects.
Bouncing Neutrons in the Gravitational Field
Unfortunately, it is so far not possible to scan directly the probability distribution of neutrons in the gravitational field. However, in a clever experimental setup, one can look at the transmission of neutrons through a channel formed by a horizontal reflecting surface, where they can bounce like pebbles over a lake, and an absorber overhead. This is a rough sketch of the setup:
The decisive idea of the experiment is to vary the height of the absorber above the reflecting plane, and to monitor the transmission of neutrons as a function of this height. If the height of the absorber is too low, the ground state for the vertical motion of the neutrons does not fit into the channel, and no neutrons will pass the channel. Transmission sets in once the height of the channel is sufficiently large to accommodate the ground state wave function of the vertical motion of the neutron. Moreover, whenever, with increasing height of the channel, one more of the excited wave functions fits in, the transmission should increase. The first of these steps, and the corresponding wave functions and probability densities, are shown in this figure:
The interesting point now is, can this stepwise increase of transmission be observed in actual experimental data? Here are measured data, and indeed - the first step is clearly visible, and the second and third step can be identified:
Adapted by permission from Macmillan Publishers Ltd: Nature (doi:10.1038/415297a), copyright 2002.
This has been the first verification of quantised states of particles in the gravitational field!
What can be learned
You may wonder if the experiment may not have shown just some "particle in a box" quantisation, since the channel for the neutrons, formed by the reflecting plane and the absorber, may make up such a box. This objection has indeed been raised in a comment paper, and has been answered by detailed calculations and improved experiments: the conclusion about quantisation in the gravitational field remains fully valid!
However, the limits on modifications of Newtonian gravity from this experiment remain modest. Such a modification would change the potential the neutrons are moving in. For example, a short-range force caused by the matter of the reflecting plane could contribute to the potential of the neutrons. However, as it turns out, such an additional potential would be very weak and have nearly no influence at all on the overall wave function of the neutron.
Moreover, it is clear that in this experiment, the gravitational field is always a classical background field, which itself is not quantised at all. There may be the possibility that a neutron undergoes a transition from, say, the second to the first quantised state, thereby emitting a graviton - similar to the electron in an atom, which emits a photon when it makes a transition. Unfortunately, this probability is so low that it is not reasonable to expect that it will ever be measured....
But all these restrictions do not change at all the main point, that this is a very exciting, elementary experiment which could find its way into the textbooks of quantum mechanics!
Here are some papers about the "bouncing neutron" experiment:
Quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.K. Petoukhov, H. Abele, S. Baessler, F. Ruess, Th. Stoeferle, A. Westphal, A.M. Gagarski, G.A. Petrov, and A.V. Strelkov; Nature 415 (2002) 297-299 (doi: 10.1038/415297a) - The first description of the result.
Measurement of quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.M. Gagarsky, A.K. Petoukhov, G.A. Petrov, H.Abele, S. Baessler, G. Divkovic, F.J. Ruess, Th. Stoeferle, A. Westphal, A.V. Strelkov, K.V. Protasov, A.Yu. Voronin; Phys.Rev. D 67 (2003) 102002 (doi: 10.1103/PhysRevD.67.102002 | arXiv: hep-ph/0306198v1) - A more detailed description of the experimental setup and the first results.
Study of the neutron quantum states in the gravity field by V.V. Nesvizhevsky, A.K. Petukhov, H.G. Boerner, T.A. Baranova, A.M. Gagarski, G.A. Petrov, K.V. Protasov, A.Yu. Voronin, S. Baessler, H. Abele, A. Westphal, L. Lucovac; Eur.Phys.J. C 40 (2005) 479-491 (doi: 10.1140/epjc/s2005-02135-y | arXiv: hep-ph/0502081v2) - Another more detailed discussion of the experimental setup, possible sources of error, and the first results.
Quantum motion of a neutron in a wave-guide in the gravitational field by A.Yu. Voronin, H. Abele, S. Baessler, V.V. Nesvizhevsky, A.K. Petukhov, K.V. Protasov, A. Westphal; Phys.Rev. D 73 (2006) 044029 (doi: 10.1103/PhysRevD.73.044029 | arXiv: quant-ph/0512129v2) - A long and detailed discussion of point such as the "particle in the box" ambiguity and the role of the absorber.
Constrains on non-Newtonian gravity from the experiment on neutron quantum states in the Earth's gravitational field by V.V. Nesvizhevsky, K.V. Protasov; Class.Quant.Grav. 21 (2004) 4557-4566 (doi: 10.1088/0264-9381/21/19/005 | arXiv: hep-ph/0401179v1) - As the title says: a discussion of the constraints for Non-Newtonian forces.
Spontaneous emission of graviton by a quantum bouncer by G. Pignol, K.V. Protasov, V.V. Nesvizhevsky; Class.Quant.Grav. 24 (2007) 2439-2441 (doi: 10.1088/0264-9381/24/9/N02 | arXiv: quant-ph/0702256v1) - As the title suggests: the estimate for the emission of a graviton by the neutron in the gravitational field.
Feynman's QED
The new edition of Feynman's QED: The Strange Theory of Light and Matter
I was invited to write an introduction to the new edition of Feynman's classic book on quantum electrodynamics. For those interested, here is the introduction.
The story of how we came to know light makes for one gripping drama, complete with twists and turns and reversals of fortune.
The photon is the most visible of all elementary particles: place yourself in a dusty room with one small window open on a sunny day and watch a multitude of the little buggers hurrying across the room. Newton quite naturally thought that light consists of a stream of particles ("corpuscles"), but already he had some doubts; even in the 17th century the diffraction of light could be readily observed. Eventually, diffraction and other phenomena appeared to show without doubt that light is an electromagnetic wave. That monument of 19th century physics, Maxwell's equations of electromagnetism, formulated light entirely as a wave. Then Einstein came along and explained the photoelectric effect by postulating light as the sum of little packets ("quanta") of energy. Thus were the word photon and the quantum theory of light born. (Here I will not digress and recall Einstein's famous discomfort with quantum mechanics even though he helped at its birth.) Meanwhile, from the 1920s through the 1940s, physicists worked out the quantum behavior of matter ("atoms") thoroughly. Thus, it was all the more puzzling that the quantum behavior of light and its interaction with electrons resisted the efforts of the best and the brightest, notably Paul Dirac and Enrico Fermi. Physics had to wait for three young men, Feynman, Schwinger, and Tomonaga, filled with optimism and pessimism as the case may be from their experiences of the World War, to produce the correct formulation of quantum electrodynamics, aka QED.
As probably every reader of the book knows, Richard Feynman (1918-1988) was not only an extraordinary physicist, but also an extraordinary figure, a swashbuckling personality the likes of which theoretical physics has not seen before or since. Occasionally theoretical physicists would while away an idle moment comparing the contributions of Feynman and Schwinger, both nice Jewish boys from New York and almost exact contemporaries. This senseless discussion serves no purpose, but it is a fact that while Julian Schwinger was a shy and retiring person (but rather warm and good-hearted behind his apparent remoteness), Dick Feynman was an extreme extrovert, the stuff of legends. With his bongo drums, showgirls, and other trappings of a carefully cultivated image enthusiastically nurtured by a legion of idolaters, he is surely the best-loved theoretical physicist next to Einstein.
The brilliant Russian Lev Landau famously had a logarithmic scale for ranking theoretical physicists with Einstein on top. It is also well known that Landau moved himself up half a step after he formulated the theory of phase transitions. I have my own scale of fun, on which I place theoretical physicists I know either in person or in spirit. Yes, it is true: most theoretical physicists are dull as dish water and rank near minus infinity on this logarithmic scale. I would place Schrödinger (about whom more later) on top, but Feynman would surely rank close behind. I can't tell you where I land on my own scale, but I do try to have as much fun as possible, limited by the amount of talent and resources at my disposal.
But what fun Feynman was! Early in my career, Feynman asked me to go to a nightclub with him. One of Feynman's colleagues told me that the invitation showed that he took me seriously as a physicist, but while I was eager to tell Feynman my thoughts about Yang-Mills theory he only wanted my opinion on the legs of the dancing girls on stage. Of course, in the psychology of hero worship, nobody gives two hoots about some bozo of a physicist playing drums and liking showgirls. So all right, my scale is really fun times talent --- Landau's scale with fun factored in, with the stock of Einstein falling and Landau rising (he played some good pranks until the KGB got him.)
Now some thirty years after that night club visit I felt honored that Ingrid Gnerlich of Princeton University Press asked me to write an introduction to a new edition of Feynman's famous book "QED: The Strange Theory of Light and Matter." First a confession: I had never read this book before. When this book came out in 1985 I had just finished writing my first popular physics book "Fearful Symmetry" and I more or less adopted a policy of not reading other popular physics books for fear of their influencing my style. Thus, I read the copy Ingrid sent me with fresh eyes and deep appreciation. I enjoyed it immensely, jotting down my thoughts and critiques as I went along.
I was wrong not to have read this book because it is not a popular physics book in the usual sense of the phrase. When Steve Weinberg suggested in 1984 that I write a popular physics book and arranged for me to meet his editor in New York he gave me a useful piece of advice. He said that most physicists who wrote such books could not resist the urge of explaining everything while the lay reader only wanted to have the illusion of understanding and to catch a few buzz words to throw around at cocktail parties.
I think that Weinberg's view, though somewhat cynical, is largely correct. Witness the phenomenal success of Hawking's "A Brief History of Time" (which I have not read, in accordance with the policy I mentioned earlier). One of my former colleagues here at the University of California, a distinguished physicist who now holds a chair at Oxford, once showed me a sentence from that book. The two of us tried to make sense of it and failed. In contrast, I want to assure all the puzzled readers that every sentence in this book, though seemingly bizarre to the max, makes sense. But you must mull over each sentence carefully and try hard to understand what Feynman is saying before moving on. Otherwise, I guarantee you that you will be hopelessly lost. It is the physics that is bizarre, not the presentation. After all, the title promises a "strange theory."
Since Feynman is Feynman, he chose to go totally against the advice Weinberg gave me (advice which I incidentally also did not follow completely; see my remark below regarding group theory). In the acknowledgement, Feynman decried popular physics books as achieving "apparent simplicity only by describing something different, something considerably distorted from what they claim to be describing." Instead, he posed himself the challenge of describing QED to the lay reader without "distortion of the truth." Thus, you should not think of this book as a typical popular physics book. Neither is it a textbook. A rare hybrid it is instead.
To explain what kind of book this is, I will use Feynman's own analogy, somewhat modified. According to Feynman, to learn QED you have two choices: you can either go through 7 years of physics education or read this book. (His figure is a bit of an overestimate; these days a bright high school graduate with the proper guidance could probably do it in less than 7 years.) So you don't really have a choice, do you? Of course you should choose to read this book! Even if you mull over every sentence as I suggest you do, it should still take you less than 7 weeks, let alone 7 years.
So how do these two choices differ? Now comes my version of the analogy: a Mayan high priest announced that for a fee he could teach you, an ordinary Joe in Mayan society, how to multiply two numbers, for example 564 by 253. He made you memorize a 9 by 9 table and then told you to look at the two digits farthest to the right in the two numbers you have to multiply, namely 4 and 3, and say what is in the 4th row and 3rd column. You said 12. Then you learned that you should write down 2 and "carry" 1, whatever that meant. Next you were to say what is in the 6th row and 3rd column, namely 18, to which you were to add the number you were carrying. Of course, you had to spend another year learning how to "add." Well, you got the idea. This is what you would learn after paying tuition at a prestigious university.
Instead, a wise guy named Feynman approached you, "Shh, if you know how to count, you don't have to learn all this fancy stuff about carrying and adding! All you've got to do is to get hold of 564 jars. Then you put into each jar 253 pebbles. Finally, you pour all the pebbles out onto a big pile and count them. That's the answer!"
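To make the analogy concrete, here is a minimal sketch in Python (my own illustration, not anything from Feynman's book) contrasting the high priest's digit-by-digit recipe, with its "carrying," against the jars-and-pebbles method, which needs nothing but counting. The function names and the example numbers 564 and 253 simply follow the analogy above.

```python
def priest_multiply(a, b):
    """Digit-by-digit long multiplication with explicit 'carrying'."""
    digits_a = [int(d) for d in str(a)][::-1]   # least significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result = [0] * (len(digits_a) + len(digits_b))
    for i, da in enumerate(digits_a):
        carry = 0
        for j, db in enumerate(digits_b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10          # write down one digit...
            carry = total // 10                 # ...and "carry" the rest
        result[i + len(digits_b)] += carry
    return int("".join(str(d) for d in reversed(result)))

def feynman_multiply(a, b):
    """a jars, b pebbles in each jar: pour them all out and count the pile."""
    pile = 0
    for _jar in range(a):
        pile += b                               # count the pebbles from one more jar
    return pile

print(priest_multiply(564, 253), feynman_multiply(564, 253))   # both give 142692
```

Both functions return the same answer; the pebble method is completely transparent but hopelessly slow for big numbers, which is exactly the point made in the next paragraph about the accountant's job.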
So you see, Feynman not only teaches you how to multiply, but also gives you a deep understanding of what the high priests and their students, those people soon to have Ph.D.'s from prestigious universities, are doing! On the other hand, if you learn to multiply Feynman's way you couldn't quite apply for a job as an accountant. If your boss asks you to multiply big numbers all day long, you would be exhausted, and the students who went to High Priest University would leave you in the dust.
Having written both a textbook ("Quantum Field Theory in a Nutshell" henceforth to be referred to as Nutshell) and two popular physics books (including "Fearful Symmetry" henceforth Fearful) I feel that I am quite qualified to address your concerns about what kinds of books to read. By the way, Princeton University Press, the publisher of this book, publishes both Nutshell and Fearful.
Let me divide the readers of this introduction into three classes: (1) students who may be inspired by this book to go on and master QED, (2) intelligent lay persons curious about QED, and (3) professional physicists like myself.
If you are in (1), you will be so incredibly inspired and fired up by this book that you will want to rush out and start reading a textbook on quantum field theory (and it might as well be Nutshell!) By the way, these days QED is considered a relatively simple example of a quantum field theory. In writing Nutshell I contend that a truly bright undergrad would have a good shot at understanding quantum field theory, and Feynman would surely agree with me.
But as in the analogy, reading this book alone will in no way turn you into a pro. You have to learn what Feynman referred to as the "tricky, efficient way" of multiplying numbers. In spite of Feynman's proclaimed desire to explain everything from scratch, he noticeably runs out of steam as he goes on. For example, on page 89 and in figure 56, he merely describes the bizarre dependence of P(A to B) on the "interval I" and you just have to take his word for it. In Nutshell, this is derived. Similarly for the quantity E(A to B) described in the footnote on page 91.
If you are in (2), persevere and you will be rewarded, trust me. Don't rush. Even if you only get through the first two chapters you would have learned a lot. Why is this book so hard to read? We could go back to the Mayan analogy: it is as if you are teaching someone to multiply by telling him about jars and pebbles but he doesn't even know what a jar or a pebble is. Feynman is bouncing around telling you about each photon carrying a little arrow, and about how you add up these arrows and multiply them, shrinking and rotating them. It is all very confusing; you can't afford even the slightest lapse in attention. Incidentally, the little arrows are just complex numbers (as explained in a footnote on page 63) and if you already know about complex numbers (and jars and pebbles) the discussion might be less confusing. Or perhaps you are one of those lay readers described by Weinberg as typical, who are satisfied with "the illusion of understanding something." In that case, you may be satisfied with a "normal" popular physics book. Again the Mayan analogy: a normal popular physics book would burden you neither with 9 by 9 tables and carrying, nor with jars and pebbles. It might simply say that when given two numbers the high priests have a way of producing another number. In fact, editors of popular physics books insist that authors write like that in order not to scare away the paying public (more below).
Finally, if you are in (3), you are in for a real treat. Even though I am a quantum field theorist and know what Feynman is doing, I still derived great pleasure from seeing familiar phenomena explained in a dazzlingly original and unfamiliar way. I enjoyed having Feynman explain to me why light moves in a straight line or how a focusing lens really works (on page 58: "A trick can be played on Nature" by slowing light down along certain paths so the little arrows all turn by the same amount!)
Shh, I will tell you why Feynman is different from most physics professors. Go ask a physics professor to explain why in the reflection of light from a pane of glass it suffices to consider reflection from the front surface and the back surface only. Very few would know the answer (see page 104). It is not because physics professors lack knowledge, but because it has never even occurred to them to ask this question. They simply study the standard textbook by Jackson, pass the exam, and move on. Feynman is the pesky kid who is forever asking why why WHY!
With three classes of readers (the aspiring student, the intelligent layperson, the pro) there are also three categories of physics books (not in one-to-one correspondence): textbooks, popular books, and what I might call "extra-difficult popular physics books." This book is a rare example of the third category, in some sense intermediate between a textbook and a popular book. Why is this third category so thinly populated?
Because "extra-difficult popular physics books" scare publishers half to death. Hawking famously said that every equation halves the sale of a popular book. While I do not deny the general truth of this statement I wish that publishers would not be so easily frightened. The issue is not so much the number of equations, but whether popular books could contain honest presentation of difficult concepts. When I wrote Fearful, I thought that to discuss symmetry in modern physics it would be essential to explain group theory. I tried to make the concepts accessible by the use of little tokens: squares and circles with letters inside them. But the editor compelled me to water the discussion down repeatedly until there was practically nothing left, and then to relegate much of what was left to an appendix. Feynman, on the other hand, had the kind of clout that not every physicist writer would have.
Let me return to Feynman's book with its difficult passages. Many of the readers of this book would have had some exposure to quantum physics. They may be legitimately puzzled for example by the absence of the wave function that figures so prominently in other popular discussion of quantum physics. Quantum physics is puzzling enough --- as a wit once said, "With quantum physics, who needs drugs?" Perhaps the reader should be spared further head scratching. So let me explain.
Almost simultaneously but independently, Erwin Schrödinger and Werner Heisenberg invented quantum mechanics. To describe the motion of an electron for example, Schrödinger introduced a wave function governed by a partial differential equation, now known as the Schrödinger equation. In contrast, Heisenberg mystified those around him by talking about operators acting on what he called quantum states. He also famously enunciated the uncertainty principle, which physically states that the more accurately one were to measure say the position of a quantum particle the more uncertain becomes one's knowledge of its momentum, and vice versa.
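For readers who want the formulas behind this capsule history (they are not written out in the text, so this is a standard-notation supplement), the Schrödinger equation for a particle of mass m in a potential V and the Heisenberg uncertainty relation read:

```latex
i\hbar\,\frac{\partial \psi(x,t)}{\partial t}
  = -\,\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}}
    + V(x)\,\psi(x,t) ,
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2} .
```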
The formalisms set up by the two men were manifestly different, but the bottom-line result for any physical processes they obtained always agreed. Later, the two formalisms were shown to be completely equivalent. Today, any decent graduate student is expected to pass from one formalism to the other with facility, employing one or the other according to which one is more convenient for the problem at hand.
Six years later, in 1932, Paul Dirac suggested, in a somewhat rudimentary form, yet a third formalism. Dirac's idea appeared to be largely forgotten until 1941 when Feynman developed and elaborated this formalism, which became known as the path integral or sum over history formalism. (Physicists sometimes wonder whether Feynman invented this formalism completely ignorant of Dirac's work. Historians of physics have now established that the answer is no. During a party at a Princeton tavern, a visiting physicist named Herbert Jehle told Feynman about Dirac's idea and apparently the next day Feynman worked out the formalism in real time in front of the awed Jehle. See the 1986 article by S. Schweber in Reviews of Modern Physics.)
It is this formalism that Feynman tries hard to explain to you in this little book. For example, on page 43, when Feynman adds all those arrows he is actually integrating (which of course is calculus jargon for summing) over the amplitudes associated with all possible paths the photon could follow in getting from the point S to the point P. Hence the term “path integral formalism.” The alternative term “sum over history” is also easy to understand. Were the rules of quantum physics relevant to affairs on the macroscopic human scale, then all alternative histories, such as Napoleon triumphing at Waterloo or Kennedy dodging the assassin's bullet, would be possible and each history is associated with an amplitude that we are to sum over (“summing over all those little arrows.”)
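Here is a toy numerical sketch of that arrow-adding, written in Python rather than taken from the book: each way for the photon to get from a source S to a detector P (in this made-up setup, by reflecting off some point of a mirror) contributes one unit arrow, a complex number whose angle grows with the path length, and the arrows are simply summed. The geometry and the wavelength are invented parameters, chosen only to make the cancellation visible.

```python
import numpy as np

S = np.array([0.0, 1.0])          # source position above a mirror lying along y = 0
P = np.array([2.0, 1.0])          # detector position
wavelength = 0.05                 # sets how fast each little arrow turns with path length

xm = np.linspace(-1.0, 3.0, 2001)                       # candidate reflection points
path_length = np.hypot(xm - S[0], S[1]) + np.hypot(P[0] - xm, P[1])
angle = 2 * np.pi * path_length / wavelength            # direction of each little arrow
arrows = np.exp(1j * angle)                             # one unit arrow (complex number) per path

print("length of the final arrow:", abs(arrows.sum()))

# Arrows from paths near x = 1 (the least-time path, halfway between S and P)
# point in nearly the same direction and add up; arrows from the ends of the
# mirror spin rapidly as x changes and largely cancel one another.
```

In the book's language, the square of the final arrow's length is proportional to the probability of the event; the dominant contribution comes from the paths near the stationary, least-time one, which is why light appears to travel in straight lines and to reflect at equal angles.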
It turns out that the path integral, regarded as a function of the final state, satisfies the Schrödinger equation. The path integral is essentially the wave function. Hence the path integral formalism is completely equivalent to the Schrödinger and Heisenberg formalisms. In fact, the one textbook that explains this equivalence clearly was written by Feynman and Hibbs. (Yes, Feynman has also authored textbooks, you know, those boring books that actually tell you how to do things efficiently, like "carrying" and "adding". Also yes, you guessed correctly that Feynman's textbooks are often largely written by his co-authors.)
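In standard notation, the statement that the path integral "is essentially the wave function" can be spelled out as follows: the sum over paths defines the propagator K, and folding K with the wave function at an earlier time gives the wave function at a later time, which then satisfies the Schrödinger equation.

```latex
\psi(x_f,t_f) = \int dx_i\; K(x_f,t_f;\,x_i,t_i)\,\psi(x_i,t_i) ,
\qquad
K(x_f,t_f;\,x_i,t_i) =
\int_{x(t_i)=x_i}^{x(t_f)=x_f} \mathcal{D}[x(t)]\;
e^{\,i S[x(t)]/\hbar} .
```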
Since the Dirac-Feynman path integral formalism is completely equivalent to the Heisenberg formalism, it most certainly contains the uncertainty principle. So Feynman's cheerful dismissal of the uncertainty principle on page 56 is a bit of an exaggeration. At the very least one can argue over semantics: what did he mean saying that the uncertainty principle is not "needed"? The real issue is whether or not it is useful.
Theoretical physicists are a notoriously pragmatic lot. They will use whichever method is the easiest. There is none of the mathematicians' petulant insistence on rigor and proof. Whatever works, man!
Given this attitude, you may ask, which of the three formalisms, Schrödinger, Heisenberg, and Dirac-Feynman, is the easiest? The answer depends on the problem. In treating atoms for example, as the master himself admits on page 100, the Feynman diagrams "for these atoms would involve so many straight and wiggly lines and they'd be a complete mess!" The Schrödinger formalism is much easier by a long shot and that is what physicists use. In fact, for most "practical" problems the path integral formalism is almost hopelessly involved, and in some cases downright impossible. I once even asked Feynman about one of these apparently impossible cases and he had no answer. Yet beginning students using the Schrödinger formalism easily solve these apparently impossible cases!
Thus, which formalism is best really depends on the physics problem, so that theoretical physicists in one field, atomic physics for example, might favor one formalism, while those in another, high energy physics for example, might prefer another formalism. Logically then, it may even happen that, as a given field evolves and develops, one formalism may emerge as more convenient than another.
To be specific, let me focus on the field I was trained in, namely high energy or particle physics, which is also Feynman's main field. Interestingly, in particle physics the path integral formalism for a long time ran a distant third in the horse race between the three formalisms. (By the way, nothing says that there could be only three. Some bright young guy could very well come up with a fourth!) In fact, the path integral formalism was so unwieldy for most problems that by the late 1960s it almost fell into complete obscurity. By that time, quantum field theory was almost exclusively taught using the canonical formalism, which is merely another word for the Heisenberg formalism, but the very word "canonical" should tell you which formalism was held in the highest esteem. Just to cite one case history I happen to know well, I never heard of the path integral during my student days, even though I went to two reasonably reputable universities on the east coast for my undergraduate and graduate studies. (I mention the east coast because for all I know it was taught intensively in an eastern enclave in Los Angeles.) It was not until I was a postdoc at the Institute for Advanced Study that I, and most of my colleagues, were first alerted to the path integral formalism by a Russian paper. Even then, various authorities expressed doubts about the formalism, saying for example that it could not account for the chiral anomaly (the reader need not be concerned with what that is.)
Ironically, it was Feynman himself who was responsible for this deplorable state of affairs. What happened was that students easily learned the "funny little diagrams" (such as those on page 116) invented by Feynman. Julian Schwinger once said rather bitterly that "Feynman brought quantum field theory to the masses," by which he meant that any dullard could memorize a few "Feynman rules", call himself or herself a field theorist, and build a credible career. Generations learned Feynman diagrams without understanding field theory. Heavens to Betsy, there are still university professors like that walking around!
But then, almost incredibly (and perhaps this is part of the Feynman mystique that gave his career an almost magical aura), in the early 1970s, starting largely with that Russian paper I just mentioned, the Dirac-Feynman path integral made a roaring comeback, so that it quickly became the dominant way to make progress in quantum field theory.
Notice that this is also what makes Feynman such an extraordinary physicist: the "battle for the hearts and minds" I just described was between the crowd using Feynman diagrams versus a younger crowd using Feynman path integrals. I hasten to add that the word "battle" is a bit strong: nothing prevents a physicist from using both. I did, for one.
I believe that my recent textbook Nutshell is one of the few that employ the path integral formalism right from the beginning, in contrast to older textbooks that favor the canonical formalism. I started the second chapter with a section titled "The professor's nightmare: a wise guy in the class". In the spirit of all those apocryphal stories about Feynman I made up a story about a wise guy student and named him Feynman. The path integral formalism was derived by the rather Zen procedure of introducing an infinite number of screens, drilling an infinite number of holes in each screen, thus ending up with no screen. But as in the Mayan priesthood analogy, after this Feynmanesque derivation, I had to teach the student how to actually calculate ("carrying" and "adding") and for that I had to abandon the apocryphal Feynman and go through the detailed Dirac-Feynman derivation of the path integral formalism, introducing such technicalities as "the insertion of 1 as a sum over a complete set of bras and kets." Technicality is what you do not get by reading Feynman's books!
Incidentally, in case you are wondering, the bras have nothing to do with the philandering Dick Feynman. They were introduced by the staid and laconic Paul Dirac as the left half of a bracket. Dirac is himself a legend: I once sat through an entire dinner with Dirac and others without him uttering more than a few words.
I chuckled a few times as Feynman got in some sly digs at other physicists. For example, on page 132 he dismissively referred to Murray Gell-Mann, the brilliant physicist and Feynman's friendly rival at Caltech, as a "great inventor." Going somewhat against his own cultivated image, he then deplored on page 135 the general decline of physicists' knowledge of Greek, knowing full well that Gell-Mann not only coined the neologism "gluon" but is also an accomplished linguist.
I also liked Feynman's self-deprecatory remarks, which are part and parcel of his image. On page 149, when Feynman spoke of "some fool physicist giving a lecture at UCLA in 1983" some readers might not realize that Feynman was speaking of himself! Although this is indeed part of the image, I find it refreshing as theoretical physicists become increasingly hierarchical and pompous in our time. The Feynman whom I knew, and I emphasize that I did not know him well, would surely not like this trend. He once caused a big fuss trying to resign from the National Academy of Sciences.
Referring back to the three classes of potential readers I described above, I would say that those in (2) and (3) will enjoy this book enormously, but the book was secretly written for those in (1). If you are an aspiring theoretical physicist, I urge you to devour this book with all the fiery hunger you feel in your mind, and then go on to learn from a quantum field theory textbook how to actually "carry".
Surely you can master quantum field theory. Just remember what Feynman said: "What one fool can understand, another can." He was referring to himself, and to you! |
42c509779de477e0 | SciELO - Scientific Electronic Library Online
Brazilian Journal of Physics
Print version ISSN 0103-9733, On-line version ISSN 1678-4448
Braz. J. Phys. vol.35 no.2a São Paulo June 2005
Determinism and a supersymmetric classical model of quantum fields
Hans-Thomas Elze
Dipartimento di Fisica, Via Filippo Buonarroti 2, I-56127 Pisa, Italia*
A quantum field theory is described which is a supersymmetric classical model. Supersymmetry generators of the system are used to split its Liouville operator into two contributions, with positive and negative spectrum, respectively. The unstable negative part is eliminated by a positivity constraint on physical states, which is invariant under the classical Hamiltonian flow. In this way, the classical Liouville equation becomes a functional Schrödinger equation of a genuine quantum field theory. Thus, 't Hooft's proposal to reconstruct quantum theory as emergent from an underlying deterministic system is realized here for a field theory. Quantization is intimately related to the constraint, which selects the part of Hilbert space where the Hamilton operator is positive. This is seen as dynamical symmetry breaking in a suitably extended model, depending on a mass scale which discriminates classical dynamics beneath from emergent quantum mechanical behaviour.
In a recent letter, I discussed anew the (dis)similarity between the classical Liouville equation and the Schrödinger equation [1]. In suitable coordinates both appear quite similar, apart from the characteristic doubling of the classical phase space degrees of freedom as compared to the quantum mechanical case. The Liouville operator is Hermitian in the operator approach to classical statistical mechanics developed by Koopman and von Neumann [2]. However, unlike the case of the quantum mechanical Hamiltonian, its spectrum is generally not bounded from below. Therefore, attempts to find a deterministic foundation of quantum theory - based on a relation between the Koopman-von Neumann and quantum mechanical Hilbert spaces and equipped with the corresponding dynamics - must particularly answer the question of how to construct a stable ground state.
Investigations of these problems are to a large extent motivated by work of 't Hooft, who has argued in favour of such model building, in order to gain a fresh look at the persistent clash between general relativity and quantum theory [3]. Besides, since its very beginnings, there have been speculations about the possibility of deriving quantum theory from more fundamental and deterministic dynamical structures. The discourse running from Einstein, Podolsky and Rosen [4] to Bell [5], and involving numerous successors, is well known, debating the (im)possibility of (local) hidden variables theories.
Much of this debate has come under experimental scrutiny in recent years. No disagreement with quantum theory has been observed in the laboratory experiments on scales very large compared to the Planck scale. However, the feasible experiments cannot rule out the possibility that quantum mechanics emerges as an effective theory only on sufficiently large scales and can indeed be based on more fundamental models.
In various examples, the emergence of a Hilbert space structure and unitary evolution in deterministic classical models has been demonstrated in an appropriate large-scale limit. However, in all cases, it is not trivial to assure that a resulting model qualifies as "quantum" by being built on a well-defined groundstate, i.e., with an energy spectrum that is bounded from below.
A class of particularly simple emergent quantum models comprises systems which classically evolve in discrete time steps [3,6]. Employing the path integral formulation of classical mechanics introduced by Gozzi and collaborators [8], it has been shown that actually a large class of classical models turns into unitary quantum mechanical ones, if the Liouville operator governing the statistical evolution is discretized [7]. However, there remains a large arbitrariness in such discretizations, which one would hope to reduce with the help of consistency or symmetry requirements of a more physical theory.
Furthermore, it has been observed that classical systems with Hamiltonians which are linear in the momenta are also suitable for a reformulation in quantum mechanical terms. In order to provide a groundstate for such systems, a new kind of gauge fixing or constraints implementing "information loss" at a fundamental level have been invoked [3,9,10]. Again, a unifying dynamical principle leading to the necessary truncation of the Hilbert space is still missing.
Various other arguments for deterministically induced quantum features have been proposed recently - see works collected in Part III of Ref. [11], for example, or Refs. [12,13], concerning statistical and/or dissipative systems, quantum gravity, and matrix models.
Many of these attempts to base quantum theory on a classical footing can be seen as variants of the earlier stochastic quantization procedures of Nelson [14] and of Parisi and Wu [15], often accompanied by a problematic analytic continuation from imaginary (Euclidean) to real time, in order to describe evolving systems.
In distinction, one may aim at a truly dynamical understanding of the origin of quantum phenomena. In this work, I present a deterministic field theory from which a corresponding quantum theory emerges by constraining the classical dynamics. In particular, I will extend the globally supersymmetric ("pseudoclassical") one-dimensional model introduced in Ref. [1] to field theory. Thus, a functional Schrödinger equation is obtained with a positive Hamilton operator, involving the standard scalar boson part in the noninteracting case.
The key ingredient is a splitting of the phase space evolution operator, i.e., of the classical Liouville operator, into positive and negative energy contributions. The latter, which would render the to-be-quantum field theory unstable, are eliminated by imposing a "positivity constraint" on the physical states, employing the Koopman-von Neumann approach [2,16]. The splitting of the evolution operator and the subsequent imposition of the constraint make use of the supersymmetry of the classical system, which furnishes the Noether charge densities that are essential here. While, technically, this is analogous to the imposition of the "loss of information" condition in 't Hooft's and subsequent work [3,9,10], it is hoped that the present extension towards interacting fields opens a way to better understand the dynamical origin of such a constraint. While a dissipative information loss mechanism is plausible, alternatively a dynamical symmetry breaking may be considered as the cause.
This paper is organized as follows. In Section II, the (pseudo)classical field theory is introduced and its equations of motion and global supersymmetry are derived. Section III is devoted to the statistical mechanics of an ensemble of such systems, in particular its Hilbert space description and Liouville equation. The Liouville equation is then cast into the form of a functional Schrödinger equation in Section IV. The necessary positivity constraint on physical states is also discussed, constructed, and incorporated there, which turns the emergent Hamiltonian into a positive local operator with a proper quantum mechanical groundstate. In the concluding Section V, I mention some interesting topics for further exploration, especially the relation of the positivity constraint to symmetry breaking.
The following derivation makes new use of "pseudoclassical mechanics" or, rather, of pseudoclassical field theory. These notions were introduced through the work of Casalbuoni and of Berezin and Marinov, who considered a Grassmann variant of classical mechanics, studying the dynamics of spin degrees of freedom classically and, after quantization, in the usual way [17].
Classical mechanics based on Grassmann algebras has more recently found much attention in attempts to better understand the zero-dimensional limit of classical and quantized supersymmetric field theories; see Refs. [18,19] and further references therein.
Let us introduce a "fermionic" field ψ, together with a real scalar field φ. The former is represented by the nilpotent generators of an infinite dimensional Grassmann algebra [20]. They obey:
where x,x' are coordinate labels in Minkowski space. All elements are real.
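The displayed relation has not reproduced in this copy; for nilpotent generators of a Grassmann algebra labeled by points x, x′, the standard relation (written here in conventional notation, which may differ typographically from the paper's) is:

```latex
\psi(x)\,\psi(x') + \psi(x')\,\psi(x) = 0 ,
\qquad\text{in particular}\qquad
\psi(x)^{2} = 0 .
```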
Then, the classical model to be studied is defined by the action:
where dots denote time derivatives, and v(φ) may be a polynomial in φ, for example.
This system apparently has not been studied before, which might be related to the fact that the action is Grassmann odd. However, in line with the present attempt to find a classical foundation of a quantum field theory, no path integral quantization (or other) of the model is intended, which could be obstructed by such a fermionic action.
Introducing canonical momenta,
as usual, one calculates the Hamiltonian,
which turns out to be Grassmann odd as well. Here two useful abbreviations have been introduced: K ≡ -Δ + m² + v(φ) and K′ ≡ K + φ dv(φ)/dφ.
Hamilton's equations of motion for our model follow:
Combining the equations, one obtains:
i.e., the generally nonlinear field equations, where there is only a parametric coupling between the fields φ and ψ, namely of the former to the latter.
These equations are invariant under the global symmetry transformation,
where ε is an infinitesimal real parameter. Associated is the Noether charge:
which is a constant of motion. Similarly, a second global symmetry transformation leaves the system invariant:
with associated conserved Noether charge:
which is the total energy of the classical scalar field, with dV(φ)/dφ ≡ Kφ, appropriately taking care of gradient terms by partial integration.
In the following, it will be useful to introduce the Poisson bracket operation acting on two observables A and B, which generally can be function(al)s of the phase space variables φ, P_φ, ψ, P_ψ:
where all functional derivatives refer to the same space-time argument and act in the indicated direction; for the fermionic variables this direction is meant to coincide with their left/right-derivative character [18].
Note that {A,B} = –{B,A}, if the derivatives of A and B commute, i.e., if in each contributing term at least one of the two is Grassmann even. Furthermore, for any observable A, the usual relation among time derivatives holds:
which embodies Hamilton's equations of motion.
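The displayed relation is the standard one relating the time derivative of an observable to its Poisson bracket with the Hamiltonian; in conventional notation (sign conventions may differ from the paper's) it reads:

```latex
\frac{dA}{dt} = \frac{\partial A}{\partial t} + \{A, H\} ,
```

so that, for observables without explicit time dependence, the bracket with H generates the time evolution, which is the content of Hamilton's equations.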
Naturally, the time independent Hamiltonian of Eq. (4) is conserved by the evolution according to the classical equations of motion.
For the Hamiltonian and Noether charge densities, identified by H ≡ ∫d³x H(x) and C_j ≡ ∫d³x C_j(x), j = 1,2, respectively, one finds a local (equal-time) supersymmetry algebra:
In all calculations, eventually arising coincidence limits are assumed to be smooth, since classical fields are involved. Of course, for any one of the constants of motion, A ∈ {H, C_1, C_2}, one obtains {H, A} = 0.
In the following section, the present analysis is applied to the corresponding phase space representation of an ensemble of systems and, furthermore, developed into an equivalent Hilbert space picture.
A particular example of Eq. (15) is the Liouville equation for a conservative system, such as the model considered in Section II. Considering an ensemble of systems, especially with some distribution over different initial conditions, this equation governs the evolution of its phase space density ρ:
where a convenient factor i has been introduced, and the Liouville operator is defined by:
These equations summarize the classical statistical mechanics of a conservative system, given the Hamiltonian H in terms of the phase space variables.
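Since the two displayed equations have not reproduced in this copy, it may help to recall the standard Koopman-von Neumann form they take for a mechanical system with canonical pair (q, p); the field-theoretic version replaces partial derivatives by functional derivatives, and the sign conventions below are one common choice that may differ from the paper's:

```latex
i\,\partial_t \rho = \hat{L}\,\rho ,
\qquad
\hat{L} \equiv i\,\{H,\,\cdot\,\}
      = i\left(\frac{\partial H}{\partial q}\frac{\partial}{\partial p}
             - \frac{\partial H}{\partial p}\frac{\partial}{\partial q}\right) ,
\qquad
\{A,B\} \equiv \frac{\partial A}{\partial q}\frac{\partial B}{\partial p}
             - \frac{\partial A}{\partial p}\frac{\partial B}{\partial q} .
```

With this convention the Liouville operator is i times a first-order, divergence-free differential operator, which is why it is Hermitian in the Koopman-von Neumann inner product, as used below.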
Next, let us briefly recall the equivalent Hilbert space formulation developed by Koopman and von Neumann [2]. It will be modified here in a way appropriate for the supersymmetric classical field theory in question.
Two postulates are put forth:
• (A) the phase space density functional can be factorized in the form ρ ≡ Ψ*Ψ;
• (B) the Grassmann valued and, in general, complex state functional Ψ itself obeys the Liouville Eq. (20).
Furthermore, the complex valued inner product of such state functionals is defined by:
i.e., by functional integration over all phase space variables (fields). However, due to the presence of Grassmann valued variables, the *-operation which defines the dual of a state functional needs special attention and will be discussed shortly.
The above definitions make sense for functionals which suitably generalize the notion of square-integrable functions. In particular, the functional integrals can be treated rigorously by discretizing the system, properly pairing degrees of freedom.
Given the Hilbert space structure, the Liouville operator of a conservative system has to be Hermitian and the overlap ⟨Ψ|Ψ⟩ is a conserved quantity. Then, the Liouville equation also applies to ρ = |Ψ|², due to its linearity, and ρ may be interpreted as replacing the probability density of before [2]. Naturally, this is needed for meaningful phase space expectation values of observables.
Certainly, one is reminded here of the usual quantum mechanical formalism. In order to expose the striking similarity as well as the remaining crucial difference, further transformations of the functional Liouville equation are useful [1].
A Fourier transformation replaces the momentum P_ψ by a second scalar field; furthermore, a second fermionic variable is defined as P_φ. Thus, Eqs. (20)-(21) yield:
where Ψ is considered as a functional of φ, ψ, and the two new fields, and with the emergent "Hamilton operator":
using the abbreviation f · g ≡ ∫d³x f(x)g(x). Note that the density appearing here is Grassmann even.
While the Eq. (23) strongly resembles a functional Schrödinger equation, several comments must be made here which point out its different character.
First of all, following a linear transformation of the scalar field variables, φ ≡ (σ+κ)/√2 and, for the second scalar field, (σ-κ)/√2, one finds a "bosonic" kinetic energy term:
which is not bounded from below. Therefore, neglecting the Grassmann variables momentarily, the remaining Hermitian part of the Hamiltonian lacks a lowest energy state, which otherwise could qualify as the emergent quantum mechanical groundstate of the bosonic sector.
Secondly, as could be expected, the fermionic sector reveals a similar problem.
The *-operation mentioned before amounts to complex conjugation for a bosonic state functional, analogously to an ordinary wave function in quantum mechanics. However, based on complex conjugation alone, the fermionic part of the Hamiltonian (24) would not be Hermitian.
Instead, a detailed construction of the inner product for functionals of Grassmann valued fields has been presented in Ref. [21]; see also further examples in Refs. [22]. Considering only the noninteracting case with K′ = K, i.e., with v(φ) = 0 in Eq. (2), the construction of Floreanini and Jackiw can be directly applied here. Then, the Hermitian conjugate of ψ is represented by the functional derivative with respect to ψ, and similarly for the second fermionic variable. Furthermore, a rescaling gives the fields the same dimensionality. Together, this suffices to render Hermitian the fermionic part of the Hamiltonian (24), which becomes:
In the presence of interactions, with K′ ≠ K, additional modifications are necessary and will be considered elsewhere. In any case, although the fermionic part must be (made) Hermitian, its eigenvalues generally will not have a lower bound either.
To summarize, the emergent Hamiltonian tends to be unbounded from below, thus lacking a groundstate. This generic difficulty has been encountered in various attempts to build deterministic quantum models, i.e., classical models which can simultaneously be seen as quantum mechanical ones [3,6,7,9,10]. For the present case, this will be discussed and resolved in Section IV.
To conclude this section, equal-time operator relations for the interacting case are derived here, which are related to the supersymmetry algebra of Eqs. (16)-(19). This is achieved by Fourier transformation of appropriate Poisson brackets, similarly as with the emergent Hamiltonian in Eq. (24) above.
To begin with, the operators corresponding to the Noether densities will be useful. Using Eq. (11) and the identification of the second fermionic variable with P_φ, as before, one obtains:
Similarly, one obtains:
which is related to Eq. (13).
Both operators are Grassmann odd and obey:
for j = 1,2. Therefore, they are nilpotent, i.e., their squares vanish. This should be compared to Eq. (19), as well as the vanishing commutator:
where [Â, B̂] ≡ ÂB̂ - B̂Â. Thus, the emergent theory is local, as expected.
It should be remarked that in all calculations of (anti)commutation relations eventually necessary partial integrations, i.e. shifting of gradients, are justified by smearing with suitable test functions and integrating.
Further relations that correspond to Jacobi identities on the level of the Poisson brackets are interesting. Generally, one has to be careful about extra signs that arise due to the Grassmann valued quantities, as compared to more familiar ones related to real or complex variables [18]. Straightforward calculation gives:
cf. Eqs. (16)-(18); the extra factor i must be attributed to the Fourier transformation that enters between the phase space functions before and the operators here.
Finally, it is noteworthy that a copy of the above operator algebra arises if one exchanges the roles of ψ and the second fermionic variable (together with the corresponding functional derivatives) in these operators. This yields a second set of nilpotent operators:
with a convenient overall sign introduced in the latter definition. They fulfill the same (anti)commutation relations as in Eqs. (30)-(33).
In addition, the following local operators also commute with the Hamiltonian density:
with vanishing commutators. These operators are not nilpotent; their square, though, is highly singular.
One may complete these considerations with the full set of operators generating the ordinary space-time symmetries of our model. However, they are not believed to play a special role for the considerations of the following section. There, the no-groundstate problem of the emergent Hamiltonian, Eq. (24), will be addressed.
Following Eq. (24), it has been pointed out that the emergent Hamiltonian lacks a proper groundstate, i.e., its spectrum is not bounded from below. This prohibits interpreting the model, as it stands, as a quantum mechanical one, despite close formal similarities.
In order to overcome this difficulty, the general strategy is to find a positive definite local operator that commutes with the Hamiltonian density. Then, the Hamiltonian can be split into contributions with positive and negative spectrum:
Here F can be any even function with the property:
for a, b ∈ R.
The simplest example is F(a) ≡ a², G ≡ 4. With this, the splitting of the Hamiltonian density is explicitly given by:
i.e., each of the two parts is the square of the sum, respectively the difference, of the Hamiltonian density and the positive operator, divided by four times that operator. A quartic polynomial could be used instead, etc. In the absence of further symmetry requirements, or other constraints, from the model under consideration, the simplest splitting will do. It will allow us to obtain a free quantum field theory, in particular, as the leading part of the relevant Hamilton operator.
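Because the displayed splitting has not reproduced in this copy, it is worth spelling out the quadratic example in conventional notation. Writing H(x) for the Hamiltonian density and G(x) for the commuting, positive definite local operator (the paper's own symbol for the latter is not recoverable here, so G is a stand-in label), one has:

```latex
\hat{H}_{\pm}(x) \equiv \frac{\bigl(\hat{H}(x) \pm \hat{G}(x)\bigr)^{2}}{4\,\hat{G}(x)} ,
\qquad
\hat{H}_{+}(x) - \hat{H}_{-}(x)
  = \frac{4\,\hat{H}(x)\,\hat{G}(x)}{4\,\hat{G}(x)} = \hat{H}(x) ,
\qquad
\hat{H}_{\pm}(x) \ge 0 .
```

Since the two operators commute by assumption, both parts are manifestly nonnegative and their difference reproduces the Hamiltonian density; the positivity constraint introduced below then removes the negative contribution on physical states.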
Here, as in the following, a regularization is necessary, in order to give a meaning particularly to some of the squared operators that will keep appearing.
Finally, the spectrum of the Hamiltonian is made bounded from below by imposing the "positivity constraint":
This constraint can be enforced as an initial condition, for example, and is preserved by the evolution, since the two parts of the splitting commute with the Hamiltonian density, by construction. In this way, those physical states of the system are selected for which a quantum mechanical groundstate exists.
Such a constraint selecting the physical part of the emergent Hilbert space has been earlier discussed in the models of Refs. [3,9,10]. It has been interpreted by 't Hooft as "information loss" at the fundamental level where quantum mechanics may arise from a deterministic theory. However, it seems also quite possible to relate this to a dynamical symmetry breaking phenomenon instead, cf. Section V.
For our field theory, the noninteracting and interacting cases shall now be studied separately in more detail.
A. The noninteracting case
As mentioned before, with v(φ) = 0 in Eq. (2), and therefore K′ = K = -Δ + m², the rescaling mentioned above is useful, and one may consider the set of operators:
with the density from Eq. (27). These operators fulfill the same operator algebra as discussed in the previous section.
Furthermore, let us consider the Hermitian conjugate operators, in this case based on the functional-derivative representation of the conjugate fields [21]:
They commute with the Hermitian density, and one finds that the corresponding mixed anticommutator vanishes, together with the adjoint relation.
Then, also the following Hermitian operators commute with the Hamiltonian density:
These operators are particularly interesting, since they present, in some sense, the "square-root of the harmonic oscillator":
or, rather, since their sum amounts to the Hamiltonian density of two free bosonic quantum fields.
It seems natural now to choose the positive definite local operator of Eq. (40) as:
where ξ is a dimensionless parameter. This results in the operators of definite sign:
cf. Eqs. (37)-(40).
Setting ξ = 2 and performing again the linear transformation φ ≡ (σ+κ)/√2 and, for the second scalar field, (σ-κ)/√2, previously mentioned after Eqs. (24)-(25), here instead yields the Hamiltonian density:
with the density from Eq. (27), and where, of course, the linear transformation has also been performed in the corresponding terms. One observes that the only trace of the previous instability is now relegated to this last term, which still involves the scalar field κ. The local interactions present in this term certainly have a nonstandard form. Additional parameters playing the role of coupling constants could be introduced by a more complicated splitting of the emergent Hamiltonian, see Eqs. (37)-(40), or a different choice for the positive operator.
However, the Hamilton operator H_+ has a positive spectrum, by construction, and the leading terms are those of a free bosonic quantum field together with a fermion doublet in the Schrödinger representation. They dominate at low energy.
Similarly, the constraint operator density becomes:
A certain symmetry with Eq. (53) is obvious. It suggests thinking of the elimination of part of the Hilbert space, Eq. (41), as a dynamical symmetry breaking effect. This point will be briefly addressed in the concluding section.
B. The interacting case
In the interacting case, one has v(φ) ≠ 0 in Eq. (2), K ≡ -Δ + m² + v(φ), and K′ ≡ K + φ dv(φ)/dφ. While the operator algebra of Section III is available, it is difficult to find the corresponding generalization of the "square-root of the harmonic oscillator" operators of Eqs. (47)-(48).
The latter were most useful, however, in order to obtain a positive definite operator that commutes with the emergent Hamiltonian and, with this, to achieve its splitting into parts with positive and negative spectrum, as in Eqs. (37)-(40). The vanishing commutator here is important, since it assures that this splitting is invariant under evolution of the system.
Furthermore, said operators are particularly interesting if the resulting bounded Hamilton operator H_+ is to contain leading standard field theory terms, even though modified by additions as in Eq. (53), for example.
Following these remarks, one could try to construct such operators perturbatively, i.e., by deforming the operators and including, step by step, increasing orders in the interaction v.
A quite different approach might be to choose:
where the prefactor is a parameter with dimensions of energy per unit volume. (Note that replacing the Hamiltonian density squared with the total angular momentum density squared would introduce a constant with dimensions of action per unit area.) This operator commutes with the Hamiltonian density and will lead to a positive H_+. In fact, the resulting contributions to the Hamiltonian are in this case simply given by:
Now, imposing the constraint that H_- annihilates the state functional Ψ, one finds that on physical states the bounded Hamilton operator gives:
with the total operators defined by integrating the corresponding densities over space, ∫d³x. A surprisingly restrictive result.
To be sure, if one wants to connect the Hamilton operator H_+ of Eq. (57) to familiar quantum field theories, the difficult task of finding "square-root of the harmonic oscillator" operators reappears. Here one has to find an underlying classical model for which the emergent Hamiltonian, cf. Eqs. (24)-(25), contains terms which are linear in such operators.
The work presented here touches a number of conceptual issues surrounding quantum theory. The interpretation of the measurement process and of the "collapse of the wave function", in particular, must figure prominently in this context, together with the "quantum indeterminism" and the wider philosophical implications of the algorithmic rules comprising quantum theory as a whole [23]. It is left for future studies to find out how a deterministic framework, such as the one further elaborated here, allows us to see them in a new light.
Deterministic models which simultaneously and consistently can be described as quantum mechanical ones present a challenge to common wisdom concerning the meaning, foundations, and limitations of quantum theory. Main aspects of the present work on such a model taken from field theory can be summarized as follows.
A fairly standard description of the dynamics in phase space and its conversion to an operators-in-Hilbert-space formalism à la Koopman and von Neumann [2] yield a wave functional equation which is surprisingly similar to the functional Schrödinger equation of quantum field theory. However, the emergent "Hamilton operator" of this picture, generically, lacks a groundstate, which corresponds to the spectrum not being bounded from below. In order to arrive at a proper quantum theory with a stable groundstate, parts of the Hilbert space have to be removed by a positivity constraint which is preserved by the Hamiltonian flow.
In the present example, this has been discussed based on simple supersymmetry properties of the underlying classical model. The important role of "square-root of the harmonic oscillator" operators in constructing the constraint operator has been pointed out, and they have been constructed in the limit of classically noninteracting scalar and fermionic fields, the latter being represented by nilpotent Grassmann valued variables. Several comments on the interacting case have been made, where they may be constructed in perturbation theory. In particular, these operators promise to be important in emergent quantum models that smoothly connect to standard field theories with leading quadratic kinetic energy terms.
Here I should like to conclude with a more speculative remark concerning the dynamical origin of the positivity constraint, which has been introduced and interpreted as a "loss of information" at the fundamental dynamical level earlier [3,9,10]. The latter anticipates a still unknown, possibly dissipative information loss mechanism in the classical theory beneath, such as due to an unavoidable coarse-graining in the description of some deterministic chaotic dynamics. This would turn the system under study into an open system.
However, the discussion in Section IV indicates a complementary point of view. There is a great deal of symmetry between the operators H_+ and H_-, which are responsible for the evolution of the system as well as for the selection of the physical states. In fact, since the emergent functional wave equation is linear in the time derivative, positive and negative parts of the spectrum of the emergent Hamiltonian, see Eqs. (24)-(25), can be turned into each other by reversing the direction of time. Correspondingly, the roles of H_+ and H_- can be exchanged.
This suggests that giving preference to one over the other in determining the physical states may be a contingent property of the system. It typically occurs in situations where a symmetry is dynamically broken.
Let us consider an extension of the present model which schematically incorporates such an effect. Introducing a local "order parameter" Ô, take the new Hamilton operator density:
with [H_±(x), Ô(x′)] = 0 and, for example, Ô defined as a ratio of local operators of the model. The positive operators H_± are as defined in Eqs. (37)-(40), the local operator entering there is positive definite, cf. Section IV, and an energy density parameter is introduced as before. All operators here commute.
Therefore, the eigenstates can be ordered according to the eigenvalues of these commuting operators.
For large values of the order parameter, at high energy, loosely speaking, the symmetry is restored and the new Hamilton operator asymptotically approaches the unbounded combination of H_+ and H_-. In this regime, the system behaves classically, corresponding to an emergent Hamilton operator with unbounded spectrum. Here, the roles of H_+ and H_- could approximately be interchanged by changing the direction of time.
Conversely, for small values of the order parameter, one qualitatively finds that the spectrum of the new Hamilton operator is bounded from below. This result should be compared with Eqs. (51)-(54), for example, and particularly with Eq. (53). Here the system behaves quantum mechanically. Interestingly, the backbending of the negative branch of the spectrum to positive values has replaced the imposition of the positivity constraint, Eq. (41).
The precise nature of the transition between classical and quantum regimes, which is regulated by this parameter, depends on how and which order parameter comes into play. Due to its nonlinearity, which introduces higher order functional derivatives, it modifies the underlying phase space dynamics, see Eqs. (20)-(24). It will be interesting to further study such corrections, which must contribute as additional force terms, depending on higher powers of the field momentum, for example, to the classical Liouville operator.
In contrast to a possible "loss of information" mechanism, presently all operators involved are Hermitian and closely related to the symmetry properties of the system.
Such a symmetry breaking mechanism might be responsible for the emergent quantization also in other cases than the (pseudo)classical field theory presented here. Besides this, models that incorporate interacting fermions and gauge fields are an important topic for future study. Furthermore, time reparametrization or general diffeomorphism invariance should naturally be most interesting to consider in the framework of deterministic quantum models.
I wish to thank A. DiGiacomo for many discussions and kind hospitality at the Dipartimento di Fisica in Pisa, and G. Vitiello, M. Blasone, L. Lusanna, E. Sorace, and M. Ciafaloni for helpful remarks and discussions during visits to the Dipartimento di Fisica in Salerno and in Firenze.
[1] H.-T. Elze, Phys. Lett. A 335, 258 (2005).
[2] B.O. Koopman, Proc. Nat. Acad. Sci. (USA) 17, 315 (1931); J. von Neumann, Ann. Math. 33, 587 (1932); ibid. 33, 789 (1932).
[3] G. 't Hooft, J. Stat. Phys. 53, 323 (1988); Quantum Mechanics and Determinism, in: Proc. of the Eighth Int. Conf. on "Particles, Strings and Cosmology", ed. by P. Frampton and J. Ng (Rinton Press, Princeton, 2001), p. 275; hep-th/0105105; see also: Determinism Beneath Quantum Mechanics, quant-ph/0212095.
[4] A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
[5] J.S. Bell, "Speakable and Unspeakable in Quantum Mechanics" (Cambridge U. Press, Cambridge, 1987).
[6] H.-T. Elze and O. Schipper, Phys. Rev. D 66, 044020 (2002); H.-T. Elze, Phys. Lett. A 310, 110 (2003).
[7] H.-T. Elze, Physica A 344, 478 (2004); Quantum Mechanics and Discrete Time from "Timeless" Classical Dynamics, in: Ref. [], p. 196; quant-ph/0306096.
[8] E. Gozzi, M. Reuter, and W.D. Thacker, Phys. Rev. D 40, 3363 (1989); D 46, 757 (1992).
[9] M. Blasone, P. Jizba, and G. Vitiello, Phys. Lett. A 287, 205 (2001); M. Blasone, E. Celeghini, P. Jizba, and G. Vitiello, Phys. Lett. A 310, 393 (2003).
[10] M. Blasone, P. Jizba, and H. Kleinert, Phys. Rev. A, in press, quant-ph/0409021; Braz. J. Phys. 35, 497 (2005).
[11] "Decoherence and Entropy in Complex Systems", ed. by H.-T. Elze, Lecture Notes in Physics, Vol. 633 (Springer-Verlag, Berlin Heidelberg New York, 2004).
[12] L. Smolin, Matrix Models as Non-Local Hidden Variables Theories, hep-th/0201031; F. Markopoulou and L. Smolin, Quantum Theory from Quantum Gravity, gr-qc/0311059.
[13] S.L. Adler, Quantum Mechanics as an Emergent Phenomenon: The Statistical Dynamics of Global Unitary Invariant Matrix Models as the Precursors of Quantum Field Theory (Cambridge U. Press, Cambridge, 2005).
[14] E. Nelson, Phys. Rev. 150, 1079 (1966).
[15] G. Parisi and Y.S. Wu, Sci. Sin. 24, 483 (1981); P.H. Damgaard and H. Hüffel, Phys. Rep. 152, 227 (1987).
[16] Closer inspection shows that the constraint considered in Ref. [1] is not sufficient to turn the classical model there into a quantum mechanical one. Presently, this is solved differently in Section IV.
[17] R. Casalbuoni, Nuovo Cim. 33A, 389 (1976); F.A. Berezin and M.S. Marinov, Ann. Phys. (NY) 104, 336 (1977).
[18] P.G.O. Freund, "Introduction to Supersymmetry" (Cambridge U. Press, Cambridge, 1986); B. DeWitt, "Supermanifolds", 2nd ed. (Cambridge U. Press, Cambridge, 1992).
[19] N.S. Manton, J. Math. Phys. 40, 736 (1999); G. Junker, S. Matthiesen, and A. Inomata, Classical and quasi-classical aspects of supersymmetric quantum mechanics, hep-th/95102230.
[20] Grassmann algebras and analysis over supernumbers are presented in detail by DeWitt [18].
[21] R. Floreanini and R. Jackiw, Phys. Rev. D 37, 2206 (1988).
[22] C. Kiefer and A. Wipf, Ann. Phys. (NY) 236, 241 (1994); A. Duncan, H. Meyer-Ortmanns, and R. Roskies, Phys. Rev. D 36, 3788 (1987).
[23] "Quantum Theory and Beyond", ed. by T. Bastin (Cambridge U. Press, Cambridge, 1971); "Quantum Theory and Measurement", ed. by J.A. Wheeler and W.H. Zurek (Princeton U. Press, Princeton, 1980).
Received on 23 February, 2005
* Permanent address: Instituto de Física, Universidade Federal do Rio de Janeiro C.P. 68.528, 21941-972 Rio de Janeiro, RJ, Brazil
|
fef38f1cc681e9ee | Two-Soliton Collision for the Gross-Pitaevskii Equation in the Causal Interpretation
Under certain simplified assumptions (small amplitudes, propagation in one direction, etc.), various dynamical equations can be solved, for example, the well-known nonlinear Schrödinger equation (NLS), also known as the Gross–Pitaevskii equation. It has a soliton solution, whose envelope does not change in form over time. Soliton waves have been observed in optical fibers, optical solitons being caused by a cancellation of nonlinear and dispersive effects in the medium. When solitons interact with one another, their shapes do not change, but their phases shift. The two-soliton collision shows that the interaction peak is always greater than the sum of the individual soliton amplitudes. The causal interpretation of quantum theory is a nonrelativistic theory picturing point particles moving along trajectories, here governed by the nonlinear Schrödinger equation. It provides a deterministic description of quantum motion by assuming that besides the classical forces, an additional quantum potential acts on the particle and leads to a time-dependent quantum force. When the quantum potential in the effective potential is negligible, the equation for the force reduces to the standard Newtonian equations of classical mechanics. In the two-soliton case, only two of the Bohmian trajectories correspond to reality; all the others represent possible alternative paths depending on the initial configuration. The trajectories of the individual solitons show that in the two-soliton collision, amplitude and velocity are exchanged, rather than the solitons passing through one another. On the left you can see the position of the particles, the wave amplitude (blue), and the velocity (green). On the right the graphic shows the wave amplitude and the complete trajectories in position-time space.
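For reference, the textbook ingredients of the causal (de Broglie-Bohm) description used here can be written as follows, in units with ħ = m = 1; the Demonstration's own notation did not survive in this copy, so this is a standard-form supplement. Writing the wave as an amplitude times a phase factor, the particle velocity is the gradient of the phase and the quantum potential is built from the amplitude:

```latex
\psi = R\,e^{iS} ,
\qquad
\frac{dx}{dt} = \partial_x S = \operatorname{Im}\frac{\partial_x \psi}{\psi} ,
\qquad
Q = -\,\frac{1}{2}\,\frac{\partial_x^{2} R}{R} .
```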
With a potential determined by the density of the wavefunction, the Gross–Pitaevskii equation is the nonlinear version of the Schrödinger equation; the density is the product of the wavefunction with its complex conjugate.
The exact two-soliton solution can be written in closed form.
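Since the displayed formulas have not reproduced in this copy, here is one standard normalization of the focusing nonlinear Schrödinger (Gross–Pitaevskii) equation together with its one-soliton solution; the Demonstration's own sign and scaling conventions may differ, and the exact two- and multi-soliton solutions in the same normalization follow from the inverse scattering transform:

```latex
i\,\partial_t \psi + \tfrac{1}{2}\,\partial_x^{2}\psi + |\psi|^{2}\psi = 0 ,
\qquad
\psi(x,t) = A\,\operatorname{sech}\!\bigl(A(x - v t - x_0)\bigr)\,
\exp\!\Bigl(i\bigl[v x + \tfrac{1}{2}\bigl(A^{2} - v^{2}\bigr)t + \varphi_0\bigr]\Bigr) .
```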
There are two ways to derive the velocity equation: (1) directly from the continuity equation, where the motion of the particle is governed by the current flow; and (2) from the eikonal representation of the wave as an amplitude times a phase factor, where the gradient of the phase is the particle velocity. Therefore, the quantum wave guides the particles. The origin of the motion of the quantum particle is the effective potential, which is the quantum potential plus the external potential. The effective potential is a generalization of the quantum potential in the case of the Schrödinger equation for a free quantum particle, where the external potential vanishes. The system is time reversible. In the source code the quantum potential is deactivated, because of the excessive computation time.
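The Demonstration's Mathematica source is not reproduced here, but the construction just described (evolve the wave with the nonlinear Schrödinger equation, then integrate particle trajectories along the velocity field given by the gradient of the phase) can be sketched in Python. Everything below is an illustrative reconstruction: the normalization matches the formula quoted above, the soliton amplitudes, speeds, grid, and time step are made-up parameters, and the two-soliton initial state is approximated by adding two well-separated solitons rather than by using the exact closed-form solution.

```python
import numpy as np

# Focusing NLS / Gross-Pitaevskii equation in the normalization
#   i psi_t + 0.5 psi_xx + |psi|^2 psi = 0   (hbar = m = 1)
L, N = 40.0, 1024
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spectral wavenumbers
dt, steps = 0.002, 8000                          # total time T = 16 (illustrative values)

def soliton(x, A, v, x0):
    """Single sech soliton of amplitude A and speed v centred at x0."""
    return A / np.cosh(A * (x - x0)) * np.exp(1j * v * x)

# Approximate two-soliton initial state: two well-separated solitons added together.
psi = soliton(x, 1.0, 1.0, -8.0) + soliton(x, 1.5, -1.0, 8.0)

def step(psi, dt):
    """One Strang split step: half nonlinear kick, full linear drift, half kick."""
    psi = psi * np.exp(1j * np.abs(psi) ** 2 * dt / 2)
    psi = np.fft.ifft(np.exp(-1j * 0.5 * k ** 2 * dt) * np.fft.fft(psi))
    psi = psi * np.exp(1j * np.abs(psi) ** 2 * dt / 2)
    return psi

# One Bohmian particle per soliton, guided by  dx/dt = Im(psi_x / psi).
traj = np.array([-8.0, 8.0])
history = [traj.copy()]
for _ in range(steps):
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))                     # psi_x on the grid
    vel = np.imag(dpsi * np.conj(psi)) / (np.abs(psi) ** 2 + 1e-12)  # regularized Im(psi_x/psi)
    traj = traj + dt * np.interp(traj, x, vel)                       # Euler step for the particles
    history.append(traj.copy())
    psi = step(psi, dt)

print("final particle positions:", history[-1])
```

The split-step (Strang) scheme is a standard choice for this equation because the nonlinear and the linear parts are each exactly solvable on their own; plotting the stored trajectory history against time, together with the wave amplitude, gives a picture of the kind described above.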
J. P. Gordon, "Interaction Forces among Solitons in Optical Fibers", Optics Letters, 8(11), 1983 pp. 596–598.
|
3488c378f65fbe1b | Using Swedenborg to Understand the Quantum World III: Thoughts and Forms
Swedenborg Foundation
In this series of posts, Swedenborg’s theory of correspondences has been shown to have interesting applications for helping us to better understand the quantum world.
In part I, we learned that our mental processes occur at variable finite intervals and that they consist of desire, or love, acting by means of thoughts and intentions to produce physical effects. We in turn came to see the correspondential relationship between these mental events and such physical events that occur on a quantum level: in both cases, there will be time gaps between the events leading up to the physical outcome. So since we find that physical events occur in finite steps rather than continuously, we are led to expect a quantum world rather than a world described by classical physics.
In part II, we saw that the main similarity between desire (mental) and energy (physical) is that they both persist between events, which means that they are substances and therefore have the capability, or disposition, for action or interaction within the time gaps between those events.
Now we come to the question of how it is that these substances persist during the intervals between events. The events are the actual selection of what happens, so after the causing event and before the resultant effect, what occurs is the exploring of “possibilities for what might happen.” With regard to our mental processes, this exploration of possibilities is what we recognize as thinking. Swedenborg explains in detail how this very process of thinking is the way love gets ready to do things (rather than love being a byproduct of the thinking process, as Descartes would require):
Everyone sees that discernment is the vessel of wisdom, but not many see that volition is the vessel of love. This is because our volition does nothing by itself, but acts through our discernment. It first branches off into a desire and vanishes in doing so, and a desire is noticeable only through a kind of unconscious pleasure in thinking, talking, and acting. We can still see that love is the source because we all intend what we love and do not intend what we do not love. (Divine Love and Wisdom §364)
When we realize we want something, the next step is to work out how to do it. We first think of the specific objective and then of all the intermediate steps to be taken in order to achieve it. We may also think about alternative steps and the pros and cons of following those different routes. In short, thinking is the exploration of “possibilities for action.” As all of this thinking speaks very clearly to the specific objective at hand, it can be seen as supporting our motivating love, which is one of the primary functions of thought. A focused thinking process such as this can be seen, simplified, in many kinds of animal activities.
With humans, however, thinking goes beyond that tight role of supporting love and develops a scope of its own. Not only do our thoughts explore possibilities for action, but they also explore the more abstract “possibilities for those possibilities.” Not only do we think about how to get a drink, but we also, for example, think about the size of the container, how much liquid it contains, and how far it is from where we are at that moment! When we get into such details as volume and distance, we discover that mathematics is the exploration of “possibilities of all kinds,” whether they are possibilities for action or not. So taken as a whole, thought is the exploration of all the many possibilities in the world, whether or not they are for action and even whether or not they are for actual things.
For physical things (material objects), this exploration of possibilities is spreading over the possible places and times for interactions or selections. Here, quantum physics has done a whole lot of work already. Physicists have discovered that the possibilities for physical interactions are best described by the wave function of quantum mechanics. The wave function describes all the events that are possible, as well as all the propensities and probabilities for those events to happen. According to German physicist Max Born, the probability of an event in a particular region is found by integrating the squared magnitude of the wave function over that region. Energy is the substance that persists between physical events, and all physical processes are driven by energy. In quantum mechanics, this energy is what is responsible for making the wave function change through time, as formulated by the Schrödinger equation, the fundamental equation of quantum physics.[1]
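As a concrete, purely illustrative reading of Born's statement (my own sketch, not from the post), the snippet below builds a normalized Gaussian wave packet and integrates |ψ|² over an interval to get the probability of finding the particle there; the packet width and the interval are arbitrary choices.

```python
# Born's rule in one line: P(a <= x <= b) is the integral of |psi|^2 over [a, b].
import numpy as np

x = np.linspace(-10, 10, 4001)
sigma = 1.0                                   # arbitrary width of a Gaussian wave packet
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

density = np.abs(psi) ** 2                    # probability density |psi|^2
print(np.trapz(density, x))                   # total probability, ~1 (normalization)

a, b = 0.0, 2.0                               # an arbitrary region
inside = (x >= a) & (x <= b)
print(np.trapz(density[inside], x[inside]))   # P(a <= x <= b), ~0.48
```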
Returning now to Swedenborg’s theory of correspondences, we recognize that the physical counterpart of thoughts in the mind is the shape of wave functions in quantum physics. In Swedenborg’s own words:
When I have been thinking, the material ideas in my thought have presented themselves so to speak in the middle of a wave-like motion. I have noticed that the wave was made up of nothing other than such ideas as had become attached to the particular matter in my memory that I was thinking about, and that a person’s entire thought is seen by spirits in this way. But nothing else enters that person’s awareness then apart from what is in the middle which has presented itself as a material idea. I have likened that wave round about to spiritual wings which serve to raise the particular matter the person is thinking about up out of his memory. And in this way the person becomes aware of that matter. The surrounding material in which the wave-like motion takes place contained countless things that harmonized with the matter I was thinking about. (Arcana Coelestia §6200)[2]
Many people who have tried to understand the significance of quantum physics have noted that the wave function could be described as behaving like a non-spatial realm of consciousness. Some of these people have even wanted to say that the quantum wave function is a realm of consciousness, that physics has revealed the role of consciousness in the world, or that physics has discovered quantum consciousness.[3] However, using Swedenborg’s ideas to guide us, we can see that the wave function in physics corresponds to the thoughts in our consciousness. They have similar roles in the making of events: both thoughts and wave functions explore the “possibilities, propensities, and probabilities for action.” They are not the same, but they instead follow similar patterns and have similar functions within their respective realms. Thoughts are the way that desire explores the possibilities for the making of intentions and their related physical outcomes, and wave functions are the way that energy explores the possibilities for the making of physical events on a quantum level.
The philosophers of physics have been puzzled for a long time about the substance of physical things,[4] especially that of things in the quantum realm. From our discussion here, we see that energy (or propensity) is also the substance of physical things in the quantum realm and that the wave function, then, is the form that such a quantum substance takes. The wave function describes the shape of energy (or propensity) in space and time. We can recognize, as Aristotle first did, that a substantial change has occurred when a substance comes into existence by virtue of the matter of that substance acquiring some form.[5] That still applies to quantum mechanics, we now find, even though many philosophers have been desperately constructing more extreme ideas to try to understand quantum objects, such as relationalism[6] or the many-worlds interpretation.[7]
So what, then, is this matter of energy (desire, or love)? Is it from the Divine? Swedenborg would say as much:
It is because the very essence of the Divine is love and wisdom that we have two abilities of life. From the one we get our discernment, and from the other volition. Our discernment is supplied entirely by an inflow of wisdom from God, while our volition is supplied entirely by an inflow of love from God. Our failures to be appropriately wise and appropriately loving do not take these abilities away from us. They only close them off; and as long as they do, while we may call our discernment “discernment” and our volition “volition,” essentially they are not. So if these abilities really were taken away from us, everything human about us would be destroyed—our thinking and the speech that results from thought, and our purposing and the actions that result from purpose. We can see from this that the divine nature within us dwells in these two abilities, in our ability to be wise and our ability to love. (Divine Love and Wisdom §30)
When seeing things as made from substance—from the energy (or desire) that endures between events and thereby creates further events—we note that people will tend to speculate about “pure love” or “pure energy”: a love or energy without form that has no particular objective but can be used for anything. But this cannot be. In physics, there never exists any such pure energy but only energy in specific forms, such as the quantum particles described by a wave function. Any existing physical energy must be the propensity for specific kinds of interactions, since it must exist in some form. Similarly, there never exists a thing called “pure love.” The expression “pure love” makes sense only with respect to the idea of innocent, or undefiled, love, not to love without an object. Remember that “our volition [which is the vessel of love] does nothing by itself, but acts through our discernment.”
[1] Wikipedia, https://en.wikipedia.org/wiki/Schrödinger_equation.
[2] Secrets of Heaven is the New Century Edition translation of Swedenborg’s Arcana Coelestia.
[3] See, for example,
[4] Howard Robinson, “Substance,” Stanford Encyclopedia of Philosophy,
[5] Thomas Ainsworth, “Form vs. Matter,” Stanford Encyclopedia of Philosophy,
[6] Michael Epperson, “Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse,” Process Studies 38.2 (2009): 339–366.
[7] J. A. Barrett, “Quantum Worlds,” Principia 20.1 (2016): 45–60.
|
ed94ffba46cec983 | The imaginary part of quantum mechanics really exists!
For almost a century, physicists have been intrigued by the fundamental question: why are complex numbers, that is, numbers containing a component with the imaginary number i, so important in quantum mechanics? It was usually assumed that they are only a mathematical trick to facilitate the description of phenomena and that only results expressed in real numbers have a physical meaning. However, a Polish-Chinese-Canadian team of researchers has proved that the imaginary part of quantum mechanics can be observed in action in the real world.
We need to significantly reconstruct our naive ideas about the ability of numbers to describe the physical world. Until now, it seemed that only real numbers were related to measurable physical quantities. However, research conducted by the team of Dr. Alexander Streltsov from the Centre for Quantum Optical Technologies (QOT) at the University of Warsaw with the participation of scientists from the University of Science and Technology of China (USTC) in Hefei and the University of Calgary, found quantum states of entangled photons that cannot be distinguished without resorting to complex numbers. Moreover, the researchers also conducted an experiment confirming the importance of complex numbers for quantum mechanics. Articles describing the theory and measurements have just appeared in the journals Physical Review Letters and Physical Review A.
“In physics, complex numbers were considered to be purely mathematical in nature. It is true that although they play a basic role in quantum mechanics equations, they were treated simply as a tool, something to facilitate calculations for physicists. Now, we have theoretically and experimentally proved that there are quantum states that can only be distinguished when the calculations are performed with the indispensable participation of complex numbers,” explains Dr. Streltsov.
Complex numbers are made up of two components, real and imaginary. They have the form a + bi, where the numbers a and b are real. The bi component is responsible for the specific features of complex numbers. The key role here is played by the imaginary number i, i.e. the square root of -1.
There is nothing in the physical world that can be directly related to the number i. If there are 2 or 3 apples on a table, this is natural. When we take one apple away, we can speak of a physical deficiency and describe it with the negative integer -1. We can cut the apple into two or three sections, obtaining the physical equivalents of the rational numbers 1/2 or 1/3. If the table is a perfect square, its diagonal will be the (irrational) square root of 2 multiplied by the length of the side. At the same time, with the best will in the world, it is still impossible to put i apples on the table.
The surprising career of complex numbers in physics is related to the fact that they can be used to describe all sorts of oscillations much more conveniently than with the use of popular trigonometric functions. Calculations are therefore carried out using complex numbers, and then at the end only the real numbers in them are taken into account.
Compared to other physical theories, quantum mechanics is special because it has to describe objects that can behave like particles under some conditions, and like waves in others. The basic equation of this theory, taken as a postulate, is the Schrödinger equation. It describes changes in time of a certain function, called the wave function, which is related to the probability distribution of finding a system in a specific state. However, the imaginary number i openly appears next to the wave function in the Schrödinger equation.
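One way to see why that i matters (a toy illustration of mine, not from the article): with the i in place, Schrödinger evolution preserves the total probability; drop it, and the same expression no longer conserves the norm of the state.

```python
# The i in psi' = -i H psi keeps the evolution unitary (norm-preserving);
# the real-valued analogue psi' = -H psi does not.
import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])           # a simple two-level Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)

eigvals, eigvecs = np.linalg.eigh(H)

def evolve(factor, t=1.0):
    # apply exp(factor * H * t) to psi0 via the spectral decomposition of H
    return eigvecs @ (np.exp(factor * eigvals * t) * (eigvecs.conj().T @ psi0))

print(np.linalg.norm(evolve(-1j)))               # 1.0  (with the imaginary unit)
print(np.linalg.norm(evolve(-1.0)))              # != 1 (dropping the i spoils conservation)
```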
“For decades, there has been a debate as to whether one can create coherent and complete quantum mechanics with real numbers alone. So, we decided to find quantum states that could be distinguished from each other only by using complex numbers. The decisive moment was the experiment where we created these states and physically checked whether they were distinguishable or not,” says Dr. Streltsov, whose research was funded by the Foundation for Polish Science.
The experiment verifying the role of complex numbers in quantum mechanics can be presented in the form of a game played by Alice and Bob with the participation of a master conducting the game. Using a device with lasers and crystals, the game master binds two photons into one of two quantum states, absolutely requiring the use of complex numbers to distinguish between them. Then, one photon is sent to Alice and the other to Bob. Each of them measures their photon and then communicates with the other to establish any existing correlations.
“Let’s assume Alice and Bob’s measurement results can only take on the values of 0 or 1. Alice sees a nonsensical sequence of 0s and 1s, as does Bob. However, if they communicate, they can establish links between the relevant measurements. If the game master sends them a correlated state, when one sees a result of 0, so will the other. If they receive an anti-correlated state, when Alice measures 0, Bob will have 1. By mutual agreement, Alice and Bob could distinguish our states, but only if their quantum nature was fundamentally complex,” says Dr. Streltsov.
An approach known as quantum resource theory was used for the theoretical description. The experiment itself with local discrimination between entangled two-photon states was carried out in the laboratory at Hefei using linear optics techniques. The quantum states prepared by the researchers turned out to be distinguishable, which proves that complex numbers are an integral, indelible part of quantum mechanics.
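A much simpler toy version of the same idea can be written down for a single qubit rather than for the entangled photon pairs used in the experiment: two states whose density matrices have identical real parts and differ only through the imaginary unit i. The construction below is my own illustration, not the setup of the cited papers.

```python
# Two single-qubit density matrices that agree in their real parts and differ
# only in the imaginary (off-diagonal) entries.
import numpy as np

ket_plus_i = np.array([1, 1j]) / np.sqrt(2)     # (|0> + i|1>)/sqrt(2)
ket_minus_i = np.array([1, -1j]) / np.sqrt(2)   # (|0> - i|1>)/sqrt(2)

rho1 = np.outer(ket_plus_i, ket_plus_i.conj())
rho2 = np.outer(ket_minus_i, ket_minus_i.conj())

print(np.allclose(rho1.real, rho2.real))        # True: the real parts coincide
print(np.allclose(rho1.imag, rho2.imag))        # False: they differ only through i
```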
The achievement of the Polish-Chinese-Canadian team of researchers is of fundamental importance, but it is so profound that it may translate into new quantum technologies. In particular, research into the role of complex numbers in quantum mechanics can help to better understand the sources of the efficiency of quantum computers, qualitatively new computing machines capable of solving some problems at speeds unattainable by classical computers.
The Centre for Quantum Optical Technologies at the University of Warsaw (UW) is a unit of the International Research Agendas program implemented by the Foundation for Polish Science from the funds of the Intelligent Development Operational Programme. The seat of the unit is the Centre of New Technologies at the University of Warsaw. The unit conducts research on the use of quantum phenomena such as quantum superposition or entanglement in optical technologies. These phenomena have potential applications in communications, where they can ensure the security of data transmission, in imaging, where they help to improve resolution, and in metrology to increase the accuracy of measurements. The Centre for Quantum Optical Technologies at the University of Warsaw is actively looking for opportunities to cooperate with external entities in order to use the research results in practice.
Dr. Alexander Streltsov
Centre for Quantum Optical Technologies, University of Warsaw
tel.: +48 22 5543792
email: [email protected]
“Operational Resource Theory of Imaginarity”
K.-D. Wu, T. V. Kondra, S. Rana, C. M. Scandolo, G.-Y. Xiang, Ch.-F. Li, G.-C. Guo, A. Streltsov
Physical Review Letters 126, 090401 (2021)
DOI: 10.1103/PhysRevLett.126.090401
“Resource theory of imaginarity: Quantification and state conversion”
Physical Review A 103, 032401 (2021)
DOI: 10.1103/PhysRevA.103.032401
|
7b70a04cf419e9d4 | Virtual Winter School on Computational Chemistry
Cecam Logo
From the Schrödinger Equation to the Dirac Equation and Beyond: Are Relativistic Effects Important for Chemistry?
Peter Schwerdtfeger
Centre of Theoretical Chemistry and Physics, Institute of Fundamental Sciences, Massey University (Albany Campus), Auckland, New Zealand.
Video Recording
Paul Dirac stated in 1929 that relativity gives rise to difficulties only when high-speed particles are involved and is therefore of no importance in the consideration of atomic and molecular structure and ordinary chemical reactions. Only in the last few decades has it become clear that relativistic effects are not small and are responsible for a number of anomalies observed for molecules containing heavy elements or for the solid state. To include such effects, one has to go beyond the Schrödinger equation to its relativistic extension, the Dirac equation, or to approximate two-component forms. This required a major shift in quantum chemistry, as one had to learn how to deal with the unpleasant features of the Dirac equation. This lecture takes you into the world of Einstein's relativity and relativistic quantum chemistry and its implications for the heavy elements and even for the newly synthesized superheavy elements. It will explain why mercury is a liquid at room temperature, why gold has a yellow colour, why lead batteries do not work in a nonrelativistic world, and why the superheavy rare gas oganesson is rare but not a gas.
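A standard back-of-the-envelope estimate, not taken from the lecture itself, sits behind several of these statements: for a hydrogen-like 1s electron the mean speed is roughly Zα in units of c, so the relativistic mass increase contracts the 1s radius by a factor of about sqrt(1 − (Zα)²), roughly 20% for gold and mercury.

```python
# Textbook estimate (hydrogen-like 1s electron), not a result from the lecture:
# v/c ~ Z*alpha, and the 1s radius shrinks by roughly sqrt(1 - (Z*alpha)^2).
import math

alpha = 1 / 137.035999   # fine-structure constant
for Z, name in [(1, "H"), (79, "Au"), (80, "Hg")]:
    beta = Z * alpha                       # rough v/c of the 1s electron
    contraction = math.sqrt(1 - beta**2)   # relative 1s radius (~0.81 for Au and Hg)
    print(f"{name}: v/c ~ {beta:.3f}, 1s radius factor ~ {contraction:.3f}")
```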
P. Pyykkö, Relativistic Effects in Chemistry: More Common Than You Thought, Annual Review of Physical Chemistry 63, 45-64 (2012).
M. Reiher, A. Wolf, Relativistic Quantum Chemistry, Wiley-VCH, Weinheim (2009).
K. G. Steenbergen, E. Pahl, P. Schwerdtfeger, Accurate, Large-Scale Density Functional Melting of Hg: Relativistic Effects Decrease Melting Temperature by 160 K, J. Phys. Chem. Lett. 8, 1407-1412 (2017).
S. A. Giuliani, Z. Matheson, W. Nazarewicz, E. Olsen, P.-G. Reinhard, J. Sadhukhan, B. Schuetrumpf, N. Schunck, P. Schwerdtfeger, Oganesson and beyond, Rev. Mod. Phys. 91, 011001-1-25 (2019). |
beb1178669c86ab7 | Which Font Has Zero With Slash Through?
What font has a line through zero?
Typefaces commonly found on personal computers that use the slashed zero include: Terminal in Microsoft’s Windows line; Consolas in Microsoft’s Windows Vista, Windows 7, Microsoft Office 2007 and Microsoft Visual Studio 2010; Menlo in macOS; Monaco in macOS; and SF Mono in macOS.
What is Ø in engineering?
In engineering drawings, that symbol is used to denote the diameter of circles in whatever the length unit of the drawing is (typically inches or mm). From Wikipedia, under “Diameter symbol”: http://en.wikipedia.org/wiki/File:Technical_Drawing_Hole_01.png shows the sign ⌀ in a technical drawing.
What is a Ö called?
In many languages, the letter “ö”, or the “o” modified with an umlaut, is used to denote the non-close front rounded vowels [ø] or [œ]. … In languages without such vowels, the character is known as an “o with diaeresis” and denotes a syllable break, wherein its pronunciation remains an unmodified [o].
How do you make a slashed zero in Excel?
There are several ways you can go about using the slashed zeroes. The first is to insert the Alt+216 symbol, which is a capital O with a slash through it.
How do you type special O characters?
Press the Alt key, and hold it down. While the Alt key is pressed, type the sequence of numbers (on the numeric keypad) from the Alt code in the above table. For example: Ð is Alt 0208, Ò is Alt 0210, Ó is Alt 0211, and Ô is Alt 0212 (the source table lists about 40 more rows).
How do you write 0 instead of o?
There is of course the obvious difference in meaning: zero is used in writing numbers, and capital O is used in writing words. On an old fashioned typewriter keyboard, there is no zero (and no numeral one). To type a zero, use a capital O (and a lower case L to type a one).
How do you say Ø?
The vowel Y is pronounced like the “y” in syrup. Æ is pronounced like the “a” in the word sad, Ø sounds like the “u” in the word burn, and Å sounds like the “o” in born. Proper pronunciation is one of the keys to speaking the language correctly so people can understand you.
What does φ mean?
The lowercase letter φ (or often its variant, ϕ) is often used to represent the following: magnetic flux in physics; wave functions in quantum mechanics, such as in the Schrödinger equation and bra–ket notation; and the golden ratio.
What does this symbol mean Ø?
The letter “Ø” is sometimes used in mathematics as a replacement for the symbol “∅” (Unicode character U+2205), referring to the empty set as established by Bourbaki, and sometimes in linguistics as a replacement for the same symbol used to represent a zero. … Slashed zero is an alternate glyph for the zero character.
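For readers who want to check which of these characters they actually have in a document, here is a quick look-up of the code points this page keeps contrasting (a small Python 3 snippet of my own; it only assumes the standard library):

```python
# Print the Unicode code point and official name of each character discussed above.
import unicodedata

for ch in "0OØø∅⌀":
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
# U+0030  0  DIGIT ZERO
# U+004F  O  LATIN CAPITAL LETTER O
# U+00D8  Ø  LATIN CAPITAL LETTER O WITH STROKE
# U+00F8  ø  LATIN SMALL LETTER O WITH STROKE
# U+2205  ∅  EMPTY SET
# U+2300  ⌀  DIAMETER SIGN
```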
How do I type an O with a slash through it?
ø = Hold down the Control and Shift keys and type a / (slash), release the keys, and type an o. Ø = Hold down the Control and Shift keys and type a / (slash), release the keys, hold down the Shift key and type an O.
Do you put a line through a zero or an O?
The slash is drawn through the zero to distinguish it from the letter ‘O’.
How do you type O different?
Example 1: To type the letter ó, hold down the Control key, then press the apostrophe key. Release both keys and type o. The accented letter should appear. Example 2: To type the letter Ó, hold down the Control key, then press the apostrophe key.
How do you write ō?
For example, to type ā (a with macron), press Alt + A ; to type ō (o with macron), press Alt + O . Stop the mouse over each button to learn its keyboard shortcut. Shift + click a button to insert its upper-case form. Alt + click a button to copy a single character to the clipboard.
How do you write zero in Roman numerals?
The number zero does not have its own Roman numeral, but the word nulla (the Latin word meaning “none”) was used by medieval scholars in lieu of 0. Dionysius Exiguus was known to use nulla alongside Roman numerals in 525.
What does a circle with a slash through it mean?
Circle with slash means you tapped the volume button and then selected “None”. It means you are in silent mode. |
8349f671afc212be | 9360-0130/02 – Introduction to Quantum Physic and Chemistry Theory (KFCH)
Guarantor department: CNT - Nanotechnology Centre; Credits: 5
Subject guarantor: prof. Ing. Jana Seidlerová, CSc.; Subject version guarantor: prof. Ing. Jana Seidlerová, CSc.
Study level: undergraduate or graduate; Requirement: Compulsory
Study language: Czech
Year of introduction: 2019/2020; Year of cancellation:
Intended for the faculties: FMT; Intended for study types: Bachelor
Instruction secured by
Login | Name | Tutor | Teacher giving lectures
ALE02 Doc. Dr. RNDr. Petr Alexa
KAL0063 prof. RNDr. René Kalus, Ph.D.
SEI40 prof. Ing. Jana Seidlerová, CSc.
VIT0060 Mgr. Aleš Vítek, Ph.D.
Extent of instruction for forms of study
Form of study | Way of compl. | Extent
Full-time | Credit and Examination | 3+1
Subject aims expressed by acquired skills and competences
To acquaint the student with the fundamentals of quantum physics and chemistry theory. To clarify the behaviour of elementary particles and atoms and to explain the nature of the chemical bond from the point of view of quantum theory. After completion of the course, the student can work with basic operators and is able to define the process of energy calculation for multi-electron atoms and molecules. The student is also able to explain the fundamentals of electronic and molecular spectra.
Teaching methods
Project work
The course builds on the student's knowledge from basic bachelor's courses in mathematics, physics, and chemistry. Its aim is to acquaint students with the fundamentals of non-relativistic quantum physics and chemistry and with important applications.
Compulsory literature:
HOUSE, J. E.: Fundamentals of Quantum Chemistry, Elsevier, 2004. ISBN 0123567718. SZABO, A., OSTLUND, N. S.: Modern Quantum Chemistry, Dover Publications, Inc., Mineola, New York, 1989. McQUARRIE, D. A., SIMON, J. D.: Physical Chemistry: A Molecular Approach, University Science Books, 1997. ISBN 978-0-935702-99-6.
Recommended literature:
SAKURAI, J. J.: Modern Quantum Mechanics, Benjamin/Cummings, Calif., 1985. MERZBACHER, E.: Quantum Mechanics, Wiley, New York, 1970. MERZBACHER, E.: Quantum Mechanics, John Wiley & Sons, NY, 1998.
Way of continuous check of knowledge in the course of semester
Written and oral.
Other requirements
There are no further requirements for the student.
Subject has no prerequisites.
Subject has no co-requisites.
Subject syllabus:
Quantum Physics
- Introduction, historical context, the new theory. The postulates of quantum mechanics, the Schrödinger equation.
- Mathematics: operators, Hermitian linear operators, observables and their measurability.
- Free particle, wave packets, the uncertainty principle.
- Model applications of the stationary Schrödinger equation.
- The harmonic oscillator in the coordinate and Fock representations.
- The hydrogen atom, the Pauli principle. Atoms with more electrons.
- Interpretation of quantum mechanics.
Quantum Chemistry
- Multi-electron atoms, interactions in a multi-electron atom. Spin-orbit interactions. The vector model of the atom. Structure of the spectral terms.
- The Schrödinger equation, Hamiltonian and wave function of multi-electron atoms. Construction of the wave function. The helium atom. Basic approximations in chemical bond theory.
- Approximate methods of solving the Schrödinger equation. Perturbation theory and the variational method. Calculation of energy values and of the expansion coefficients of the wave function.
- Formation of the chemical bond, conditions for its origin and its description. Weaknesses of classical theories of the chemical bond. The quantum chemistry approach. The molecular Schrödinger equation, the form of the Hamiltonian and the wave functions of a molecule.
- Basic approximations in chemical bond theory. The theory of resonance and its consequences. Valence bond theory. Examples of applications to specific compounds.
- The theory of hybridization and the construction of wave functions of individual orbitals. Examples. The theory of linear combinations of atomic orbitals. Basic symmetry elements and their significance in the quantum chemistry of chemical bonds.
- The molecule as a rigid rotor, harmonic and anharmonic oscillator; description and consequences of the solution; vibrational and rotational quantum numbers. The practical significance of quantum chemistry.
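As a small illustrative companion to the syllabus items on the stationary Schrödinger equation and the harmonic oscillator (not part of the official course materials), the following finite-difference sketch reproduces the lowest oscillator levels n + 1/2 in units where ħ = m = ω = 1:

```python
# Finite-difference solution of the stationary Schrödinger equation for the 1D harmonic
# oscillator (hbar = m = omega = 1); the exact eigenvalues are n + 1/2.
import numpy as np

N, L = 1200, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2 discretized on the grid
second_derivative = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * second_derivative + np.diag(0.5 * x**2)

energies = np.linalg.eigvalsh(H)
print(energies[:4])   # approximately [0.5, 1.5, 2.5, 3.5]
```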
Conditions for subject completion
Task name | Type of task | Max. number of points (act. for subtasks) | Min. number of points
Credit and Examination | Credit and Examination | 100 (100) | 51
Credit | Credit | 40 | 21
Examination | Examination | 60 | 30
Mandatory attendance participation: Participation in seminars (80%) and passing all tests at the specified dates.
Show history
Occurrence in study plans
Academic year | Programme | Field of study | Spec. | Specialization | Form | Study language | Tut. centre | Year | WS | Type of duty
2021/2022 (B0719A270001) Nanotechnology P Czech Ostrava 3 Compulsory study plan
2020/2021 (B0719A270001) Nanotechnology P Czech Ostrava 3 Compulsory study plan
2019/2020 (B0719A270001) Nanotechnology P Czech Ostrava 3 Compulsory study plan
Occurrence in special blocks
Block name | Academic year | Form of study | Study language | Year | WS | Type of block | Block owner |
6f994d391338fec1 | Download Modern Physics
Modern Physics
lecture 3
Louis de Broglie
1892 - 1987
Wave Properties of Matter
In 1923 Louis de Broglie postulated that perhaps matter exhibits the same “duality” that light exhibits. Perhaps all matter has both characteristics as well.
Previously we saw that, for photons, E = hf = hc/λ, which says that the wavelength of light is related to its energy and momentum.
Making the same comparison for matter, with p = mv, we find the de Broglie wavelength λ = h/p = h/(mv).
Quantum mechanics
Wave-particle duality
Waves and particles have interchangeable properties. This is an example of a system with complementary properties. The mechanics for dealing with systems when these properties become important is called “Quantum Mechanics”.
The Uncertainty Principle
Measurement disturbs the system
The Uncertainty Principle
Classical physics: measurement uncertainty is due to limitations of the measurement; there is no limit in principle to how accurate a measurement can be made.
Quantum mechanics: there is a fundamental limit to the accuracy of a measurement, determined by the Heisenberg uncertainty principle. If a measurement of position is made with precision Δx and a simultaneous measurement of linear momentum is made with precision Δp, then the product of the two uncertainties can never be less than h/4π:
Δx · Δp_x ≥ h/4π = ħ/2
The Uncertainty Principle
In other words:
It is physically impossible to measure simultaneously the exact position and linear momentum of a particle. These properties are called “complementary”; that is, only the value of one property can be known at a time. Some examples of complementary properties are:
• Which way / interference in a double-slit experiment
• Position / momentum (Δx·Δp ≥ h/4π)
• Energy / time (ΔE·Δt ≥ h/4π)
• Amplitude / phase
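A quick numerical check of the position–momentum bound (my own sketch, not from the original slides): a Gaussian wave packet saturates the limit, giving Δx·Δp = ħ/2. Units with ħ = 1; the packet width is arbitrary.

```python
# Verify that a Gaussian wave packet saturates the Heisenberg bound dx*dp = hbar/2 (hbar = 1).
import numpy as np

x = np.linspace(-20, 20, 8001)
sigma = 1.3                                         # arbitrary packet width
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

rho = np.abs(psi) ** 2                              # probability density, already normalized
dx = np.sqrt(np.trapz(x**2 * rho, x))               # <x> = 0 for this symmetric packet

dpsi = np.gradient(psi, x)
dp = np.sqrt(np.trapz(dpsi**2, x))                  # <p^2> = integral of |dpsi/dx|^2 (real psi, <p> = 0)

print(dx * dp)                                      # ~0.5, the minimum allowed by dx*dp >= hbar/2
```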
Schrödinger Wave Equation
The Schrödinger wave equation is one of the most powerful techniques for solving problems in quantum physics. In general the equation is applied in three dimensions of space as well as time. For simplicity we will consider only the one-dimensional, time-independent case.
The wave equation for a wave of displacement y and velocity v is given by
∂²y/∂x² = (1/v²) ∂²y/∂t²
Erwin Schrödinger
1887 - 1961
Solution to the Wave equation
We consider a trial solution by substituting
y(x, t) = ψ(x) sin(ωt)
into the wave equation
∂²y/∂x² = (1/v²) ∂²y/∂t²
• By making this substitution we find that
d²ψ/dx² = −(ω²/v²) ψ
• Where ω/v = 2π/λ
• Thus
ω²/v² = (2π/λ)², and p = h/λ
Energy and the Schrödinger Equation
Consider the total energy:
Total energy E = kinetic energy + potential energy
E = mv²/2 + U
E = p²/(2m) + U
Reorganise the equation to give
p² = 2m(E − U)
From the equation on the previous slide we get
ω²/v² = (2π/λ)² = p²/ħ² = (2m/ħ²)(E − U)
• Going back to the wave equation we have
d²ψ/dx² + (2m/ħ²)(E − U)ψ = 0
• This is the time-independent Schrödinger wave equation in one dimension
Wave equations for probabilities
In 1926 Erwin Schrödinger proposed a wave equation that describes how matter waves (or the wave function) propagate in space and time:
d²ψ/dx² = −(2m/ħ²)(E − U)ψ
The wave function contains all of the information that can be known about a particle.
Solution to the SWE
The solutions ψ(x) are called the STATIONARY STATES of the system.
The equation is solved by imposing BOUNDARY CONDITIONS. The imposition of these conditions leads naturally to energy levels.
If we set U = −e²/(4πε₀r), we get the same results as Bohr for the energy levels of the one-electron atom.
The SWE gives a very general way of solving problems in quantum physics.
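A short numerical companion (not part of the original slides) confirming that the Coulomb potential quoted above reproduces Bohr's levels E_n = −13.6 eV / n²; it assumes SciPy's physical constants are available.

```python
# Bohr energy levels from the Coulomb potential U = -e^2/(4*pi*eps0*r):
# E_n = -m e^4 / (8 eps0^2 h^2 n^2) = -13.6 eV / n^2.
import scipy.constants as const

E1 = -const.m_e * const.e**4 / (8 * const.epsilon_0**2 * const.h**2)   # ground state, in joules
for n in (1, 2, 3):
    print(n, E1 / n**2 / const.e, "eV")   # approx -13.6, -3.40, -1.51 eV
```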
Wave Function
In quantum mechanics, matter waves are described by a complex-valued wave function, ψ. The absolute square |ψ|² = ψ*ψ gives the probability of finding the particle at some point in space. This leads to an interpretation of the double-slit experiment.
Interpretation of the Wavefunction
Max Born suggested that ψ was the PROBABILITY AMPLITUDE of finding the particle per unit volume:
|ψ|² dV = ψψ* dV
(ψ* designates the complex conjugate) is the probability of finding the particle within the volume dV. The quantity |ψ|² is called the PROBABILITY DENSITY. Since the chance of finding the particle somewhere in space is unity we have
∫ ψψ* dV = ∫ |ψ|² dV = 1
• When this condition is satisfied we say that the wavefunction is NORMALIZED.
Max Born
Probability and Quantum Physics
In quantum physics (or quantum mechanics) we deal with probabilities of particles being at some point in space at some time. We cannot specify the precise location of the particle in space and time; we deal with averages of physical properties.
Particles passing through a slit will form a diffraction pattern. Any given particle can fall at any point on the receiving screen. It is only by building up a picture based on many observations that we can produce a clear diffraction pattern.
Wave Mechanics
We can solve very simple problems in quantum physics using the SWE. This is sometimes called WAVE MECHANICS. There are very few problems that can be solved exactly; approximation methods have to be used.
The simplest problem that we can solve is that of a particle in a box. This is sometimes called a particle in an infinite potential well. This problem has recently become significant as it can be applied to laser diodes like the ones used in CD players.
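To make the laser-diode remark concrete, here is a small illustrative calculation (my own, with an arbitrarily chosen 1 nm well) of the infinite-well levels E_n = n²h²/(8mL²) for an electron; it assumes SciPy is available for the physical constants.

```python
# Infinite-square-well ("particle in a box") energy levels for an electron in a 1 nm well.
import scipy.constants as const

m, L = const.m_e, 1e-9   # electron mass; 1 nm well width (an arbitrary illustrative choice)
for n in range(1, 4):
    E = n**2 * const.h**2 / (8 * m * L**2)
    print(n, E / const.e, "eV")   # approx 0.376, 1.50, 3.38 eV
```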
Wave functions
The wave function of a free particle moving along the x-axis is given by
ψ(x) = A sin(2πx/λ) = A sin(kx)
This represents a snapshot of the wave function at a particular time. We cannot, however, measure ψ; we can only measure |ψ|², the probability density. |
72563f0268de73eb | Herb Zinser explains the Alan Sokal science wars of atoms, math equations, biochemistry molecules, television photons, English language nouns and symbol life. Nature's military SYMBOL MACHINE set-up of the New York Times, Duke University, etc.
The Argonne National Labs brain electron orbital war report from the brain cell WALL --> WALL Street Journal intellectual undercover agents
Permalink 12/06/13 21:51, by HerbZinser, Categories: Uncategorized
The odd and even strange battles of the hidden Science data streams of consciousness World War.
Argonne National Labs in Illinois is comprised of atomic, bio-physics humanoids engaged in various atomic brain expressions ....in the external, visible worlds of physics, chemistry, and engineering ..... that is......with visible buildings with visible equipment that is used to indirectly make visible the inner secrets of atoms, molecules, and forces of Nature. This includes the Margaret Mead nuclear family..... atomic social forces of Nature.
Using super-symmetry physics / parallel processing / thought mirrors ......we can indirectly see the intellectual battles to control the philosophical souls of these scientists.
Nature has used mathematical mappings (math transformations/correspondence functions) to MAP from the internal subliminal mind levels (scientists and administrators) at Argonne National Labs
to an external, more directly visible format at a remote, distant geography location on EARTH LAB known as Argentina. At this distant location, newspaper and magazine reporters gather the data and get it printed in mass media information vehicles......cellulose print newspapers/magazines.
The TOP SECRET cellulose news reports are published and distributed to millions of North American humanoid optical bio-computer processors for analysis ..... including the atomic humanoids in Argonne, Batavia, and Chicago's Hyde Park area in Illinois.
Think of the news ...as a puzzle with several levels of messages. After all ...mass communication is really atomic mass communication VIA atomic humanoid activities that are reported VIA atomic mass communication devices such as television, radio, and print.
Let's look at an example of the alter ego of Argonne ...which is Argentina, South America ......and the Per = Periodic atomic table of the Per =Peron political heritage ..... examples of modern Margaret Mead nuclear family....that is atomic anthropology. Let's look at bio-physics adult female agent named: Evita Peron or
in atomic electron humanoid terms: eV Peron
Thus we have the energy expression via AGENT eV = eVita Peron.
Peron = Per + eron -->
Per = Periodic table AND
eron = electron ....expression of life VIA eVita...and electron stage artist.
Eva Perón
Spiritual Leader of the Nation of Argentina
In office
since 7 May 1952
First Lady of Argentina
In office
4 June 1946 – 26 July 1952
President Juan Perón
Preceded by Conrada Victoria Torni de Farrell
Succeeded by Mercedes Villada Achával de Lonardi
President of the Eva Perón Foundation
In office
1948 - 1952
Succeeded by Juan Perón
Minister of Labour and Social Welfare
Preceded by Juan Perón
María Eva Duarte de Perón (7 May 1919 – 26 July 1952) was the second wife of Argentine President Juan Perón (1895–1974) and served as the First Lady of Argentina from 1946 until her death in 1952. She is usually referred to as Eva Perón (Spanish: [ˈeβa peˈɾon]), or by the affectionate Spanish language diminutive Evita.
She was born in the village of Los Toldos in The Pampas, rural Argentina in 1919, the youngest of five children. In 1934, at the age of 15, she went to the nation's capital of Buenos Aires, where she (she is a subset of electron shells) pursued a career as a stage, radio, and film actress for Nature's atomic, bio-physics expression experiments.
Since Argentina is a Spanish language country ..atomic computers and COMPUTER EARTH can use spanned data record FORMAT to connect to the Argonne bio-computer brain databases ...via interface country PERU = Per + U = Periodic atomic table Uranium 235 and 238 and the interface country .... Fermi-Dirac probability agent for PERU = Pier ODD imply ONE.
The news about Chicago area professorial brain electron orbital wars......
The atomic brain electron
ORBITAL ...word ...... algebra subset atomic word
The news article words
" eight former energy secretaries" translated is .......
eight oxygen atom electron energy secret
Argentina's top oil .....refers to the Hierarchy Problem of bio-physics agents
Argonne top.....brain orbital CIRCUS thoughts ...code...
Ringling Brother's Barnum and Bailey Circus .....advanced atomic English messages of the 26 proton alphabet of the ferrous oxide atom. Virginia TECH English class and FermILAB ought upgrade their iron Hemoglobin proteins with the proper atomic English language social science thought alphabets.
Hollywood creative writing class is becoming outdated.....especially the incomplete explanations of Virginia TECH tragedy regarding Mr.Cho age 23 ... the Darwinian selected agent for the 23 chromosomes social science language war.
Virginia Tech thinks that their Hollywood genetic WORD/Picture command alliance with GENE Hackmen movies .....will give them social control over inferior thinkers like myself.... who ..at the request of the English departments of my high school and extended college days from 1958 thru 1972 ........were required to become familiar with the concepts of George Orwell and Aldous Huxley. So...why are year 2012 English departments refusing to discuss these year 1960 perceptions and their possible recent manifestation signals ...thru tragic events like the OCEANIA classroom shooting at Cole Hall.
The reference to the physical geography country Argentina and its events ...... reflects the original events occurring in Argonne National Labs ......thus we see the supersymmetry / mirror ....... the structure of Nature's information systems architecture and the brain system feedback signal.
The symbols "18% Wednesday" translates to "18 We" --> 18 = H2O = small city of Watertown molecule , West Road and its atomic/astrophysics continuum model
PARALLEL to West Road, DAMTP, Cambridge, England
AND the WestRoad shopping mall tragedy messages of OM = Oxygen Molecules of OM= Omaha.
Thus we have an interwined string theory message ...with several empirical data components ..as may be expected by the EARTH LAB model of string theory
.....called ROPE theory of eu.ROPE
.....that HUNG in HUNG.ary in 1956 math logic uprising against
the biochemistry Iron Curtain ...the HEME group Fe(ii) ion .... with the British mathematical logic gallows message from year 1910 to year 1956 .....Russell/Whitehead book " to 56 Principia Mathematica".
What is 56?
The atomic mass of iron at Fer = Ferrous oxide atomic brain location at Fe = FermiLab. FermiLAB humanoids are participants in the B.F. Skinner box study that was started in 1946 and has evolved into new dimensions of expression.
Modern B.F. Skinner experimental test specimens are
B= Batavia, F= FermiLAB bio-physics SKIN containers labeled atomic humanoids.
These Skinner packages are educated and study some very difficult and abstract subjects. In turn, we see levels of experimentation.
Fermilab scientists study math and atoms , etc.
Not to be outdone, Nature studies FermiLAB scientists.
By examining the iron Hemoglobin protein thoughts and their Fe(ii) ion symbolic activities..... Nature's intellect picks up a good idea of what's happening ......and is concerned about the symbolic DISEASEs in the state of the ILL/sick noise (State of Illi nois).
Hence the news ...
Takeover Fe .....
Takeover FermiLAB ... a battle for control of the SOUL of the iron Hemoglobin proteins and their professorial thoughts.
Thus we have geography region of northern Illinois ... base 16 hexadecimal region with Hex'A' = 10 = Argonne thru Hex'F' = 15 = FermiLAB. Thus the additional supersymmetry mathematical-physics WAR signal with the U.S.Army parallel WAR in Base 16 hexadecimal region of .......A to F -->A F ghanistan.
Thus we have an interesting puzzle that ought be understood.
The QUANTUM STATE of California and its universities defeated in the atomic human brain WAR - - -> Occupy WALL Street - -> ELECTRON thought circuit Hierarchy Problem explained by physics and chemistry professors
Permalink 12/06/13 19:59, by HerbZinser, Categories: Uncategorized
The periodic atomic table and mathematical components in WORLD economic battles.
The periodic atomic table has many formats of expressions. One such FORMAT are atomic humans with an atomic brain symbolic computer that expresses messages on behalf of the atomic table of life and thought and feelings. Humans are thus considered as representatives or messengers or business LAB partners working with the atomic/astrophysics continuum, its life, and the thought formats within it.
The human is composed of atoms.
Humans have thoughts.
Thoughts must have an origin.
Therefore atoms are the origin of thought.
Therefore some political protests, crimes, shootings, wars, etc. by humans.....are really atomic protests of the Margaret Mead atomic families VIA the human vehicle/ the human atomic feelings expressor/ the human atomic messenger.
Atomic social anthropology families are listed in beginning college physics and chemistry textbooks. The families comprise vertical columns in the periodic atomic table of life and thought. Thus we have the atomic family ... social anthropology shootings at EARTH LAB geography sites. Let's look at some of the periodic atomic table of elements of life and thought ...... and let's identify some atomic elements and their messages to atomic humans at EARTH LAB geography locations. There are 18 vertical columns in the periodic atomic table of intellectual life.
The periodic atomic government representatives for the dinner table salt molecule are the Clinton's from Arkansas. Margaret Mead atomic anthropology and the salt molecule treaty with atomic humanoid structures is best expressed by the social science ...social chemistry formula signal.
NaCl = sodium chloride
NaCl = North america Clintons
NaCl = North america Clinton, President in 1994
NaCl = North america Clinton, Secretary of State
NaCl = salt molecule and the state of ARK.ansas .....
......state of sound/audio/phonetics: ARK can salt such as the
atomic brand name:
Morton Salt --> Mort = Mo + ort = Molecular orbit
This blog will consider the Margaret Mead atomic social economic messages AND the atomic human brain which is comprised of electron thoughts. Nature sometimes takes these internal atomic brain electron conflicts AND transforms them into external human expressions...such as protests,etc.
Let's look at the STATE of MIND of California ...and the intellectual problems of their party universities and their bragging universities .....that neglect simple polite, diplomatic communications regarding SCIENCE WAR issues that need to be clarified .....and that require some professors and graduate students without
pre-programmed biased bio-computer brain subroutines.
Let's look at the newspaper reports from California .......regarding the status of the Intellectual War ..... as citizens continue their attack against the periodic atomic table government and its various expressions .... including social engineering and brain development projects of NATURE.
Above, signal for Margaret Mead nuclear family ..
atomic anthropology agent ...known as the American apple PIE extension = PIER.
The math life signal equation is for the Chicago region which includes FermiLAB, Batavia, Illinois.
Above key words:
Pie = Pi + e = 3.14159 + 2.718 natural number
Pi implies a circle --> Chicago Circle Campus and its circle of convergence of complex variables ....and the President Eisenhower military industrial complex ...comprised of the military symbolic life of complex variable math functions ...via humanoid representatives.
Zuccotti Park --> signal for Hyde Park, Chicago in the ROC = Region of Convergence of signal processing
Zuccotti --> Z + uc = atomic number University of Chicago ....... messages waiting for the ELITE intellectual university and their biased, distorted perceptions of social sciences, world affairs, and monetary policy tricks.
Thanksgiving --> Thanks for giving me Pill.grimms
.......................Thanks for giving me Pills and the Grimm Reaper
Thanks for giving pills/ drugs/ and diseases
Thanks for teaching elementary grammar school children that this is an acceptable citizen/government health policy.....the George Orwell warning about social psychology manipulations and social propaganda ......allowed by lazy brain citizens. The California Department of education and school boards in California approve of all these brain computer program instructions for children symbolic brain processors.
FermilAB and the Office of Science refuse to help clarify the problems of atomic brain bio-computer thought configurations.
Above, we see the 500 year GUT project of Nature ...
that started in year 1453 with Grand Unified Theory special agent: GUT + Tensor space + verb Tense --> giving Gutenberg and the printing press. This allowed Nature to develop the human optical nerve computer and symbolic life of nouns, verbs, math equations, etc.
The above reference to the North Pole magnetic data field ...concerns the field interaction with the humanoid iron Hemoglobin thought proteins at Fer = Ferrous oxide atomic bio-physics RD location for Fer = FermiLAB.
Now, University of Chicago and FermiLAB cannot inFER simple concepts ...because of the POLE PLOT brain education scheme as outlined by Nature and the 1976 tragic events in Cambodia ...that symbolized Europe and American classroom errors in arrogant assumptions.
1. Pol Pot, leader of the Khmer Rouge, was achieving his dream of Year Zero, the return of Cambodia to a peasant economy in which there would be no class.
Cambodia EXTERNAL tragedy is the parallel /supersymmetry to the INTERNAL symbolic brain symbolic insect problems at Cambridge University ......their internal gland biology map is the external map of En.gland.
Thus we see Herbert Spencer's year 1872 theory of the
INTERNAL <-- mirror --> EXTERNAL
Nature's supersymmetry physics message about the atomic brain WAR
- Columbine High School --> Colu = Columns of the atomic families table and the library textbook war with color tricks/schemes of publishers and school boards. Schemes of Color optics symbolized by Colorado...STATE of MIND. Universities and school systems ought close until they fix their bull-story problem and their incomplete explanation about the shooting at Columbine.
- Virginia TECH violations of atomic English language AND the Department of En --> Energy ....insults and disrespect for the atomic English language WORD of HONOR.
- the Cole Hall wave mechanics shooting in the oceanography classroom ...... with the EARTH geo-physics LAND of Landau physics in DeKalb Illinois. In Computer Earth system 370 words ...LAND = Local Area Network Data.
Above LUKE Gates
--> Lu + Ke + Gates = Logical Unit (Logic) Gates
--> Gregory Porter --> G reg Port --> General registers I/O port
Above words of atomic English language ....
forces fighting with protesters --> gives
physics forces fight pro-testers --> atomic proton testers and their ERRORS.
The ERRORS are in the Margaret Mead atomic social anthropology policy of the Department of Energy and their deliberate allowance of citizen/family nuclear policy violations.
Attorney Theo --> Theoretical bio-physics laws and atomic legal systems
Above atomic English language references :
Pier --> Fermi-Dirac statistics agent from Peru..who evolved into
Pier --> Fermi-Dirac format in the Navy Pier region of Lake Michigan wave mechanics.
Nature's Navy Pier wave mechanics now has an extension to Batavia and oceanography class in DeKALB.
The U. S. Naval Research into the U.S.S Cole tragedy of OCTOBER 2000 was repeated at Cole Hall in a different format........... the Paris Island version (with Paris Hilton tricks) Marines song HALLS of Montezuma's (revenge).
Super-symmetry physics describes secrets of
PARIS = Par + Is = Parallel Information systems
Words: Three weeks, 33 arrested --> 3.33 = 3 1/3 math war signal regarding the 3330, 3350, 3380 series of disk storage ...used by public storage units of Earth real estate buildings.
Results 1 - 30 of 4331461 – Find your Public Storage
IBM 3330 data storage
Words: Three weeks, 33 arrested --> 333
The IBM 3330 was a high-performance, high-capacity direct access storage subsystem for use with all IBM System/370 models
1. main storage, input/output channels, and the operator control and ... 370 Principles of Operation, GA22-7000, and IBM System...... The first three are self-explanatory. Control ... For direct access storage devices such as an IBM 3330 Disk
Mayor Jean Quan --> Quantum physics signal about the University of California and their Hollywood intellectual approach to atomic social science, atomic social psychology, and atomic economic problems...........which are centered on the atomic brain electron and its symbolic responsibilities regarding SCIENCE WAR communications about serious matters.
Above words
lowest = 10 + west --> 10th month of October + west coast of United States of North America
lowest = Binary 10 equal Decimal 2 or Base 2
10west-energy orbitals --> lowest brain effort = social orbitals
occupy --> brain electron occupy .....transformed to external, visible human behavior expression
continuing in the order --> electron orders to California puppet humanoids as predicted by puppet string theory of the California physics puppets
some "crossover" --> some intellectual double-cross by citizens
1. Occupy Wall Street - Wikipedia, the free encyclopedia
Occupy Wall Street is a protest that began on September 17, 2011 in Zuccotti Park, located in New York City's Wall Street financial district. The protest was ...
Below, we see the Hierarchy Problem of brain electrons at the Department of Energy ...who conveniently ignore important empirical data and ignore the Margaret Mead nuclear family SCIENCE WAR casualty problems.
Of course, they take orders from Washington, DC.
The above electron shells USE a new Virgin TEchnology ...they transform their Margaret Mead nuclear electron thought to external format....which is bullet shells comprised of messenger electron shells. Virgin TECH was tested at Virginia TECH with the atomic English language Department chosen one --> Darwinian selection of agent Mr.CHO..
1. Occupy Wall Street | NYC Protest for World Revolution
1 day ago – News and resources for protesters attending the mass demonstration on Wall Street against financial greed and corruption.
Words....... the mass demonstration on Wall Street
................e mass demo ...brain cell WALL
...........electron mass problems ...cell WALL symbolic nonsense
The above references are from book: Chemistry by atomic humanoids with their secret proper noun labels:
Mur --> atoM uranium agent for Margaret Mead
Castellion --> Vsam data set Ca =Control area
Ballantine --> Bal + lan --> Basic assembler language ..... local area network
In addition to basic atomic brain WAR output messages ..
OCCUPY WALL Steet.......cover the problems with
OC = OptiC nerve bio-computer problems + brain cell WALL communications problems.
.....especially with universities and agencies that ought be concerned with brain quality control ...that is the input symbolic and audio data that is input to the human bio-computer. Television and movies are major sources of input data....that contain many errors.....some are deliberate optical and audio errors used
to attack the
Central Nervous System 370 abstract brain symbol computer.
Earth mathematical structure news reports of the WORLD mathematical-physics evolution into different dimensions. Base 2 exponents and the Tale of 2 Cities.
Permalink 12/06/13 13:41, by HerbZinser, Categories: Uncategorized
Human mathematical events on the geography surface of EARTH
We have a vast amount of Base 2 binary historical information on the structure of Earthly existence. The precise Nature of this is being reviewed, but research suffers from the lack of resources. What are some historical clues?
--> Galileo the DEFENDER (of EARTH) made the 1st major announcement with his book ....
"Two Chief Worlds" in year 1632 ...and the world-wide message was repeated in year 2001 with
"Two Chief World Trade Centers" tragic astronomy battle in Manhattan .....providing information for the astronomy CHIEF prosecutor's to study.
Does astronomy wisdom exist on EARTH?
Biased, distorted astronomy exists and this bias is encouraged by hypnotized universities and astrophysics research centers.
Anyhow, Nature's intellect continues to evolve to new levels of complexity ....regardless of the limited views of professors and graduate students who take BRAVE NEW WORLD orders and are atomic human puppets as predicted by string theory physics. It is easy to escape these social psychology traps .....help understand the Science Wars.
--> The book " Tale of 2 Cities'
--> World War 2 axis powers ........ 2 axis powers in math terms is:
.....geometry axis is a high school algebra class graph of an equation
.....powers refers to exponents
.....2 refers to the exponent 2 ...giving the more TRUE NATURE of the war over the quadratic equation and its graph of a parabola.
What is the current status of Nature's math projects on EARTH. Let's look at the University of Wisconsin, Madison math and computer science Base 2 exponent experiments on EARTH LAB. Base 2 and Base 16 news ...
The above math headline ...
...World Dairy Expo .......... with algebra subset letters
...Wo ......Da ..y Expo ......then word ...expands trade show gives
..TWo.....Data y exponents ...............
Thus we have multiple signals from this cryptic message:
a) Base 2 binary bio-computer brain DATA
b) y exponents ....which may be
- the right triangle equation
- a differential equation with exponent 2
- and other applications yet to be discovered
c) in the context of WORLD geography ...we must consider the possible math tragedy in Chicago at the E2 nightclub ..... a dance nightclub on the 2nd floor of a building on 2347 S. Michigan Avenue.
Consider the following supersymmetry physics/ parallel mirror of existence of the E2 concrete/wood building as a physical structure.
The 1st floor...the BASE had the Epitome restaurant....
the 2nd floor was like the exponent level ...hence the name E2.
Thus we consider the math geography surface of EARTH ...with levels ..represented by building levels translated in symbolism. Consider the outline below.
We see the supersymmetry MIRROR in a symbolic building ...that mirrors the physical building at 2347 S. Michigan Avenue, Chicago with IIT and Leon Lederman atomic physics nearby.
Thus we have a symbolic building above. The E2 nightclub is the exponent of x of the 1st math component ax.
Now inside the human brain of a high school student ....before algebra ...his/her ..... brain synapses/ neurons /axons .....have bio-math primitive symbol ax in the biology molecule ax.
Thru basic math education ..their brain logic circuits ...go from ax to ax² + bx + c = 0.
Then comes algebra and the parabola, the quadratic equation, and the graph of the parabola of a piece of paper.
Thus thru the University of Chicago quad .......incomplete quadratic equation test ....we understand the E2 nightclub brain deficiency and the cover-up of the math tragedy by the Chicago universities.
But they lie about the math war of World War 2
They lie about President NixoN
and the N x N square matrices war in Vietnam ..
.......................................rice is a subset of word matrices
.......................................rice fields ...thus become DATA FIELDs ..
.................................matrices data fields of Einstein's data processing DATA FIELD theory of Computer Earth system 370 geography LAND at ....
.................LAN Dat(a) --> Local Area Network Data.
Universities and their friends are not accredited institutions from the TREE of Knowledge point of view.
So what professors will show some leadership and help clarify these problems with me?
What other news supports the math curiosity?
--> Above we see CP's entomologists ...reference to the 1959 Margaret Mead atomic social anthropology
lecture by C.P. Snow titled "TWO Cultures" .
This is verified by the above words:
Corn rootworm
black cutworm
--> Also, Earth crop editor .... agent Jane F.... perhaps known in the atomic/astrophysics continuum as a member of Jane's Fighting Ships ........ writes a very complex signal ....
" souhern two-thirds of the state" ..
2/3 implies G = universal gravitational constant = 6.67 X 10 exponent -11 ....... in the southern two-thirds of the state of Wisconsin is Jane .....
and her symbolic gene connection to Gene Motors in Jane ...that is General Motors in Janesville...all connected by the elementary grammar schools......subliminal education of our primordial minds VIA book "DICK and Jane". Thus this common communications foundation ....... gives the Jane communications LINKS.
Continue with gravity .......the gravity grammar network ..... Jane has got the connections to more ..........
of course....
we begin to see the subtle communications from Wi= Wisconsin editor Jane F (y) .....bio-math agent Function (y) .....and the astrophysics signal regarding Jane Wilde (Wi = Wisconsin) Hawking ..... former wife of gravity specialist Hawking ......the signal being about the GM ..Gravity/Magnetic field automotive assembly line in Jane F. economic RD territory of Janesville, Wisconsin.
Thus we see that Jane Fyksen is a lot smarter than it appears from the surface article she wrote .....but women .....like to reveal their secrets about Earth events and structures ...... so we see the James Joyce symbolic voyage of
information data streams of bio-computer human editors ...in this case we have Captain Jane (way) and the Star Trek: Voyager project and its many expressions ...VIEWS from Agri-view Earth undercover agents for Nature.
--> In addition, in EARTH geography amd modern symbolic EARTH literature ...we have signal
Corn rootworm .....
...............two reference to 2 CORN structures:
geography --> Tropic of Capricorn
Henry Miller book --> Tropic of Capricorn
Henry Miller ......an American author ....ate food and expressed Botany messages
......ry ..mill
......rye mill flour used for making rye bread and feeding author Hen.rye Mill.er..
.......the Bread kept him alive and nourished and he expressed his gratitude by writing
secrets....Bread subset word
.................read about the Paris experience of Henry Miller and 11 dimensions of string theory
Paris = Par + Is = Parallel Information systems on Earth in the multi-faceted dimensions of Sartre existentialism
And she wrote more clues .....so much more in the remainder of the article.
Such as the Voyager ...... Seven of Nine signal.
Thus we think of BIG EAR radio telescope that was at Ohio State university.
The parallel BIG EAR ..is the living EARTH cell ...hence..
............the BIG EARTH and its geology/geography access to the symbolic EARTH
Microphone ......subset algebra word ............
....crop ....... ops.....what is the secret of publishing Earth Ag news ...
....Crop Connection to the State of communication
................... connect to Connecticut.
So, there is much to learn about the secret world of agriculture and message processing systems.
CONTACT: Principle science researcher HERB ZINSER
Mail Address: P.O.BOX 134, Watertown, WI 53094-0134, USA
E-Mail: Herb@Zinoproject.com
Max Planck TIME signal 10:43 to atomic agent Africa ONE --> the ODD ONE and FermiLAB An-26 near atomic EARTH .... Wisconsin communication highways An-26 AND I-43
Permalink 12/05/13 22:11, by HerbZinser, Categories: Uncategorized
The signaling EVENT
--> The 2007 Africa One Antonov An-26 crash occurred....
The flight left N'djili
a10:43 local time bound
A study of the Galapagos Islands REGION and its evolutionary extensions of PERU and COLOMBIA provides an interesting background to evolution signals from around the WORLD. The relationship may involve the Earth's geology iron core AND the Earth geography surface LAB at the ferrous oxide atom / magnetic field INTERACTION location at FermiLAB, Batavia, Illinois.
What is Max Planck TIME?
...... approximately 10−43 seconds (Planck time)
..... approximately 10:43 see conditions (Plan k time)
The 2007 Africa One Antonov An-26 crash occurred....The flight left N'djili
at 10:43 local time bound for Tshikapa
Projects: Holometer - Fermilab Center for Particle Astrophysics
If there is a minimum interval of time, or a maximum frequency in nature, ... About a hundred years ago, the German physicist Max Planck introduced the ...
Interstate 43 marker
the absolute natural bound on
information transmission,
about 10^43 bits per second.
Let's look at more information and signals ...the may be LINKED together to form a larger picture.......
that may be useful for GUT and TOE.
GUT = Grand Unified Theory project that started in year 1453 with the GUT printing press of Gutenberg in the geo-physics geography REGION of string theory development ...known as eu.ROPE.
TOE = Time Order Entry for the TOE = Theory of Everything ...which includes YOU!
Signals about Max Planck TIME with Hawking and others.
TIME Highway .....
Interstate 43 marker
Interstate 43
On October 10, 2002, a multiple-vehicle collision occurred on I-43, just south of Cedar Grove.
The accident occurred on southbound I-43 in Sheboygan County. It involved 50 vehicles and was found to have been caused by low visibility due to fog at a point where the freeway comes its closest to paralleling Lake Michigan. The accident and resulting fires led to the deaths of 10 individuals.
CLUES .........
She count --> shell count physics
Paralleling --> supersymmetry physics / parallels /mirrors
deaths of 10 ..... decimal base 10 with exponent -43 ........ represented by Interstate I-43
What does the HAWK say?
Below, the BLUE line from Milwaukee to the state line (dashed line below Janesville) is the EARTH LAB ..... Max Planck communications TIME HIGHWAY known as Interstate I-43.
Location A is Northern Illinois University, DeKalb, Illinois where the wave mechanics TIME battle took place at Cole Hall.
Between Location A and Wheaton/Chicago is the Margaret Mead atomic anthropology RD center known as FermiLAB.
Below, we see the COLEMAN signal -> Cole Hall TIME BATTLE at Northern Illinois University, DeKalb ..near the Wisconsin / Illinois border and information TIME highway I-43 ....involving Nature's atomic social engineering project plan ...the Max Plan with Max Plan..ck.
Northern Ill/sick noise University , DeKalb --> symbols
some type ...
either an atomic NODE
and /or a
harm wave NODE of harmonic waves (sound, audio waves ) of
harmonic waves ...that of the BIG EAR ....of EAR + TH.eory = EARTH.
Thus we see why the tragic event took place at the EARTH geography NODE
location ...of some type of wave.
Let's look at the MAJOR SIGNAL that we are interested in .... for this BLOG science paper.
2007 Africa One Antonov An-26 crash
The 2007 Africa One Antonov An-26 crash occurred when a twin engine Antonov An-26, belonging to the Congolese air carrier Africa One, crashed and burned shortly after takeoff from N'djili Airport in Kinshasa, Democratic Republic of the Congo on October 4, 2007. The flight left N'djili at 10:43 local time bound for Tshikapa, a distance of 650 km to the east.
at___ 10:43 local time
atom 10:43 local time
at las 10:43 local time
2007 Africa One Antonov An-26 crash
at las 10:43 local time
Wisconsin I-43 Exits and Max Planck TIME exits
Above, we see the EARTH wave mechanics region expressed by Lake Michigan
I-43 Time exits map
Regional Map: Milwaukee can be approached via Interstate 94 from
the South and West and Interstate 43 from the Southwest and North.
- (Map courtesy of Google Earth). The southbound
I-43 connector ramp to ..
10−43 seconds (Planck time)
H.G.Wells and the TIME MACHINE (the TIME COMPUTER system 370)
Wisconsin I-43 Exits southbound I-43 connector ramp to ..
Computer science ---> EVENT connector ---> ram.page at SIKH temple Oak Creek, Wisconsin.
Mathematical-physics math sin functions of the State of Wisconsin.
2007 Africa One Antonov An-26 crash
1 Background
HowStuffWorks "YukawaHideki"
Hideki Yukawa. Yukawa, Hideki (1907-1981) was a Japanese theoretical physicist who won the 1949 Nobel Prize in physics for a theory he published in 1935.
yet later Reuters reported that an on-board mechanic survived, while Associated Press claims a flight attendant also survived, bringing the total number of survivors to two.
Mechanic's account
Wave mechanics - Wikipedia, the free encyclopedia
Wave mechanics may refer to: the mechanics of waves; the wave equation in Quantum Physics, see Schrödinger equation...
wave mechanics - The Free Dictionary
n. (used with a sing. or pl. verb). A theory that ascribes characteristics of waves to subatomic particles and attempts to interpret physical phenomena on this basis ...
Quantum mechanics - Wikipedia, the free encyclopedia
en.wikipedia.org/wiki/Quantum_mechanics
Quantum mechanics (QM – also known as quantum physics, or quantum theory) is a branch of physics dealing with physical phenomena at microscopic scales, where the action is on the order of the Planck constant...
According to Planck, each energy element E is proportional to its frequency ν:
E = hν
Planck is considered the father of the Quantum Theory
where h is Planck's constant. Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself
2007 Africa One Antonov An-26 crash
AN-26 --> Alphabets and Numbers ...see DICK and Jane elementary physics schoolbook for elementary grammar school......in JANESVILLE, Wisconsin with Highway 26 symbolic of the English alphabet letters and their communications highways thru-out the WORLD via the British Empire and
Queen Victoria.........
Que + en + Vict + oria = orbital physics ..atomic English language ......
An-26 proton alphabet of ferrous oxide atom has a serious problem with Fer = Ferrous oxide hemoglobin protein LIFE FORMS at FermiLAB...regarding their Margaret Mead atomic social anthropology expressions and policies ... of the Department of Energy.
AN - 26 ---> Highway 26 and LINK to base 16 hexadecimal Highway 16 in Watertown, Wisconsin.
State Trunk Highway 26 (often called Highway 26, STH 26 or WIS 26) is a state highway in the U.S. state of Wisconsin.The route is generally two-lane with the exception of a few urban multi-lane arterials. WIS 26 provides direct access from Janesville and Oshkosh to Waupun, Watertown and Fort Atkinson.
Below...the physics Coulomb's equation for the iron atom ( components 1.6 exponent 19 or 16 exponent 18) ...displayed on the surface of EARTH. EARTH with its iron core expresses messages on the EARTH geography surface....which is like NATURE's natural television display screen.
Highway 26 ..... above North and below the South end
.The 2007 Africa One Antonov An-26 crash occurred....
The flight left N'djili
at 10:43 local time bound for Tshikapa. The flight was a commercial cargo flight carrying at least
28, including a flight crew of five.
The flight manifest stated that there were 16 passengers aboard, but more boarded the flight shortly before takeoff.
Thus we see the EARTH communication correlation signals .....between Africa TIME code 10:43 (Zulu TIME LINK).
Now...we see EARTH government MAP signals with President Clinton (near I-43 sign) and the famous Whitewater,Wisconsin travel agency debate of Clinton, Wisconsin.
An-26 ---> Highway 26 Milton --> Mil + ton = earth MILITARY forces provided the TON(s) of Earth iron core, magnetic field force interaction with human IRON Hemoglobin proteins and the GRAVITY field with TON(s) of EARTH atomic mass = weight.
President William Jefferson Clinton ...secret code
...............Wi .......J ............CL = Wisconsin JCL = Job Control Language for Computer Earth .....
this William Jefferson Clinton message near Hubble(ton), Wisconsin special SECRET astronomy projects....
Thus the CLINTON signal ...CLI + Ton -->
...CLI = Clifford Lane airplane crash with TONS of weight .....within several miles of Highway 16 and Highway 26 and about 60 miles away from I-43.
Thus within the geography region ...of a radius of about 60 miles are many of the parameters/adjectives described by the 2007 Africa One crash.
Thus a bidirectional mapping exists between the tragic cargo airplane crash EVENT in Africa ......and the tragic cargo airplane crash EVENT in Wisconsin ...... LINKED by 43 and Einstein's data processing DATA FIELD theory.
Thus applied math .....geo-math and geo-physics applied to the data fields of the EARTH geography land surface ....with data fields-->
air fields, farm fields, and football fields ..... with Base 16 hexadecimal HIGHWAY 16, Watertown as some reference point for Nature's systems.
2007 Africa One Antonov An-26 crash
Interstate 43 marker
at 10:43 local time bound for Tshikapa, a distance of 650 km to the east.[1][2]
twin engine Antonov An-26, belonging to the Congolese air carrier
1 Background
2 Crash
2.1 Mechanic's account
3 Aftermath
4 See also
5 References
Background
Mirrors / super-symmetry / double-helix images
How to Determine the Charge of an Atom | eHow.com
An atom of iron, for example, contains 26 protons and 26 electrons. ... Oxidation numbers are used to determine the oxidized and reduced chemicals in a ...
ibchem.com/IB/ibnotes/full/ato_htm/2.1.htm
2.1.3: Define the terms mass number (A), atomic number (Z) and isotope of an element. ... Represents an iron atom with a mass of 56 units and 26 protons. ... often, to uranium oxide pellets for its final destination in the nuclear power industry.
-->twin engine Antonov An-26, belonging to the Congolese air carrier
--->twin engine (double-helix) ... atomic Anthropology IRON LADY + IRON MAN project .... An-26,
belonging to the Congress mouth/LUNG air carrier
2007 Africa One Antonov An-26 ---->
route message to
2007 ODD ONE at An-26 RD site
AGENT identifier 43
Peru = Per + U = Periodic atomic table
u = uranium nuclear anthropology
Lima Region
Lima Province
Districts 43 districts
Amino Acid and Codon Table
The DNA codons representing each amino acid are also listed. All 64 possible 3-letter combinations of the DNA coding units T, C, A and G are used either to ...
About; DNA, genes & genomes; Your Human Genome; Genome, health & society .... letters (A, G, C and U), there are 4 x 4 x 4 = 64 different codon combinations.
Thus we see... Peru agent 43 = 4 exponent 3 = 64 ....above DNA equation of codon combinations.
Also, 64 is a bio-computer doubleword.... suggesting PERU agent 43 may be an atomic double-agent for the Margaret Mead nuclear family...atomic social sciences.
We known that Peru agent 43 ....associated with ..
Districts 43 districts
went to MIT --> Nature's MITOCHONDRIA education center in Cambridge ..in the Quantum State of Mass ......
Thus an outline of the mystery map of Max Planck and Max Plan ..project plan 43?
A summary LIST of some Science War battles and the usage of human agents
Permalink 12/05/13 01:37, by HerbZinser, Categories: Uncategorized
Science war evolution --> Neutron Neuroscience wars --> math integer Internet wars --> biology Blog wars
Today, the world is really a symbolic world.....with physical entities as a secondary structure.
Lets look at Darwin evolution. For millions of years Nature was concerned with physical shape and sizes of various animals, birds, humans, etc ; but, in the last 100 years that aspect has been sort of discontinued.....and Nature has concentrated on human brain symbolic evolution.
Now we think of the symbolic world...as a separate entity ..that we interact with. In this interaction process with math equations and atomic English language...we make additions, modifications,etc. Thus a bi-directional process with Nature's intellect.
In the continuum we exist as intermediates....the human intermediate.
The continuum is perceived in 2 parts: physical and symbolic.
Thus we have a brief continuum outline with:
atomic --> molecular cell biology -->
....................social chemistry humans <--
...............................astronomy <--astrophysics
with the above structures embedded in mathematical-physics space /time on EARTH LAB with the North Pole magnetic field LIFE interaction with human Hemoglobin iron proteins.
The modern atomic English alphabet is a Latin alphabet consisting of 26 letters – the same letters that are found in the ISO basic Latin alphabet:
Majuscule forms (also called uppercase or capital letters)
Thus as William Shakespeare stated around year 1600 ..
"The WORLD is a stage and we are the actors"
Examples of continuum CONFLICT messages with human thought ERRORS:
--> Nature's government of DNA nucleotides T,A,G, C
selected biochemistry textbook Major Hasan for the Fort Hood
T,A,G message for the penTAGon codon.
DNA has 64 codons......thus the DNA military group ...pen.TAG.on is one of 64.
Why are incomplete explanations given by humans about Nature's projects and DNA intellect?
Hence, Nature's expression of anger about the DNA role, message, and purpose...and the incorrect social policy messages by universities.
Thus the Darwinian selection of biochemistry MAJOR Hasan for Nature's SCIENCE WAR with the nonsense brain manipulations approved by Washington, D.C. and its biased version of atomic continuum social engineering projects.
--> wave mechanics physics messages by atomic element Na -->
11 protons of the sodium component of SALT ..... sodium atomic weight 23 ---> Salt treaty messages for the Navy and the U.S.S. Cole tragedy of October 12, 2000.
The wave mechanics physics message repeated for the NAVY at Cole Hall, Ocean Classroom in DeKalb, Illinois.
Salt is sodium chloride ..symbol NaCl....with atomic element Mr.Na as the atomic political science representative for the periodic atomic table of life (both physical and symbolic).
We see Nature's intellect express its atomic political science powers with the SALT molecule agent Mr.NaCl. What is the atomic brain .... electron thought ... election process?
In year 1980 we have the brain
electron political decision DEMO with
elect Ron Reagan President of brain electron circuits.
Universities have yet to acknowledge the existence of such brain electron activity and expression....since their thoughts are preoccupied with Hollywood movies, songs and dances, and college football.
Take basketball .....how to dribble ....subset word
......ask an intellectual question ....get a dribble answer.
In year 1992, Margaret Mead nuclear family politics is with agent Mr.NaCl.
Thus we have Salt molecule with inorganic chemistry
President NaCl = North america Clinton.
Universities and scientific societies refuse to help explain this process. They ain't talking with me.
--> Organic chemistry -->social chemistry battle at Virginia TECH with agent Mr.CHO for the organic chemistry government of Nature. The atomic English language agent chosen to represent the EARTH LAB chemistry government of Nature was Mr.CHO --> symbolic life of a textbook with CHO = CH symbol used in Organic chemistry --> CH = Carbon Hydrogen building block O= Organic chemistry.
The American Chemical Society Organic Division and the Royal Society of Chemistry ....ought recognize existence of REALITY, the Science War casualties, and daily life. Other things exist besides their mouth ORGAN...an organic chemistry device with atomic English nouns/verbs..that has SENT a message ...that is ignored. This is NO trivial matter in world affairs.
--> The base 16 hexadecimal intellectual ERRORS by various groups...
Hence, the CAUSE --> EFFECT on brain judgement errors by agent Matt Anderson representing the Base 16 Hex"FFA" Wisconsin farm organization and their data fields on COMPUTER EARTH system 370 geography.
Also, the CAUSE --> EFFECT on airplane pilots of the Base 16 HEX'FAA' organization and their policy for data space on EARTH. Thus the Runway 26 message of the 26 letters of the English language symbolic life and ...a LEXICON message for Lexington, Kentucky coma people... a message represented by the COMA AIRLINES accident.
Since, Nature can influence human brain decisions..the pilots brains were flipped into accident mode for Nature's DEMO mess/message to Washington,DC.
The Base 16 hexadecimal agency HEX"FAA" and CITIZENS ought learn the basics of EARTH government space/time laws and their interaction with humans who trespass ..the boundaries of mathematical-physics. The 400 year Galileo astronomy math-physics WAR is still in progress.
--> The O = Oxygen molecule social adjustment project by the periodic atomic table government using Darwinian atomic selection of oxygen atomic computer TIME agent: TIM.othy McVeigh(t) --> eigh(t) = 8 oxygen electron LUNG processor. Thus this EARTH LAB human specimen has his atomic brain computer programmed ....with Margaret Mead atomic anthropology behavioral instructions.
Then the oxygen INTELLECT...having chosen a human specimen for the atomic anthropology war....had to design an OXYGEN mission. To optimize the SIGNAL .....for molecular cell biology professors that have a Signal Recognition Particle (SRP)......oxygen decided to use its atomic features as a DISPLAY of oxygen message processing systems.
Thus we have atomic mass 16 OXYGEN with 8 electrons .....thus the design parameter numbers for LIFE and Death of 16 8 at the Federal Building at O= Oxygen breathing location in O = Oklahoma City.
Thus the 16 oxygen 8 EVENT SIGNAL of 168 dead.......
oxygen states that
those SAMPLE SPACE humans were brain dead, physically alive before the EVENT....
and after the tragedy ...they were brain dead, physically dead .
From Nature's view of EARTH LAB......UNCLE SAM implies SAMPLE Space with the year 1865 Lewis Carroll description of guinea-pigs. So this process has been known for over 120 years.
Photo by Flo Westbrook from Pexels
The foundation stone of quantum mechanics doesn’t just describe the behavior of infinitesimal subatomic particles – it also governs the movement of the largest and most massive objects in the Universe, says a prominent astrophysicist.
Planetary scientist Konstantin Batygin was exploring the concept of astrophysical disks – sometimes called accretion disks; massive self-gravitating swirls of matter which form seemingly everywhere. Planets orbit stars forming solar systems, which in turn orbit super-massive black holes at galactic centers…
While these disks may start off with a circular shape, over epic stretches of time they can ripple and warp, exhibiting vast distortions that still can’t be definitively explained by astrophysicists. While investigating an area of quantum physics called perturbation theory to see how it could mathematically represent the forces in astrophysical disk evolution, explaining how these vast objects warp over aeons, Batygin discovered something remarkable.
In the theory, an astrophysical disk can be modeled as a series of concentric wires that slowly exchange orbital angular momentum among one another. “When we do this with all the material in a disk, we can get more and more meticulous, representing the disk as an ever-larger number of ever-thinner wires”, Batygin explains. “Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum. When I did this, astonishingly, the Schrödinger equation emerged in my calculations”. (1)
Who says atoms are something different than “macroscopic” elements of space? Who defines what is microscopic or macroscopic after all, except our subjective sense of relative size? All our science is based on seeing differences where there are none. And then trying to merge or reconcile these differences through an ‘elegant’ theory which can bring everything together…
A universe inside an atom.
A particle as big as a universe.
Consciousness inside nothingness.
Nothingness inside the mind of a wise man…
The fewer stones you throw into the lake, the calmer its surface will be. And then, and only then, will you be able to see the cause of everything in it. Reflected on the quiet surface, you see yourself. On a calm night, you smile.
And somewhere on the pristine surface a galaxy is born…
Approaches to non-adiabatic calculations
Tags: ExcitedStates
The problem of modelling electronic transitions caused by interactions with ions is formidably difficult. Standard approaches to electronic structure assume that the electrons move on a single energy surface defined by static ions (often called the Born-Oppenheimer approximation, and discussed in Chapter 7 of the book). The most common implementations calculate the electronic structure given the ionic positions and then calculate forces on the ions from the electronic structure (it is possible to do this for excited states of the system as well as for the ground state, by promoting electrons into elevated levels, in a method known as delta-SCF[1]). When performing molecular dynamics (MD) this leads to the Born-Oppenheimer MD (BOMD) method, where the electronic ground state is always maintained, though at the cost of a ground state search at every step.
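As a rough illustration of the BOMD workflow just described (not taken from the book or from any particular code), here is a minimal velocity-Verlet loop in Python; the electronic-structure call `scf_ground_state` is a hypothetical placeholder for whatever ground-state solver is available, and units are left abstract.

```python
import numpy as np

def bomd(positions, velocities, masses, dt, n_steps, scf_ground_state):
    """Sketch of Born-Oppenheimer MD: re-converge the electrons at every ionic step."""
    energy, forces = scf_ground_state(positions)             # ground-state search, step 0
    for _ in range(n_steps):
        velocities += 0.5 * dt * forces / masses[:, None]    # half-kick
        positions  += dt * velocities                        # drift
        energy, forces = scf_ground_state(positions)         # new ground-state search
        velocities += 0.5 * dt * forces / masses[:, None]    # second half-kick
    return positions, velocities, energy
```

The ground-state search inside the loop is exactly the cost that the Car-Parrinello approach, described next, is designed to reduce.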
The Car-Parrinello MD approach starts from a relaxed electronic ground state, and evolves the electronic states in time at the same time as the ionic states, while assigning a fictitious mass to the electrons. The approach keeps the electrons close to the ground state surface while making the calculations more efficient, though it does require smaller time steps than standard MD methods. It is in common use in the CPMD code, but is much less common than BOMD.
The Ehrenfest method goes beyond either of these other approaches and evolves the electrons according to the time-dependent Schrödinger equation. The method is appealing when dealing with situations where electrons are strongly affected by ions, particularly in combination with accurate methods such as TDDFT. However, the energy surface which is being followed is now a dynamic surface which has evolved with the ions; one approach to analysing these calculations is to perform a static, ground state calculation for the system at one instant of time, and to project the dynamic state onto the ground state.
If you want to model the transfer of energy between the electrons and ions, this becomes significantly harder. There is no provision for the change of the occupancies of different electronic states in any of the theories mentioned above, so that the transfer of energy when an electron de-excites is rarely correctly handled. This is important: when an electron decays non-radiatively (i.e. without emitting a photon, but by putting its energy into the ionic motion) the resulting forces and velocities on the ions will determine the subsequent evolution of the system.
The most commonly used approach is called surface hopping: the system is allowed to evolve according to normal MD, but at each MD step there is a finite probability for an electron to hop between electronic surfaces (hence the name). However, the energy transferred out of the electronic system is placed randomly into the ionic motion (except in sophisticated implementations). Moreover, it is a stochastic method, and requires multiple trajectories, that is multiple MD simulations, to correctly sample configuration space.
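For concreteness, here is a toy sketch of the fewest-switches hopping probability (in the spirit of Tully's algorithm) for a single pair of adiabatic states; `c_cur` and `c_tgt` are the complex amplitudes on the current and target states, `v` the ionic velocities and `d` the nonadiabatic coupling vector. Sign conventions for the coupling differ between implementations, so treat this purely as an illustration rather than a production recipe.

```python
import numpy as np

def hop_probability(c_cur, c_tgt, v, d, dt):
    """Probability of hopping off the current surface during one MD step of length dt."""
    flux = -2.0 * np.real(np.conj(c_tgt) * c_cur) * np.dot(v, d)
    g = dt * flux / max(abs(c_cur) ** 2, 1e-12)
    return min(max(g, 0.0), 1.0)   # negative flux means no hop this step

# In a full implementation the hop is accepted only if a uniform random number
# falls below g AND the ions have enough kinetic energy along d to pay the
# electronic energy gap; averaging then requires many independent trajectories.
```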
What I find most interesting about all these approaches is that it is not possible to model resistive heating on an atomic level. We showed[2] that the correlations between ions and electrons are needed to do this, in much the same way that it would not be possible to model Brownian motion of large particles by using a fluid with a density rather than a fluid with individual particles. We wrote a review on solid state approaches to energy transfer between electrons and ions[3], and there are good overviews on surface hopping and related techniques from the chemistry literature (e.g. [4] and [5]).
I will write about one or two new ideas in this field in the next week or two.
[1] U. Terranova and D. R. Bowler, J. Chem. Theory Comput. 9, 3181 (2013) DOI:10.1021/ct400356k
[2] A. P. Horsfield, D. R. Bowler, A. J. Fisher, T. N. Todorov and M. J. Montgomery, J. Phys. Condens. Matter 16 3609 (2004) DOI:10.1088/0953-8984/16/21/010
[3] A. P. Horsfield, D. R. Bowler, H. Ness, C. G. Sanchez, T. N. Todorov and A. J. Fisher, Rep. Prog. Phys. 69 1195 (2006) DOI:10.1088/0034-4885/69/1195
[4] O. V. Prezhdo, W. R. Duncan and V. V. Prezhdo, Prog. Surf. Sci. 84, 30 (2009) DOI:10.1016/j.progsurf.2008.10.005
[5] T. van Voorhis, T. Kowalczyk, B. Kaduk, L.-P. Wang, C.-L. Cheng and Q. Wu, Annu. Rev. Phys. Chem. 61, 149 (2010) DOI:10.1146/annurev.physchem.012809.103324
This entry was posted in techniques on 2014/6/14.
WATOC 2017
See you all in 2020!
Photosynthesis and Singlet Fission – #WATOC2017 PO1-296
If you work in the field of photovoltaics or polyacene photochemistry, then you are probably aware of the Singlet Fission (SF) phenomenon. SF can be broadly described as the process in which an excited singlet state decays into two degenerate, coupled triplet states (via a multiexcitonic state), each with roughly half the energy of the original singlet, and which in principle can be centered on two neighboring molecules; this generates two excitons (and hence two charge carriers) from a single photon, i.e. twice the current albeit at half the voltage (Fig 1).
Jablonski’s Diagram for SF
It could also be viewed as the inverse process to triplet-triplet annihilation. An important requirement for SF is that the two triplets to which the singlet decays must be coupled in a 1(TT) state, otherwise the process is spin-forbidden. Unfortunately (from a computational perspective) this also means that the 3(TT) and 5(TT) states are present and should be taken into account, and when it comes to chlorophyll derivatives the computational task quickly grows.
SF has been observed in polyacenes but so far the only photosynthetic pigments that have proven to exhibit SF are some carotene derivatives; so what about chlorophyll derivatives? For a -very- long time now, we have explored the possibility of finding a naturally-occurring, chlorophyll-based, photosynthetic system in which SF could be possible.
But first things first; the methodology: It soon became clear, from María Eugenia Sandoval's MSc thesis, that TD-DFT wasn't going to be enough to capture the whole description of the coupled states which give rise to SF. It was then that we started our collaboration with SF expert Prof. David Casanova from the Basque Country University at Donostia, who suggested the use of Restricted Active Space – Spin Flip in order to account properly for the spin change during the decay of the singlet excited state. A set of optimized bacteriochlorophyll-a molecules (BChl-a) were oriented ad hoc so that their Qy transition dipole moments were either parallel or perpendicular; the calculated SF rates indicated that both molecules should be in the parallel Qy dipole configuration. Translating this to naturally occurring systems, we looked at two candidates: the Fenna-Matthews-Olson complex (FMO), containing 7 BChl-a molecules, and a chlorosome from a mutant photosynthetic bacterium made up of 600 BChl-d molecules (Fig 2). The FMO complex is a trimeric pigment-protein complex which lies between the antenna complex and the reaction center in green sulfur photosynthetic bacteria such as P. aestuarii or C. tepidum, serving as a molecular wire in which excitonic transfer is known to occur with quantum coherence, i.e. virtually no energy loss, which led us to believe SF could be an operating mechanism. So far it seems it is not present. However, for a crystallographic BChl-d dimer present in the chlorosome it could actually occur, even in competition with fluorescence.
FMO Complex. Trimer (left), monomer (center), pigments (right)
BChQRU chlorosome. 600 Bchl-d molecules
I will keep on blogging more numerical and computational details about these results, and hopefully about their publication, but for now I will wrap up this post by giving credit where credit is due: this whole project has been tackled by our former lab member María Eugenia "Maru" Sandoval and Gustavo Mondragón. Finally, after much struggle, we are presenting our results at WATOC 2017 next week on Monday 28th at poster session 01 (PO1-296), so please stop by to say hi and comment on our work so we can improve it and bring it home!
A New Graduate Student – Medicinal #CompChem on HIV-1
Collaborations in Inorganic Chemistry
I began my path in computational chemistry while I was still an undergraduate student, working on my thesis under Professor Cea at UNAM, synthesizing main group complexes with sulfur containing ligands. Quite a mouthful, I know. Therefore my first calculations dealt with obtaining bond indices for bidentate ligands bonded to tin, antimony and even arsenic; yes! I worked with arsenic once! Happily, I keep a tight bond (pun intended) with inorganic chemists and the two recent papers published with the group of Prof. Mónica Moya are proof of that.
In the first paper, cyclic metallaborates were formed with Ga and Al but when a cycle of a given size formed with one it didn’t with the other (fig 1), so I calculated the relative energies of both analogues while compensating for the change in the number of electrons with the following equation:
Fig 1
Under the same conditions 6-membered rings were formed with Ga but not with Al and 8-membered rings were obtained for Al but not for Ga. Differences in their covalent radii alone couldn’t account for this fact.
ΔE = E(MnBxOy) – n·E(M) + n·E(M′) – E(M′nBxOy)     (Eq. 1)
A seamless substitution would imply ΔE = 0 when changing from M to M’
Hypothetical compounds optimized at the B3LYP/6-31G(d,p) level of theory
The calculated ΔE were: ΔE(3/3′) = -81.38 kcal/mol; ΔE(4/4′) = 40.61 kcal/mol; ΔE(5/5′) = 70.98 kcal/mol
In all, the increased stability and higher covalent character of the Ga-O-Ga unit compared to that of the Al analogue favors the formation of different sized rings.
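As an illustration of how Eq. 1 is evaluated in practice, here is a small sketch; the energy arguments are placeholders in hartree, not the published values, and only the ΔE values quoted above come from the paper.

```python
HARTREE_TO_KCAL = 627.5095

def delta_E(E_MnBxOy, E_M, E_Mprime, E_MprimeBxOy, n):
    """Eq. 1: isoelectronic M -> M' substitution energy, returned in kcal/mol."""
    dE = E_MnBxOy - n * E_M + n * E_Mprime - E_MprimeBxOy
    return dE * HARTREE_TO_KCAL

# usage sketch with made-up single-point energies (hartree):
# print(delta_E(-1834.512, -1924.871, -247.063, -156.704, 2))
```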
Additionally, a free energy change analysis was performed to assess the relative stability between compounds. Changes in free energy can be obtained easily from the thermochemistry section in the FREQ calculation from Gaussian.
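A minimal way to pull those numbers out of the outputs (assuming Gaussian-style log files; the file names below are hypothetical):

```python
HARTREE_TO_KCAL = 627.5095

def free_energy(logfile):
    """Return the 'Sum of electronic and thermal Free Energies' (hartree) from a Gaussian freq output."""
    with open(logfile) as f:
        for line in f:
            if "Sum of electronic and thermal Free Energies=" in line:
                return float(line.split("=")[1])
    raise ValueError(f"No thermochemistry section found in {logfile}")

dG = (free_energy("product_freq.log") - free_energy("reactant_freq.log")) * HARTREE_TO_KCAL
```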
This paper is published in Inorganic Chemistry under the following citation: Erandi Bernabé-Pablo, Vojtech Jancik, Diego Martínez-Otero, Joaquín Barroso-Flores, and Mónica Moya-Cabrera* “Molecular Group 13 Metallaborates Derived from M−O−M Cleavage Promoted by BH3” Inorg. Chem. 2017, 56, 7890−7899
The second paper deals with heavier atoms and the bonds formed around yttrium complexes with triazoles, for which we calculated a more detailed distribution of the electronic density and concluded that the coordination of Cp to Y involves a high component of ionic character.
This paper is published in Ana Cristina García-Álvarez, Erandi Bernabé-Pablo, Joaquín Barroso-Flores, Vojtech Jancik, Diego Martínez-Otero, T. Jesús Morales-Juárez, Mónica Moya-Cabrera* “Multinuclear rare-earth metal complexes supported by chalcogen-based 1,2,3-triazole” Polyhedron 135 (2017) 10-16
We keep working on other projects and I hope we keep on doing so for the foreseeable future because those main group metals have been in my blood all this century. Thanks and a big shoutout to Dr. Monica Moya for keeping me in her highly productive and competitive team of researchers; here is to many more years of joint work.
All you wanted to know about Hybrid Orbitals…
… but were afraid to ask
How I learned to stop worrying and not caring that much about hybridization.
The math behind orbital hybridization is fairly simple as I'll try to show below, but first let me give my praise once again to the formidable Linus Pauling, whose creation of this model built a bridge between quantum mechanics and chemistry; I often say Pauling was the first Quantum Chemist (Gilbert N. Lewis' fans, please settle down). Hybrid orbitals are therefore a way to create a basis that better suits the geometry formed by the bonds around a given atom and not the result of a process in which atomic orbitals transform themselves for better steric fitting, or like I've said before, the C atom in CH4 is sp3 hybridized because CH4 is tetrahedral and not the other way around. Jack Simons put it better in his book:
Taken from "Quantum Mechanics in Chemistry" by Jack Simons
The atomic orbitals we all know and love are the set of solutions to the Schrödinger equation for the Hydrogen atom and more generally they are solutions to the hydrogen-like atoms for which the value of Z in the potential term of the Hamiltonian changes according to each element’s atomic number.
Since the Hamiltonian, and any other quantum mechanical operator for that matter, is a Hermitian operator, any given linear combination of wave functions that are solutions to it, will also be an acceptable solution. Therefore, since the 2s and 2p valence orbitals of Carbon do not point towards the edges of a tetrahedron they don’t offer a suitable basis for explaining the geometry of methane; even more so these atomic orbitals are not degenerate and there is no reason to assume all C-H bonds in methane aren’t equal. However we can come up with a linear combination of them that might and at the same time will be a solution to the Schrödinger equation of the hydrogen-like atom.
Ok, so we need four degenerate orbitals which we’ll name ζi and formulate them as linear combinations of the C atom valence orbitals:
ζ1 = a1(2s) + b1(2px) + c1(2py) + d1(2pz)
ζ2 = a2(2s) + b2(2px) + c2(2py) + d2(2pz)
ζ3 = a3(2s) + b3(2px) + c3(2py) + d3(2pz)
ζ4 = a4(2s) + b4(2px) + c4(2py) + d4(2pz)
to comply with equivalency let's set a1 = a2 = a3 = a4 and normalize them:
a1² + a2² + a3² + a4² = 1 ∴ ai = 1/√4
Let's take ζ1 to be directed along the z axis, so b1 = c1 = 0
ζ1 = 1/√4(2s) + d1(2pz)
since ζ1 must be normalized the sum of the squares of the coefficients is equal to 1:
1/4 + d1² = 1;
d1 = √3/2
Therefore the first hybrid orbital looks like:
ζ1 = 1/√4(2s) +√3/2(2pz)
We now set the second hybrid orbital on the xz plane, therefore c2 = 0
ζ2 = 1/√4(2s) + b2(2px) + d2(2pz)
since these hybrid orbitals must comply with all the conditions of atomic orbitals they should also be orthonormal:
〈ζ1|ζ2〉 = δ12 = 0
1/4 + d2√3/2 = 0
d2 = –1/2√3
our second hybrid orbital is almost complete, we are only missing the value of b2:
ζ2 = 1/√4(2s) + b2(2px) – 1/(2√3)(2pz)
again we make use of the normalization condition:
1/4 + b2² + 1/12 = 1; b2 = √2/√3
Finally, our second hybrid orbital takes the following form:
ζ2 = 1/√4(2s) +√2/√3(2px) –1/√12(2pz)
The procedure to obtain the remaining two hybrid orbitals is the same but I’d like to stop here and analyze the relative direction ζ1 and ζ2 take from each other. To that end, we take the angular part of the hydrogen-like atomic orbitals involved in the linear combinations we just found. Let us remember the canonical form of atomic orbitals and explicitly show the spherical harmonic functions to which the 2s, 2px, and 2pz atomic orbitals correspond:
ψ2s = (1/4π)^½ R(r)
ψ2px = (3/4π)^½ sinθ cosφ R(r)
ψ2pz = (3/4π)^½ cosθ R(r)
we substitute these in ζ2 and factorize R(r) and 1/√(4π)
ζ2 = (R(r)/√(4π))[1/√4 + √2 sinθcosφ –√3/√12cosθ]
We differentiate ζ2 with respect to θ and set the derivative to zero to find the extremum of ζ2 with respect to the z axis; on the xz plane (φ = 0) this gives the angle between the first two hybrid orbitals ζ1 and ζ2 (remember that ζ1 is projected entirely over the z axis)
dζ2/dθ = (R(r)/√(4π))[√2 cosθ + √3/√12 sinθ] = 0
sinθ/cosθ = tanθ = -√8
θ = -70.53°,
but since θ is measured from the z axis towards the xy plane this result is equivalent to the supplementary angle 180.0° – 70.53° = 109.47°, which is exactly the angle between the C-H bonds in methane we all know! And we didn't need to invoke the unpairing of electrons in full orbitals, their promotion of any electron into empty orbitals nor the ‘reorganization‘ of said orbitals into new ones. Orbital hybridization is nothing but a mathematical tool to find a set of orbitals which comply with the experimental observation and that is the important thing here!
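A quick numerical sanity check of that angle (a sketch using the standard equivalent sp3 coefficient set over the basis ordering [2s, 2px, 2py, 2pz], which is a different but equally valid choice from the ζ set derived above):

```python
import numpy as np

c = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])      # rows = the four sp3 hybrids

print(np.allclose(c @ c.T, np.eye(4)))     # True: the combinations are orthonormal

d1, d2 = c[0, 1:], c[1, 1:]                # the p-parts give the spatial directions
cosang = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(np.degrees(np.arccos(cosang)), 2))   # 109.47
```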
To summarize, you can take any number of orbitals and build any linear combination you want, in order to comply with the observed geometry. Furthermore, no matter what hybridization scheme you follow, you still take the entire orbital; you cannot take half of it because they are basis functions. That is why you should never believe that any atom exhibits something like an sp2.5 hybridization just because their bond angles lie between 109 and 120°. Take a vector v = xi + yj + zk; even if you specify it to be v = 1/2i, that means x = 1/2, not that you took half of the unit vector i, and it doesn't mean you took nothing of j and k, but rather that y = z = 0.
This was a very lengthy post so please let me know if you read it all the way through by commenting, liking, or sharing. Thanks for reading.
No, seriously, why can’t orbitals be observed?
#CompChem – Can Orbitals Be Directly Observed?
The Gossip Approach to Scientific Writing
Communication of scientific findings is an essential skill for any scientist, yet it's one of those things some students are reluctant to do, partially because of the infamous blank page scare. Once they are confronted with writing their thesis or papers they make some common mistakes, like not thinking about who their audience is or not adhering to the main points. One of the highest forms of communication, believe it or not, is gossip, because gossip goes straight to the point, is juicy (i.e. interesting) and seldom needs contextualization, i.e. you deliver it just to the right audience (that's why gossiping about friends to your relatives is almost never fun) and you do it at the right time (that's the difference between gossip and anecdotes). Therefore, I tell my students to write as if they were gossiping; treat your research in a good narrative way, because a poor narrative can make your results be overlooked.
I’ve read too many theses in which conclusions are about how well the methods work, and unless your thesis has to do with developing a new method, that is a terrible mistake. Methods work well, that is why they are established methods.
Take the following example for a piece of gossip: Say you are in a committed monogamous relationship and you have the feeling your significant other is cheating on you. This is your hypothesis. This hypothesis is supported by their strange behavior; that would be the evidence supporting your hypothesis. But be careful, because there could also be anecdotal evidence, which isn't significant to your own case, as in "the spouse of a friend behaved this way when cheating, ergo mine is cheating too". The use of anecdotal evidence to support a hypothesis should be avoided like the plague. Then, you need an experimental setup to prove, or even better disprove, your hypothesis. To that end you could hack into your better half's email, have them followed either by yourself or a third party, confront their friends, snoop their phone, basically anything that might give you some information. This is the core of your research: your data. But data is meaningless without a conclusion. Some people think data should speak for itself and let each reader come up with their own conclusions so they don't get biased by your own vision, and while there is some truth to that, your data makes sense in a context that you helped develop, so providing your own conclusions is needed or we aren't scientists but stamp collectors.
This is when most students make a terrible mistake, because here is where gossip skills come in handy: When asked by friends (peers) what was it that you found out, most students will try to convince them that they knew the best algorithms for hacking a phone, or that they were super inconspicuous when following their partners, or even how important was the new method for installing a third-party app on their phones to have a text message sent every time their phone went outside a certain area; and yeah, by the way, I found them in bed together. Ultimately their question is left unanswered and the true conclusion lies buried in a lengthy, boring description of the work performed; remember, you performed all that work to reach an ultimate goal, not just for the sake of performing it.
Writers say that every sentence in a book should either move the story forward or show character; in the same way, every section of your scientific written piece should help make the point of your research, keep the why and the what distinct from the how, and don’t be afraid about treating your research as the best piece of gossip you’ve had in years because if you are a science student it is.
Some Comp.Chem. Tweeps
Dealing with Spin Contamination
Monday, 31 March 2014
Planck's Constant = Human Convention Standard Frequency vs Electronvolt
The recent posts on the photoelectric effect exhibit Planck's constant $h$ as a conversion standard between the units of light frequency $\nu$ in $Hz\, = 1/s$ as periods per second and electronvolt ($eV$), expressed in Einstein's law of photoelectricity:
• $h\times (\nu -\nu_0) = eU$,
where $\nu_0$ is the smallest frequency producing a photoelectric current, $e$ is the charge of an electron and $U$ the stopping potential in Volts $V$ for which the current is brought to zero for $\nu > \nu_0$. Einstein obtained, referring to Lenard's 1902 experiment with $\nu -\nu_0 = 1.03\times 10^{15}\, Hz$ corresponding to the ultraviolet limit of the solar spectrum and $U = 4.3\, V$
• $h = 4.17\times 10^{-15} eVs$
to be compared with the reference value $4.135667516(91)\times 10^{-15}\, eVs$ used in Planck's radiation law. We see that here $h$ occurs as a conversion standard between Hertz $Hz$ and electronvolt $eV$ with
• $1\, Hz = 4.17\times 10^{-15}\, eV$
To connect to quantum mechanics, we recall that Schrödinger's equation is normalized with $h$ so that the first ionization energy of Hydrogen at frequency $\nu = 3.3\times 10^{15}\, Hz$ equals $13.6\, eV$, to be compared with $3.3\times 4.17 = 13.76\, eV$ corresponding to Lenard's photoelectric experiment.
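A back-of-the-envelope check of the numbers quoted above (using only the values already given in this post):

```python
nu_diff = 1.03e15          # Hz, nu - nu_0 from Lenard's experiment as quoted
U = 4.3                    # V, stopping potential as quoted
h_eVs = U / nu_diff        # since eU [eV] = h (nu - nu_0)
print(h_eVs)               # ~4.17e-15 eV s

nu_H = 3.3e15              # Hz, hydrogen ionization frequency as quoted
print(h_eVs * nu_H)        # ~13.8 eV, to compare with 13.6 eV
```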
We understand that Planck's constant $h$ can be seen as a conversion standard between light energy measured by frequency and electron energy measured in electronvolts. The value of $h$ can then be determined by photoelectricity and thereafter calibrated into Schrödinger's equation to fit with ionization energies as well as into Planck's law as a parameter in the high-frequency cut-off (without a very precise value). The universal character of $h$ as a smallest unit of action is then revealed to simply be a human convention standard without physical meaning. What a disappointment!
• Planck's constant was introduced as a fundamental scale in the early history of quantum mechanics. We find a modern approach where Planck's constant is absent: it is unobservable except as a constant of human convention.
Finally: It is natural to view frequency $\nu$ as a measure of energy per wavelength, since radiance as energy per unit of time scales with $\nu\times\nu$ in accordance with Planck's law, which can be viewed as $\nu$ wavelengths each of energy $\nu$ passing a specific location per unit of time. We thus expect to find a linear relation between frequency and electronvolt as two energy scales: If 1 € (Euro) is equal to 9 Skr (Swedish Crowns), then 10 € is equal to 90 Skr.
Sunday, 30 March 2014
Photoelectricity: Millikan vs Einstein
The American physicist Robert Millikan received the Nobel Prize in 1923 for (i) experimental determination of the charge $e$ of an electron and (ii) experimental verification of Einstein's law of photoelectricity awarded the 1921 Prize.
Millikan started out his experiments on photoelectricity with the objective of disproving Einstein's law and in particular the underlying idea of light quanta. To his disappointment Millikan found that according to his experiments Einstein's law in fact was valid, but he resisted by questioning the conception of light-quanta even in his Nobel lecture:
• In view of all these methods and experiments the general validity of Einstein’s equation is, I think, now universally conceded, and to that extent the reality of Einstein’s light-quanta may be considered as experimentally established.
• But the conception of localized light-quanta out of which Einstein got his equation must still be regarded as far from being established.
• Whether the mechanism of interaction between ether waves and electrons has its seat in the unknown conditions and laws existing within the atom, or is to be looked for primarily in the essentially corpuscular Thomson-Planck-Einstein conception as to the nature of radiant energy is the all-absorbing uncertainty upon the frontiers of modern Physics.
Millikan's experiments consisted in subjecting a metallic surface to light of different frequencies $\nu$ and measuring the resulting photoelectric current, determining the smallest frequency $\nu_0$ producing a current and the (negative) stopping potential required to bring the current to zero for frequencies $\nu >\nu_0$. Millikan thus measured $\nu_0$ and $V$ for different frequencies $\nu > \nu_0$ and found a linear relationship between $\nu -\nu_0$ and $V$, which he expressed as
• $\frac{h}{e}(\nu -\nu_0)= V$,
in terms of the charge $e$ of an electron which he had already determined experimentally, and the constant $h$ which he determined to have the value $6.57\times 10^{-34}\, Js$. The observed linear relation between $\nu -\nu_0$ and $V$ could then be expressed as
• $h\nu = h\nu_0 +eV$
which Millikan had to admit was nothing but Einstein's law with $h$ representing Planck's constant.
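Schematically, Millikan's procedure amounts to a straight-line fit of stopping potential against frequency; the sketch below uses made-up data points, not Millikan's measurements, just to show where $h/e$ appears.

```python
import numpy as np

e = 1.602e-19                                              # C
nu = np.array([6.0e14, 7.0e14, 8.0e14, 9.0e14, 1.0e15])    # Hz, synthetic
V  = np.array([0.30, 0.72, 1.13, 1.54, 1.96])              # volts, synthetic

slope, intercept = np.polyfit(nu, V, 1)    # V = (h/e)(nu - nu_0)
h = slope * e                              # Planck's constant from the slope
nu0 = -intercept / slope                   # threshold frequency from the intercept
print(h, nu0)
```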
But Millikan could argue that, after all, the only thing he had done was to establish a macroscopic linear relationship between $\nu -\nu_0$ and $V$, which in itself did not give undeniable evidence of the existence of microscopic light-quanta. What Millikan did was to measure the current for different potentials of the plus pole receiving the emitted electrons under different exposure to light and thereby discovered a linear relationship between frequency $\nu -\nu_0$ and stopping potential $V$ independent of the intensity of the light and properties of the metallic surface.
By focussing on frequency and stopping potential Millikan could make his experiment independent of the intensity of incoming light and of the metallic surface, and thus capture a conversion between light energy and electron energy of general significance.
But why then should stopping potential $V$ scale with frequency $\nu - \nu_0$, or $eV$ scale with frequency $h(\nu - \nu_0)$? Based on the analysis on Computational Blackbody Radiation the answer would be that $h\nu$ represents a threshold energy for emission of radiation in Planck's radiation law and $eV$ represents a threshold energy for emission of electrons, none of which would demand light quanta.
Saturday, 29 March 2014
Einstein: Genius by Definition of Law of Photoelectricity
• $h\nu = h\nu_0 + eV$
• It is the theory which decides what we can observe.
torsdag 27 mars 2014
How to Make Schrödinger's Equation Physically Meaningful + Computable
The derivation of Schrödinger's equation as the basic mathematical model of quantum mechanics is hidden in mystery: The idea is somehow to start considering a classical Hamiltonian $H(q,p)$ as the total energy equal to the sum of kinetic and potential energy:
• $H(q,p)=\frac{p^2}{2m} + V(q)$,
where $q(t)$ is position and $p=m\dot q= m\frac{dq}{dt}$ momentum of a moving particle of mass $m$, and make the formal ad hoc substitution with $\bar h =\frac{h}{2\pi}$ and $h$ Planck's constant:
• $p = -i\bar h\nabla$ with formally $\frac{p^2}{2m} = - \frac{\bar h^2}{2m}\nabla^2 = - \frac{\bar h^2} {2m}\Delta$,
to get Schrödinger's equation in time dependent form
• $i\bar h\frac{\partial\psi}{\partial t}=H\psi$,
with now $H$ a differential operator acting on a wave function $\psi (x,t)$ with $x$ a space coordinate and $t$ time, given by
• $H\psi \equiv -\frac{\bar h^2}{2m}\Delta \psi + V\psi$,
where now $V(x)$ acts as a given potential function. As a time independent eigenvalue problem Schrödinger's equation then takes the form:
• $-\frac{\bar h^2}{2m}\Delta \psi + V\psi = E\psi$,
with $E$ an eigenvalue, as a stationary value for the total energy
• $K(\psi ) + W(\psi )\equiv\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx +\int V\psi^2\, dx$,
as the sum of kinetic energy $K(\psi )$ and potential energy $W(\psi )$, under the normalization $\int\psi^2\, dx = 1$. The ground state then corresponds to minimal total energy.
We see that the total energy $K(\psi ) + W(\psi)$ can be seen as a smoothed version of $H(q,p)$ with
• $V(q)$ replaced by $\int V\psi^2\, dx$,
• $\frac{p^2}{2m}=\frac{m\dot q^2}{2}$ replaced by $\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx$,
and Schrödinger's equation as expressing stationarity of the total energy, as an analog of the classical equations of motion expressing stationarity of the Hamiltonian $H(p,q)$ under variations of the path $q(t)$.
We conclude that Schrödinger's equation for a one electron system can be seen as a smoothed version of the equation of motion for a classical particle acted upon by a potential force, with Planck's constant serving as a smoothing parameter.
Similarly it is natural to consider smoothed versions of classical many-particle systems as quantum mechanical models resembling Hartree variants of Schrödinger's equation for many-electrons systems, that is quantum mechanics as smoothed particle mechanics, thereby (maybe) reducing some of the mystery of Schrödinger's equation and opening to computable quantum mechanical models.
We see Schrödinger's equation arising from a Hamiltonian as total energy, kinetic energy + potential energy, rather than from a Lagrangian as kinetic energy - potential energy. The reason is a confusing terminology with $K(\psi )$ named kinetic energy even though it does not involve time differentiation, while it would more naturally occur in a Lagrangian as a form of potential energy, like elastic energy in classical mechanics.
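As an illustration of the eigenvalue problem as a computable model, here is a minimal finite difference sketch in Python which diagonalizes the discretized operator $-\frac{1}{2}\Delta + V$ in one space dimension; the units $\bar h = m = 1$ and the harmonic potential $V(x)=x^2/2$ are my own illustrative choices, for which the exact stationary energies are $0.5, 1.5, 2.5,\ldots$

```python
import numpy as np

# Units hbar = m = 1; harmonic potential V(x) = x^2/2 (illustrative choice).
N, L = 400, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Discretize H = -(1/2) d^2/dx^2 + V with the standard 3-point Laplacian.
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The lowest eigenvalues approximate the stationary energies E in H psi = E psi.
E = np.linalg.eigvalsh(H)[:3]
print(E)   # approximately [0.5, 1.5, 2.5]
```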
onsdag 26 mars 2014
New Paradigm of Computational Quantum Mechanics vs ESS
ESS, the European Spallation Source, is a projected €3 billion research facility captured by clever Swedish politicians to be located on the plains outside the old university town of Lund in Southern Sweden, with start in 2025: Neutrons are excellent for probing materials on the molecular level – everything from motors and medicine, to plastics and proteins. ESS will provide around 30 times brighter neutron beams than existing facilities today. The difference between the current neutron sources and ESS is something like the difference between taking a picture in the glow of a candle, or doing it under flash lighting.
Quantum mechanics was invented in the 1920s under the limits of pen and paper computation, but allowing limitless theory thriving in Hilbert spaces populated by multidimensional wave functions described by fancy symbols on paper. Lofty theory and sparse computation were compensated by inflating the observer role of the physicist to a view that only physics observed by a physicist was real physics, with extra support from a conviction that the life or death of Schrödinger's cat depended more on the observer than on the cat and that supercolliders are very expensive. The net result was (i) uncomputable limitless theory combined with (ii) unobservable practice as the essence of the Copenhagen Interpretation filling text books.
Today the computer opens to a change from impossibility to possibility, but this requires a fundamental change of the mathematical models, from uncomputable models to computable nonlinear systems of 3d Hartree-Schrödinger equations (HSE) or Density Functional Theory (DFT). This brings theory and computation together into a new paradigm of Computational Quantum Mechanics (CQM), shortly summarized as follows:
1. Experimental inspection of microscopic physics difficult/impossible.
2. HSE-DFT for many-particle systems are solvable computationally.
3. HSE-DFT simulation allows detailed inspection of microscopics.
4. Assessment of HSE simulations can be made by comparing macroscopic outputs with observation.
The linear multidimensional Schrödinger equation has no meaning in CQM and a new foundation is asking to be developed. The role of observation in the Copenhagen Interpretation is taken over by computation in CQM: Only computable physics is real physics, at least if physics is a form of analog computation, which may well be the case. The big difference is that anything computed can be inspected and observed, which opens to non-destructive testing with only limits set by computational power.
The Large Hadron Collider (LHC) and the projected neutron source European Spallation Source (ESS) in Lund, Sweden, represent the old paradigm of smashing to pieces the fragile structure under investigation, and as such may well be doomed.
tisdag 25 mars 2014
Fluid Turbulence vs Quantum Electrodynamics
Horace Lamb (1849 - 1934), author of the classic text Hydrodynamics: It is asserted that the velocity of a body not acted on by any force will be constant in magnitude and direction, whereas the only means of ascertaining whether a body is, or is not, free from the action of force is by observing whether its velocity is constant.
There is a famous quote by the British applied mathematician Horace Lamb summarizing the state of classical fluid mechanics and the new quantum mechanics in 1932 as follows:
• I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.
Concerning the turbulent motion of fluids I am happy to report that this matter is now largely resolved by computation, as made clear in the article New Theory of Flight soon to be delivered for publication in Journal of Mathematical Fluid Mechanics, with lots of supplementary material on The Secret of Flight. This gives good hope that the other problem of quantum electrodynamics can likewise be unlocked by viewing The World as Computation:
• In a time of turbulence and change, it is more true than ever that knowledge is power. (JFK)
Quantum Physics as Digital Continuum Physics
Quantum mechanics was born in 1900 in Planck's theoretical derivation of a modification of Rayleigh-Jeans law of blackbody radiation based on statistics of discrete "quanta of energy" of size $h\nu$, where $\nu$ is frequency and $h =6.626\times 10^{-34}\, Js$ is Planck's constant.
This was the result of a long fruitless struggle to explain the observed spectrum of radiating bodies using deterministic electromagnetic wave theory, which ended in Planck's complete surrender to statistics as the only way he could see to avoid the "ultraviolet catastrophe" of infinite radiation energies, in a return to the safe haven of his dissertation work in 1889-90 based on Boltzmann's statistical theory of heat.
Planck described the critical step in his analysis of a radiating blackbody as a discrete collection of resonators as follows:
• We must now give the distribution of the energy over the separate resonators of each frequency, first of all the distribution of the energy $E$ over the $N$ resonators of frequency $\nu$. If $E$ is considered to be a continuously divisible quantity, this distribution is possible in infinitely many ways.
• We consider, however (this is the most essential point of the whole calculation) $E$ to be composed of a well-defined number of equal parts and use thereto the constant of nature $h = 6.55\times 10^{-27}\, erg\, sec$. This constant multiplied by the common frequency $\nu$ of the resonators gives us the energy element in $erg$, and dividing $E$ by this energy element we get the number $P$ of energy elements which must be divided over the $N$ resonators.
• If the ratio thus calculated is not an integer, we take for $P$ an integer in the neighbourhood. It is clear that the distribution of P energy elements over $N$ resonators can only take place in a finite, well-defined number of ways.
We here see Planck introducing a constant of nature $h$, later referred to as Planck's constant, with a corresponding smallest quanta of energy $h\nu$ for radiation (light) of frequency $\nu$.
Then Einstein entered in 1905 with a law of photoelectricity with $h\nu$ viewed as the energy of a light quantum of frequency $\nu$, later named photon and crowned as an elementary particle.
Finally, in 1926 Schrödinger formulated a wave equation involving a formal momentum operator $-i\bar h\nabla$ including Planck's constant $h$, as the birth of quantum mechanics, the incarnation of modern physics based on postulating that microscopic physics is
1. "quantized" with smallest quanta of energy $h\nu$,
2. indeterministic with discrete quantum jumps obeying laws of statistics.
However, microscopics based on statistics is contradictory, since it requires microscopics of microscopics in an endless regression, which has led modern physics into an impasse of ever increasing irrationality into many-worlds and string theory as expressions of scientific regression to microscopics of microscopics. The idea of "quantization" of the microscopic world goes back to the atomism of Democritus, a primitive scientific idea rejected already by Aristotle arguing for the continuum, which however combined with modern statistics has ruined physics.
But there is another way of avoiding the ultraviolet catastrophe without statistics, which is presented on Computational Blackbody Radiation with physics viewed as analog finite precision computation, which can be modeled as digital computational simulation.
This is physics governed by deterministic wave equations with solutions evolving in analog computational processes, which can be simulated digitally. This is physics without microscopic games of roulette as rational deterministic classical physics subject only to natural limitations of finite precision computation.
This opens to a view of quantum physics as digital continuum physics which can bring rationality back to physics. It opens to explore an analog physical atomistic world as a digital simulated world where the digital simulation reconnects to analog microelectronics. It opens to explore physics by exploring the digital model, readily available for inspection and analysis in contrast to analog physics hidden to inspection.
The microprocessor world is "quantized" into discrete processing units, but it is a deterministic world with digital output.
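As a toy example of deterministic digital simulation of an analog wave, here is a minimal leapfrog computation in Python of the 1d wave equation $u_{tt}=u_{xx}$ carried out in single precision; the periodic domain and Gaussian initial pulse are my own choices, and the point is only that the evolution is fully deterministic, with finite precision as the only limitation.

```python
import numpy as np

# 1d wave equation u_tt = u_xx on a periodic domain, leapfrog in time,
# computed in float32 to mimic finite precision computation.
N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False, dtype=np.float32)
dx = np.float32(1.0 / N)
dt = np.float32(0.5) * dx            # CFL-stable time step

u = np.exp(-200.0 * (x - 0.5) ** 2).astype(np.float32)   # initial pulse
u_old = u.copy()                                          # zero initial velocity

for _ in range(2000):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    u_new = 2.0 * u - u_old + dt**2 * lap
    u_old, u = u, u_new

# The evolution is fully deterministic and reproducible bit by bit.
print(u.dtype, float(u.max()))
```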
måndag 24 mars 2014
Hollywood vs Principle of Least Action
The fictional character of the Principle of Least Action, viewed as serving a fundamental role in physics, can be understood by comparing with making movies:
The dimension of action as energy x time comes out very naturally in movie making as actor energy x length of the scene. However, outside Hollywood a quantity of dimension energy x time is questionable from a physical point of view, since there seems to be no natural movie camera which can record and store such a quantity.
söndag 23 mars 2014
The Torturer's Dilemma vs Uncertainty Principle vs Computational Simulation
Bohr expressed in Light and Life (1933) the Thanatological Principle stating that to check out the nature of something, one has to destroy that very nature, which we refer to as The Torturer's Dilemma:
• We should doubtless kill an animal if we tried to carry the investigations of its organs so far that we could describe the role played by single atoms in vital functions. In every experiment on living organisms, there must remain an uncertainty as regards the physical conditions to which they are subjected…the existence of life must be considered as an elementary fact that cannot be explained, but must be taken as a starting point in biology, in a similar way as the quantum of action, which appears as an irrational element from the point of view of classical mechanics, taken together with the existence of the elementary particles, forms the foundation of atomic physics.
• It has turned out, in fact, that all effects of light may be traced down to individual processes, in which a so-called light quantum is exchanged, the energy of which is equal to the product of the frequency of the electromagnetic oscillations and the universal quantum of action, or Planck's constant. The striking contrast between this atomicity of the light phenomenon and the continuity of the energy transfer according to the electromagnetic theory, places us before a dilemma of a character hitherto unknown in physics.
Bohr's starting point for his "Copenhagen" version of quantum mechanics still dominating text books, was:
• Planck's discovery of the universal quantum of action which revealed a feature of wholeness in individual atomic processes defying causal description in space and time.
• Planck's discovery of the universal quantum of action taught us that the wide applicability of the accustomed description of the behaviour of matter in bulk rests entirely on the circumstance that the action involved in phenomena on the ordinary scale is so large that the quantum can be completely neglected. (The Connection Between the Sciences, 1960)
Bohr thus argued that the success of the notion of universal quantum of action depends on the fact that it can be completely neglected.
The explosion of digital computation since Bohr's time offers a new way of resolving the impossibility of detailed inspection of microscopics, by allowing detailed non-invasive inspection of computational simulations of microscopics. With this perspective, efforts should be directed to the development of computable models of microscopics, rather than to smashing high speed protons or neutrons into innocent atoms in order to find out their inner secrets, without getting reliable answers.
lördag 22 mars 2014
The True Meaning of Planck's Constant as Measure of Wavelength of Maximal Radiance and Small-Wavelength Cut-off.
The modern physics of quantum mechanics was born in 1900 when Max Planck, after many unsuccessful attempts, in an "act of despair" introduced a universal smallest quantum of action $h= 6.626\times 10^{-34}\, Js = 4.14\times 10^{-15}\, eVs$, named Planck's constant, in a theoretical justification of the spectrum of radiating bodies observed in experiments, based on statistics of packets of energy of size $h\nu$ with $\nu$ frequency.
Planck describes this monumental moment in the history of science in his 1918 Nobel Lecture as follows:
Planck thus finally succeeded in proving Planck's radiation law as a modification of Rayleigh-Jeans law with a high-frequency cut-off factor eliminating "the ultraviolet catastrophe" which had paralyzed physics shortly after the introduction of Maxwell's wave equations for electromagnetics as the culmination of classical physics.
Planck's constant $h$ enters Planck's law
• $I(\nu ,T)=\gamma \theta (\nu , T)\nu^2 T$, where $\gamma =\frac{2k}{c^2}$,
where $I(\nu ,T)$ is normalized radiance, as a parameter in the multiplicative factor
• $\theta (\nu ,T)=\frac{\alpha}{e^{\alpha} -1}$ with $\alpha =\frac{h\nu}{kT}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$, $k = 1.38\times 10^{-23}\, J/K = 8.62\times 10^{-5}\, eV/K$ is Boltzmann's constant and $c\, m/s$ is the speed of light.
We see that $\theta (\nu ,T)\approx 1$ for small $\alpha$ and enforces a high-frequency small-wavelength cut-off for $\alpha > 10$, that is, for
• $\nu > \nu_{max}\approx \frac{10T}{\hat h}$ where $\hat h =\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
• $\lambda < \lambda_{min}\approx \frac{c}{10T}\hat h$ where $\nu\lambda =c$,
with maximal radiance occurring for $\alpha = 2.821$ in accordance with Wien's displacement law. With $T = 1000\, K$ the cut-off lies just below the visible range, at $\nu\approx 2\times 10^{14}\, Hz$ and $\lambda\approx 10^{-6}\, m$. We see that the relation
• $\frac{c}{10T}\hat h =\lambda_{min}$,
gives $\hat h$ a physical meaning as measure of wave-length of maximal radiance and small-wavelength cut-off of atomic size scaling with $\frac{c}{T}$.
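A small numerical check of these numbers, using standard values of $h$, $k$ and $c$ and $\alpha =\frac{h\nu}{kT}$ as above:

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI units
hhat = h / k                               # scaled Planck constant, ~4.8e-11 Ks

def theta(nu, T):
    """Cut-off factor alpha/(exp(alpha)-1) in Planck's law, alpha = h*nu/(k*T)."""
    a = h * nu / (k * T)
    return a / np.expm1(a)

T = 1000.0
nu_max = 2.821 * T / hhat     # frequency of maximal radiance (Wien)
nu_cut = 10.0 * T / hhat      # rough cut-off where theta has dropped off
print(hhat)                   # ~4.8e-11 Ks
print(nu_max, c / nu_max)     # ~5.9e13 Hz, wavelength ~5e-6 m
print(nu_cut, c / nu_cut)     # ~2.1e14 Hz, wavelength ~1.4e-6 m
print(theta(nu_cut, T))       # ~5e-4, i.e. strongly damped
```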
Modern physicists are trained to believe that Planck's constant $h$ as the universal quantum of action represents a smallest unit of a "quantized" world, with a corresponding Planck length $l_p= 1.62\times 10^{-35}\, m$ as a smallest unit of length, about 20 orders of magnitude smaller than the proton diameter.
We have seen that Planck's constant enters in Planck's radiation law in the form $\hat h =\frac{h}{k}$, and not as $h$, and that $\hat h$ has the role of setting a small-wavelength cut-off scaling with $\frac{c}{T}$.
Small-wavelength cut-off in the radiation from a body is possible to envision in wave mechanics as an expression of finite precision analog computation. In this perspective Planck's universal quantum of action emerges as unnecessary fiction about exceedingly small quantities beyond reason and reality.
torsdag 20 mars 2014
Principle of Least Action vs Adam Smith's Invisible Hand
Violation of the PLA of the capitalistic system in 1929.
The Principle of Least Action (PLA) expressing
• Stationarity of the Action (the integral in time of the Lagrangian),
with the Lagrangian the difference between kinetic and potential energies, is cherished by physicists as a deep truth about physics: Tell me the Lagrangian and I will tell you the physics, because a dynamical system will (by reaction to local forces) evolve so as to keep the Action stationary as if led by an invisible hand steering the system towards a final cause of least action.
PLA is similar to the invisible hand of Adam Smith supposedly steering an economy towards a final cause of maximal efficiency or least action (maximal common happiness) by asking each member of the economy to seek to maximize individual profit (individual happiness). This is the essence of the capitalistic system. The idea is that a final cause of maximal efficiency can be reached without telling the members the meaning of the whole thing, just telling each one to seek to maximize his/her own individual profit (happiness).
Today the capitalistic system is shaking and nobody knows how to steer towards a final cause of maximal efficiency. So the PLA of economy seems to be rather empty of content. It may be that similarly the PLA of physics is void of real physics. In particular, the idea of a smallest quantum of action as a basis of quantum mechanics may well be unphysical.
To Per-Anders Ivert, Editor of SMS-Bulletinen
I have sent the following contribution to Bulletinen, the newsletter of the Swedish Mathematical Society (SMS), prompted by editor Per-Anders Ivert's opening words in the February 2014 issue.
To SMS-Bulletinen
Editor Per-Anders Ivert opens the February issue of Bulletinen with: "Speaking of reactions; they rarely come, but I was made aware of an amusing reaction to something I wrote a few issues ago about whether school mathematics is needed. Some fellow ("jeppe") from Chalmers, a person I do not know and believe I have never been in contact with, wrote on his blog":
• The October issue of the Swedish Mathematical Society's Bulletin takes up the question of whether school mathematics is "needed".
• Chairman Per-Anders Ivert opens with: I myself cannot answer what is needed and not needed. It depends on what one means by "needed" and also on what school mathematics looks like.
• Ulf Persson follows up with a reflection that begins: It seems to be a fact that a large part of the population detests mathematics and finds school mathematics painful.
• Ivert and Persson express the bewilderment, and the resulting anguish, that marks the mathematician's view of the role of his subject in today's school: The professional mathematician no longer knows whether school mathematics is "needed", and then neither the school mathematics teacher nor the pupil knows it either.
Ivert continues with:
• "When I saw this I was rather surprised. I thought that my quoted words were completely uncontroversial, and I did not quite understand what motivated the sarcasm 'chairman'. This Chalmers player probably did not believe that I was chairman of the Society; presumably it is meant as some allusion to East Asian political structures".
• "On closer reading I saw, however, that Ulf Persson had criticized this blogger in his text, which apparently had led to a mental short circuit in the blogger, with associations starting to run criss-cross. If one wants to ponder my 'bewilderment and anguish', I offer some material in this issue".
Ivert's remarks about a "fellow at Chalmers" and "Chalmers player" should be seen against the background of the open letter to the Swedish Mathematical Society and the National Committee for Mathematics which I published on my blog on 22 Dec 2013, and in which I asked what responsibility the Society and the Committee take for mathematics education in the country, including school mathematics and the ongoing Matematiklyftet.
Despite several reminders I have received no answer, neither from the Society (chairman Pär Kurlberg) nor from the Committee (Torbjörn Lundh) nor from KVA-Mathematics (Nils Dencker), and I now put this question once more directly to You, Per-Anders Ivert: If You and the Society have not been struck by any "bewilderment and anguish", then You must be able to give an answer and publish it together with this contribution of mine in the next issue of Bulletinen.
Concerning Ulf Persson's piece under Ordet är mitt, one may say that what counts as far as knowledge is concerned is difference in knowledge: what everybody knows is of little interest. A school that primarily aims at giving everybody a common basic knowledge, whatever that may be, has difficulty motivating its pupils and is devastating both for the many who do not reach the common goals and for the somewhat fewer who could perform much better. As long as Euclidean geometry and Latin were reserved for a small part of the pupils, motivation could be created and study goals reached, fairly independently of the intellectual capacity and social background of the pupils (and teachers). Matematiklyftet, which is supposed to lift everybody, is an empty swing in the air at great cost.
The epithets attached to my person in Bulletinen have now been extended from "the Johnson gang" to "fellow at Chalmers" and "Chalmers player", the latter perhaps no longer so current since I moved to KTH 7 years ago. Per-Anders deplores linguistic degradation, but that apparently does not include "jeppe", "lirare" (player) and "mental short circuit".
Claes Johnson
prof em of applied mathematics, KTH
onsdag 19 mars 2014
Lagrange's Biggest Mistake: Least Action Principle Not Physics!
The basic idea goes back to Leibniz:
And to Maupertuis (1746):
• There are no particles or quanta. All is waves.
Physics as Analog Computation instead of Physics as Observation
5. Meaning of Heisenberg's Uncertainty Principle.
7. Statistical interpretation of Schrödinger's multidimensional wave function.
8. Meaning of Bohr's Complementarity Principle.
9. Meaning of Least Action Principle.
5. Uncertainty Principle as effect of finite precision computation.
6. Statistics replaced by finite precision computation.
tisdag 18 mars 2014
Blackbody as Linear High Gain Amplifier
A blackbody acts as a high gain linear (black) amplifier.
The analysis on Computational Blackbody Radiation (with book) shows that a radiating body can be seen as a linear high gain amplifier with a high-frequency cut-off scaling with noise temperature, modeled by a wave equation with small damping, which after Fourier decomposition in space takes the form of a damped linear oscillator for each wave frequency $\nu$:
• $\ddot u_\nu +\nu^2u_\nu - \gamma\dddot u_\nu = f_\nu$,
where $u_\nu(t)$ is oscillator amplitude and $f_\nu (t)$ signal amplitude of wave frequency $\nu$ with $t$ time, the dot indicates differentiation with respect to $t$, and $\gamma$ is a small constant satisfying $\gamma\nu^2 << 1$ and the frequency is subject to a cut-off of the form $\nu < \frac{T_\nu}{h}$, where
• $T_\nu =\overline{\dot u_\nu^2}\equiv\int_I \dot u_\nu^2(t)\, dt$,
is the (noise) temperature of frequency $\nu$, $I$ a unit time interval, and $h$ a constant representing a level of finite precision.
The analysis shows, under an assumption of near resonance, the following basic relation in stationary state:
• $\gamma\overline{\ddot u_\nu^2} \approx \overline{f_\nu^2}$,
as a consequence of small damping guiding $u_\nu (t)$ so that $\dot u_\nu(t)$ is out of phase with $f_\nu(t)$ and thus "pumps" the system little. The result is that the signal $f_\nu (t)$ is balanced to major part by the oscillator
• $\ddot u_\nu +\nu^2u_\nu$,
and to minor part by the damping
• $ - \gamma\dddot u_\nu$,
• since $\gamma^2\overline{\dddot u_\nu^2} \approx \gamma\nu^2 \gamma\overline{\ddot u_\nu^2}\approx\gamma\nu^2\overline{f_\nu^2} <<\overline{f_\nu^2}$.
This means that the blackbody can be viewed to act as an amplifier radiating the signal $f_\nu$ under the small input $-\gamma \dddot u_\nu$, thus with a high gain. The high frequency cut-off then gives a requirement on the temperature $T_\nu$, referred to as noise temperature, to achieve high gain.
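The gain can be made explicit by inserting a time harmonic signal $f_\nu(t)=\Re (Fe^{i\omega t})$ into the stated equation, which gives $u_\nu=\Re (Ue^{i\omega t})$ with $U=F/(\nu^2-\omega^2+i\gamma\omega^3)$. The following Python sketch evaluates the response $|\ddot u_\nu|/|f_\nu|$ over a band of signal frequencies; the particular numbers $\nu =100$ and $\gamma =10^{-6}$ are only illustrative choices satisfying $\gamma\nu^2 << 1$.

```python
import numpy as np

# Forced oscillator  u'' + nu^2 u - gamma u''' = f  with f = Re(F e^{i w t}).
# Inserting u = Re(U e^{i w t}) gives U = F / (nu^2 - w^2 + i gamma w^3).
nu = 100.0          # oscillator frequency (illustrative)
gamma = 1.0e-6      # small damping, gamma*nu^2 = 0.01 << 1
F = 1.0

w = np.linspace(50.0, 150.0, 100001)
U = F / (nu**2 - w**2 + 1j * gamma * w**3)

# Response of the oscillator acceleration u'' relative to the signal f:
gain = np.abs(-w**2 * U) / F
print("max gain", gain.max(), "at w =", w[np.argmax(gain)])
# max gain ~ 1/(gamma*nu) = 1e4, attained near w = nu
```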
Quantum Mechanics from Blackbody Radiation as "Act of Despair"
Max Planck: The whole procedure was an act of despair because a theoretical interpretation (of black-body radiation) had to be found at any price, no matter how high that might be... I was ready to sacrifice any of my previous convictions about physics... For this reason, on the very first day when I formulated this law, I began to devote myself to the task of investing it with true physical meaning.
The textbook history of modern physics tells that quantum mechanics was born from Planck's proof of the universal law of blackbody radiation based on statistics of discrete lumps of energy or energy quanta $h\nu$, where $h$ is Planck's constant and $\nu$ frequency. The textbook definition of a blackbody is a body which absorbs all, reflects none and re-emits all of incident radiation:
• A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. (Wikipedia)
• Theoretical surface that absorbs all radiant energy that falls on it, and radiates electromagnetic energy at all frequencies, from radio waves to gamma rays, with an intensity distribution dependent on its temperature. (Merriam-Webster)
• An ideal object that is a perfect absorber of light (hence the name since it would appear completely black if it were cold), and also a perfect emitter of light. (Astro Virginia)
• A black body is a theoretical object that absorbs 100% of the radiation that hits it. Therefore it reflects no radiation and appears perfectly black. (Egglescliff)
• A hypothetical body that completely absorbs all wavelengths of thermal radiation incident on it. (Eric Weisstein's World of Physics)
But there is something more to a blackbody, and that is the high frequency cut-off, expressed in Wien's displacement law, of the principal form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature and $\hat h$ a Planck constant, stating that only frequencies below the cut-off $\frac{T}{\hat h}$ are re-emitted. Absorbed frequencies above the cut-off will then be stored as internal energy in the body under increasing temperature.
Bodies made of different materials which absorb all incident radiation will have different high-frequency cut-offs, and an (ideal) blackbody should then be characterized as having maximal cut-off, that is smallest Planck constant $\hat h$, with the maximum taken over all real bodies.
A cavity with graphite walls is used as a reference blackbody defined by the following properties:
1. absorption of all incident radiation
2. maximal cut-off - smallest Planck constant $\hat h\approx 4.8\times 10^{-11}\, Ks$,
and $\hat h =\frac{h}{k}$ is Planck's constant $h$ scaled by Boltzmann's constant $k$.
Planck viewed the high frequency cut-off defined by the Planck constant $\hat h$ as inexplicable in Maxwell's classical electromagnetic wave theory. In an "act of despair" to save physics from collapse in an "ultraviolet catastrophe", a rescue mission which Planck had taken on, Planck resorted to statistics of discrete energy quanta $h\nu$, which in the 1920s resurfaced as a basic element of quantum mechanics.
But a high frequency cut-off in wave mechanics is not inexplicable; it is a well known phenomenon in all forms of waves including elastic, acoustic and electromagnetic waves, and can be modeled as a dissipative loss effect, where high frequency wave motion is broken down into chaotic motion stored as internal heat energy. For details, see Computational Blackbody Radiation.
It is a mystery why this was not understood by Planck. Science created in an "act of despair" runs the risk of being irrational and flat wrong, and that is if anything the trademark of quantum mechanics based on discrete quanta.
Quantum mechanics as deterministic wave mechanics may be rational and understandable. Quantum mechanics as statistics of quanta is irrational and confusing. All the troubles and mysteries of quantum mechanics emanate from the idea of discrete quanta. Schrödinger had the solution:
• I insist upon the view that all is waves.
• If all this damned quantum jumping were really here to stay, I should be sorry I ever got involved with quantum theory.
But Schrödinger was overpowered by Bohr and Heisenberg, who have twisted the brains of modern physicists with devastating consequences...
måndag 17 mars 2014
Unphysical Combination of Complementary Experiments
Let us take a look at how Bohr in his famous 1927 Como Lecture describes complementarity as a fundamental aspect of Bohr's Copenhagen Interpretation still dominating textbook presentations of quantum mechanics:
• The quantum theory is characterised by the acknowledgment of a fundamental limitation in the classical physical ideas when applied to atomic phenomena. The situation thus created is of a peculiar nature, since our interpretation of the experimental material rests essentially upon the classical concepts.
• Notwithstanding the difficulties which hence are involved in the formulation of the quantum theory, it seems, as we shall see, that its essence may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity, or rather individuality, completely foreign to the classical theories and symbolised by Planck's quantum of action.
OK, we learn that quantum theory is based on a quantum postulate about an essential discontinuity symbolised as Planck's constant $h=6.626\times 10^{-34}\, Js$ as a quantum of action. Next we read about necessary interaction between the phenomena under observation and the observer:
• The circumstance, however, that in interpreting observations use has always to be made of theoretical notions, entails that for every particular case it is a question of convenience at what point the concept of observation involving the quantum postulate with its inherent 'irrationality' is brought in.
Next, Bohr emphasizes the contrast between the quantum of action and classical concepts:
• The fundamental contrast between the quantum of action and the classical concepts is immediately apparent from the simple formulas which form the common foundation of the theory of light quanta and of the wave theory of material particles. If Planck's constant be denoted by $h$, as is well known: $E\tau = I \lambda = h$, where $E$ and $I$ are energy and momentum respectively, $\tau$ and $\lambda$ the corresponding period of vibration and wave-length.
• In these formulae the two notions of light and also of matter enter in sharp contrast.
• While energy and momentum are associated with the concept of particles, and hence may be characterised according to the classical point of view by definite space-time co-ordinates, the period of vibration and wave-length refer to a plane harmonic wave train of unlimited extent in space and time.
• Just this situation brings out most strikingly the complementary character of the description of atomic phenomena which appears as an inevitable consequence of the contrast between the quantum postulate and the distinction between object and agency of measurement, inherent in our very idea of observation.
Bohr clearly brings out the unphysical aspects of the basic action formula
• $E\tau = I \lambda = h$,
where energy $E$ and momentum $I$ related to particle are combined with period $\tau$ and wave-length $\lambda$ related to wave.
Bohr then seeks to resolve the contradiction by naming it complementarity as an effect of interaction between instrument and object:
• In quantum mechanics, however, evidence about atomic objects obtained by different experimental arrangements exhibits a novel kind of complementary relationship.
• … the notion of complementarity simply characterizes the answers we can receive by such inquiry, whenever the interaction between the measuring instruments and the objects form an integral part of the phenomena.
Bohr's complementarity principle has been questioned by many over the years:
• Bohr’s interpretation of quantum mechanics has been criticized as incoherent and opportunistic, and based on doubtful philosophical premises. (Simon Saunders)
• Despite the expenditure of much effort, I have been unable to obtain a clear understanding of Bohr’s principle of complementarity (Einstein).
Of course an object may have complementary qualities such as e.g. color and weight, which can be measured in different experiments, but it is meaningless to form a new concept as color times weight or colorweight and then desperately seek to give it a meaning.
In the New View presented on Computational Blackbody Radiation the concept of action as e.g. position times velocity has a meaning in a threshold condition for dissipation, but is not a measure of a quantity which is carried by a physical object, such as mass and energy.
The ruling Copenhagen interpretation was developed by Bohr contributing a complementarity principle and Heisenberg contributing a related uncertainty principle based on position times momentum (or velocity) as Bohr's unphysical complementary combination. The uncertainty principle is often expressed as a lower bound on the product of weighted norms of a function and its Fourier transform, and then interpreted as a combat between localization in space and in frequency, or between particle and wave. In this form of the uncertainty principle the unphysical aspect of a product of position and frequency is hidden by mathematics.
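For the mathematical form referred to here, a small numerical check of the Fourier lower bound can be made with a Gaussian test function, for which the product of the spreads of the function and of its transform attains the minimal value $\frac{1}{2}$; the grid and the width $\sigma$ below are arbitrary choices.

```python
import numpy as np

# Spreads of a Gaussian and of its Fourier transform: sigma_x * sigma_k = 1/2.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 0.7                                  # width of the test Gaussian (arbitrary)
f = np.exp(-x**2 / (2 * sigma**2))

def spread(grid, density):
    p = density / np.sum(density)
    mean = np.sum(grid * p)
    return np.sqrt(np.sum((grid - mean) ** 2 * p))

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # angular frequency grid of the FFT
F = np.fft.fft(f)

sx = spread(x, np.abs(f) ** 2)
sk = spread(k, np.abs(F) ** 2)
print(sx * sk)    # ~0.5, the minimum allowed by the Fourier uncertainty relation
```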
The Copenhagen Interpretation was completed by Born's suggestion to view (the square of the modulus of) Schrödinger's wave function as a probability distribution for particle configuration, which in the absence of something better became the accepted way to handle the apparent wave-particle contradiction, by viewing it as a combination of probability wave with particle distribution.
New Uncertainty Principle as Wien's Displacement Law
The recent series of posts based on Computational Blackbody Radiation suggest that Heisenberg's Uncertainty Principle can be understood as a consequence of Wien's Displacement Law expressing high-frequency cut-off in blackbody radiation scaling with temperature according to Planck's radiation law:
• $B_\nu (T)=\gamma\nu^2T\times \theta(\nu ,T)$,
where $B_\nu (T)$ is radiated energy per unit frequency, surface area, viewing angle and second, $\gamma =\frac{2k}{c^2}$ where $k = 1.3806488\times 10^{-23}\, J/K$ is Boltzmann's constant and $c$ the speed of light in $m/s$, and $T$ is temperature in Kelvin $K$,
• $\theta (\nu ,T)=\frac{\alpha}{e^\alpha -1}$,
where $\alpha =\frac{h\nu}{kT}$ with $h=6.626\times 10^{-34}\, Js$ Planck's constant, so that $\theta (\nu ,T)\approx 1$ for $\alpha < 1$ and $\theta (\nu ,T)\approx 0$ for $\alpha > 10$ as high frequency cut-off. More precisely, maximal radiance for a given temperature $T$ occurs for $\alpha \approx 2.821$ with corresponding frequency
• $\nu_{max} = 2.821\frac{T}{\hat h}$ where $\hat h=\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
with a rapid drop for $\nu >\nu_{max}$.
The proof of Planck's Law in Computational Blackbody Radiation explains the high frequency cut-off as a consequence of finite precision computation introducing a dissipative effect damping high frequencies.
A connection to Heisenberg's Uncertainty Principle can be made by noting that a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$,
can, since $u_\nu\dot u_\nu =\frac{\dot u_\nu^2}{\nu}=\frac{T}{\nu}$, be rephrased in the following form connecting to Heisenberg's Uncertainty Principle:
• $u_\nu\dot u_\nu > \hat h$ (New Uncertainty Principle)
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$.
The New Uncertainty Principle expresses that observation/detection of a wave, that is observation/detection of amplitude $u$ and frequency $\nu =\frac{\dot u}{u}$ of a wave, requires
• $u\dot u>\hat h$.
The New Uncertainty Principle concerns observation/detection of amplitude and frequency as physical aspects of wave motion, and not, as Heisenberg's Uncertainty Principle, particle position and wave frequency as unphysical complementary aspects.
söndag 16 mars 2014
Uncertainty Principle, Whispering and Looking at a Faint Star
The recent series of posts on Heisenberg's Uncertainty Principle based on Computational Blackbody Radiation suggests the following alternative equivalent formulations of the principle:
1. $\nu < \frac{T}{\hat h}$,
2. $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$, and $\hat h =4.8\times 10^{-11}Ks$ is Planck's constant scaled with Boltzmann's constant.
Here, 1 represents Wien's displacement law stating that the radiation from a body is subject to a frequency limit scaling with temperature $T$ with the factor $\frac{1}{\hat h}$.
2 is superficially similar to Heisenberg's Uncertainty Principle as an expression of the following physics: In order to detect a wave of amplitude $u$, it is necessary that the frequency $\nu$ of the wave satisfies $\nu u^2>\hat h$. In particular, if the amplitude $u$ is small, then the frequency $\nu$ must be large.
This connects to (i) communication by whispering and (ii) viewing a distant star, both being based on the possibility of detecting small amplitude high-frequency waves.
The standard presentation of Heisenberg's Uncertainty Principle is loaded with contradictions:
• But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty?
In other words, today there is no consensus on the meaning of Heisenberg's Uncertainty principle. The reason may be that it has no meaning, but that there is an alternative which is meaningful.
Notice in particular that the product of two complementary or conjugate variables such as position and momentum is questionable if viewed as representing a physical quantity, while as a threshold it can make sense.
fredag 14 mars 2014
DN Debatt: The Principle of Public Access Is Eroding through Flattening Clauses
Nils Funcke observes on DN Debatt, under the heading The principle of public access is about to erode:
• The Swedish principle of public access to official documents is slowly but surely being worn down.
• ...outright flattening clauses ("plattläggningsparagrafer") are accepted...
• At the EU accession in 1995 Sweden issued a declaration: The principle of public access, in particular the right to access official documents, and the constitutional protection of the freedom to inform, are and remain fundamental principles which form part of Sweden's constitutional, political and cultural heritage.
An example of a flattening clause is the new precedent of the Supreme Administrative Court (HFD):
• For a document to be finalized, and thereby drawn up, and thereby be an official document, some action must be taken which shows that the document is finalized.
With this new clause HFD lays the citizen flat on the ground beneath the authority, which can now itself decide whether and when the action which, according to the authority, is required for finalization has been taken by the authority, or not.
torsdag 13 mars 2014
Against Measurement Against Copenhagen: For Rationality and Reality by Computation
Bell poses the following questions:
• What exactly qualifies some physical systems to play the role of 'measurer'? Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system ... with a PhD?
Increasing Uncertainty about Heisenberg's Uncertainty Principle + Resolution
My mind was formed by studying philosophy, Plato and that sort of thing... The reality we can put into words is never reality itself... The atoms or elementary particles themselves are not real; they form a world of potentialities or possibilities rather than one of things or facts... If we omitted all that is unclear, we would probably be left with completely uninteresting and trivial tautologies...
The 2012 article Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements by Lee A. Rozema et al, informs us:
• The Heisenberg Uncertainty Principle is one of the cornerstones of quantum mechanics.
• In his original paper on the subject, Heisenberg wrote “At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position”.
• The modern version of the uncertainty principle proved in our textbooks today, however, deals not with the precision of a measurement and the disturbance it introduces, but with the intrinsic uncertainty any quantum state must possess, regardless of what measurement (if any) is performed.
• It has been shown that the original formulation is in fact mathematically incorrect.
OK, so we learn that Heisenberg's Uncertainty Principle (in its original formulation presumably) is a cornerstone of quantum physics, which however is mathematically incorrect, and that there is a modern version not concerned with measurement but with an intrinsic uncertainty of a quantum state regardless of measurement. In other words, a cornerstone of quantum mechanics has been moved.
• The uncertainty principle (UP) occupies a peculiar position in physics. On the one hand, it is often regarded as the hallmark of quantum mechanics.
• On the other hand, there is still a great deal of discussion about what it actually says.
• A physicist will have much more difficulty in giving a precise formulation than in stating e.g. the principle of relativity (which is itself not easy).
• Moreover, the formulation given by various physicists will differ greatly not only in their wording but also in their meaning.
We learn that the uncertainty of the uncertainty principle has been steadily increasing ever since it was formulated by Heisenberg in 1927.
In a recent series of posts based on Computational Blackbody Radiation I have suggested a new approach to the uncertainty principle as a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$ and $\hat h=4.8\times 10^{-11}\, Ks$ is a scaled Planck's constant, and the significance of the cut-off is that a body of temperature $T\, K$ cannot emit frequencies larger than $\frac{T}{\hat h}$, because the wave synchronization required for emission is destroyed by internal friction damping these frequencies. The cut-off condition thus expresses Wien's displacement law.
The cut-off condition can alternatively be expressed as
• $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is amplitude and $\dot u_\nu =\frac{du_\nu}{dt}$ velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$ and $\dot u_\nu =\nu u_\nu$. We see that the cut-off condition superficially has a form similar to Heisenberg's uncertainty principle, but that the meaning is entirely different and in fact familiar as Wien's displacement law.
We thus find that Heisenberg's uncertainty principle can be replaced by Wien's displacement law, which can be seen as an effect of internal friction preventing synchronization and thus emission of frequencies $\nu > \frac{T}{\hat h}$.
The high-frequency cut-off condition with its dependence on temperature is similar to high-frequency damping of a loud speaker which can depend on the level of the sound.
onsdag 12 mars 2014
Blackbody Radiation as Collective Vibration Synchronized by Resonance
There are two descriptions of the basic phenomenon of radiation from a heated body (blackbody or greybody radiation), starting from a description of light either as a stream of light particles named photons or as electromagnetic waves.
That the particle description of light is both primitive and unphysical was well understood before Einstein in 1905 suggested an explanation of the photoelectric effect based on light as a stream of particles later named photons, stimulated by Planck's derivation of Planck's law in 1900 based on radiation emitted in discrete quanta. However, with the development of quantum mechanics as a description of atomistic physics in the 1920s, the primitive and unphysical idea of light as a stream of particles was turned into a trademark of modern physics of highest insight.
The standpoint today is that light is both particle and wave, and the physicist is free to choose the description which best serves a given problem. In particular, the particle description is supposed to serve well to explain the physics of both blackbody radiation and photoelectricity. But since the particle description is primitive and unphysical, there must be something fishy about the idea that emission of radiation from a heated body results from emission of individual photons from individual atoms together forming a stream of photons leaving the body. We will return to the primitivism of this view after a study of the more educated idea of light as an (electromagnetic) wave phenomenon.
This more educated view is presented on Computational Blackbody Radiation with the following basic message:
1. Radiation is a collective phenomenon generated from in-phase oscillations of atoms in a structured web of atoms synchronized by resonance.
2. A radiating web of atoms acts like a system of tuning forks which tend to vibrate in phase as a result of resonance by acoustic waves. A radiating web of atoms acts like a swarm of cicadas singing in phase.
3. A radiating body has a high-frequency cut-off scaling with temperature, with frequencies $\nu > \frac{T}{\hat h}$ cut off, where $\hat h = 4.8 \times 10^{-11}\, Ks$, $\nu$ is frequency and $T$ temperature in degrees Kelvin $K$, which translates to a smallest wave-length $\lambda \approx \hat h\frac{c}{T}\, m$ as correlation length for synchronization, where $c\, m/s$ is the speed of light. For $T =1500\, K$ we get $\lambda \approx 10^{-5}\, m$, which is about 20 times the wave length of visible light.
We can now understand that the particle view is primitive because it is unable to explain that the outgoing radiation consists of electromagnetic waves which are in-phase. If single atoms are emitting single photons there is no mechanism ensuring that corresponding particles/waves are in-phase, and so a most essential element is missing.
The analysis of Computational Blackbody Radiation shows that an ideal blackbody is characterized as a body which is (i) not reflecting and (ii) has a maximal high frequency cut-off. It is observed that the emission from a hole in a cavity with graphite walls is a realization of a blackbody. This fact can be understood as an effect of the regular surface structure of graphite supporting collective atom oscillations synchronized by resonance on an atomic surface web of smallest mesh size $\sim 10^{-9}\, m$.
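The tendency of near-identical coupled oscillators to lock in phase can be illustrated with a standard Kuramoto-type phase model; this is a generic synchronization model of my own choosing, not the wave equation model of Computational Blackbody Radiation, and the coupling strength and frequency spread below are arbitrary.

```python
import numpy as np

# Kuramoto model: d(theta_i)/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i).
# For coupling K larger than the frequency spread the phases lock.
rng = np.random.default_rng(1)
N, K, dt, steps = 200, 1.0, 0.01, 5000
w = 1.0 + 0.05 * rng.standard_normal(N)      # nearly identical natural frequencies
theta = 2 * np.pi * rng.random(N)            # random initial phases

def order_parameter(th):
    return np.abs(np.mean(np.exp(1j * th)))  # 0 = incoherent, 1 = fully in phase

print("before:", order_parameter(theta))
for _ in range(steps):
    mean_field = np.mean(np.exp(1j * theta))
    # Im(mean_field * e^{-i theta_i}) = (1/N) sum_j sin(theta_j - theta_i)
    theta += dt * (w + K * np.imag(mean_field * np.exp(-1j * theta)))
print("after:", order_parameter(theta))      # close to 1: the phases have locked
```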
måndag 19 december 2016
New Quantum Mechanics 21: Micro as Macro
The new quantum mechanics as realQM explored in this sequence of posts offers a model for the microscopic physics of atoms which is of the same form as the classical continuum mechanical models of macroscopic physics, such as Maxwell's equations for electromagnetics, Navier's equations for solid mechanics and the Navier-Stokes equations for fluid mechanics, in terms of deterministic field variables depending on a common 3d space coordinate and time.
realQM thus describes an atom with $N$ electrons as a nonlinear system of partial differential equations in $N$ electronic wave functions depending on a common 3d space coordinate and time.
On the other hand, the standard model of quantum mechanics, referred to as stdQM, is Schrödinger's equation as a linear partial differential equation for a probabilistic wave function in $3N$ spatial coordinates and time for an atom with $N$ electrons.
With realQM the mathematical models for macroscopic and microscopic physics thus have the same form and the understanding of physics can then take the same form. Microphysics can then be understood to the same extent as macrophysics.
On the other hand, the understanding of microphysics according to stdQM is viewed to be fundamentally different from that of macroscopic physics, which effectively means that stdQM is not understood at all, as acknowledged by all prominent physicists.
As an example of the confusion on difference, consider what is commonly viewed to be a basic property of stdQM, namely that there is a limit to the accuracy with which both position and velocity can be determined on atomic scales, as expressed in Heisenberg's Uncertainty Principle (HUP).
This feature of stdQM is compared with the situation in macroscopic physics, where the claim is that both position and velocity can be determined to arbitrary precision, thus making the case that microphysics and macrophysics are fundamentally different.
But the position of a macroscopic body cannot be precisely determined by one point coordinate, since a macroscopic body is extended in space and thus occupies many points in space. No single point determines the position of an extended body. There is thus also a Macroscopic Uncertainty Principle (MUP).
The argument is then that if the macroscopic body is a pointlike particle, then both its position and velocity can have precise values and thus there is no MUP. But a pointlike body is not a macroscopic body and so the argument lacks logic.
The idea supported by stdQM that the microscopic world is so fundamentally different from the macroscopic world that it can never be understood thus may well lack logic. If so, that could open to an understanding of microscopic physics for human beings with experience from macroscopic physics.
If you think that there is little need of making sense of stdQM, recall Feynman's testimony:
• We have always had a great deal of difficulty understanding the world view that quantum mechanics represents. At least I do, because I’m an old enough man that I haven’t got to the point that this stuff is obvious to me. Okay, I still get nervous with it ... You know how it always is: every new idea, it takes a generation or two until it becomes obvious that there’s no real problem. I cannot define the real problem, therefore I suspect that there is no real problem, but I’m not sure there’s no real problem. (Int. J. Theoret. Phys. 21, 471 (1982).)
It is total confusion, if it is totally unclear if there is a problem or no problem and it is totally clear that nobody understands stdQM....
Recall that stdQM is based on a linear multi-dimensional Schrödinger equation, which is simply picked from the sky using black magic ad hoc formalism, which could be anything, and is then taken as a revelation about real physics when interpreted by reversing the black magic.
This is like scribbling down a sign/equation at random without intentional meaning, and then giving the sign/equation an interpretation as if it had an original meaning, which may well be meaningless, instead of expressing a meaning in a sign/equation to discover consequences and deeper meaning.
fredag 16 december 2016
New Quantum Mechanics 20: Shell Structure
Further computational exploration of realQM supports the following electronic shell structure of an atom:
Electrons are partitioned into an increasing sequence of main spherical shells $S_1$, $S_2$,..,$S_M$. Each main shell $S_m$ is subdivided into two half-spherical shells, each of which for $m>2$ is divided in two angular directions into $m\times m$ electron domains, thus with a total of $2m^2$ electrons in each full shell $S_m$. The case $m=2$ is special, with the main shell divided radially into two subshells, each of which is divided into half-spherical subshells, each of which is finally divided azimuthally into $2\times 2$ electron domains per $S_2$ subshell, again giving a total of $2m^2$ electrons in the main shell when fully filled, for $m=1,...,M$, see figs below.
This gives the familiar sequence 2, 8, 18, 32,.. as the number of electrons in each main shell.
[Figures: 4 subshell of $S_2$; 8 shell as variant of full $S_2$ shell; $9=3\times 3$ halfshell of $S_3$]
The electron structure can thus be described as follows with parenthesis around main shells and radial subshell partition within parenthesis:
• (2)+(4+4)
• (2)+(4+4)+(2)
• ...
• (2)+(4+4)+(4+4)
• (2)+(4+4)+(8)+(2)
• ....
• (2)+(4+4)+(18)+(2)
• ...
• (2)+(4+4)+(18)+(8)
Below we show computed ground state energies assuming full spherical symmetry with a radial resolution of 1000 mesh points, where the electrons in each subshell are homogenised azimuthally, with the electron subshell structure indicated and table values in parentheses. Notice that the 8 main shell structure is repeated, so that in particular Argon with 18 electrons has the form (2)+(4+4)+(4+4):
Lithium (2)+1: -7.55 (-7.48) 1st ionisation: (0.2)
Beryllium (2)+(2): -15.14 (-14.57) 1st ionisation: 0.5 (0.35)
Boron (2)+(2+1): -25.3 (-24.53) 1st ionisation: 0.2 (0.3)
Carbon (2)+(2+2): -38.2 (-37.7) 1st ionisation 0.5 (0.4)
Nitrogen (2)+(3+2): -55.3 (-54.4) 1st ionisation 0.5 (0.5)
Oxygen (2)+(3+3): -75.5 (-74.8) 1st ionisation 0.5 (0.5)
Fluorine (2)+(3+4): -99.9 (-99.5) 1st ionisation 0.5 (0.65)
Neon (2)+(4+4): -132.4 (-128.5) 1st ionisation 0.6 (0.8)
Sodium (2)+(4+4)+(1): -165 (-162)
Magnesium (2)+(4+4)+(2): -202 (-200)
Aluminium (2)+(4+4)+(2+1): -244 (-243)
Silicon (2)+(4+4)+(2+2): -291 (-290)
Phosphorus (2)+(4+4)+(3+2): -340 (-340)
Sulphur (2)+(4+4)+(4+2): -397 (-399)
Chlorine (2)+(4+4)+(3+4): -457 (-461)
Argon: (2)+(4+4)+(4+4): -523 (-526)
Calcium: (2)+(4+4)+(8)+(2): -670 (-680)
Titanium: (2)+(4+4)+(10)+(2): -848 (-853)
Chromium: (2)+(4+4)+(12)+(2): -1039 (-1050)
Iron: (2)+(4+4)+(14)+(2): -1260 (-1272)
Nickel: (2)+(4+4)+(16)+(2): -1516 (-1520)
Zinc: (2)+(4+4)+(18)+(2): -1773 (-1795)
Germanium: (2)+(4+4)+(18)+(2+2): -2089 (-2097)
Selenium: (2)+(4+4)+(18)+(4+2): -2416 (-2428)
Krypton: (2)+(4+4)+(18)+(4+4): -2766 (-2788)
Xenon: (2)+(4+4)+(18)+(18)+(4+4): -7355 (-7438)
Radon: (2)+(4+4)+(18)+(32)+(18)+(4+4): -22800 (-23560)
We see good agreement even with the crude approximation of azimuthal homogenisation used in the computations.
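As a rough quantification of the agreement, here is a small sketch computing the relative deviation of the computed ground state energies from the table values, with the numbers copied from the list above for a few noble gases only:

```python
# Computed realQM ground state energies vs table values (copied from the list above).
data = {
    "Neon":    (-132.4, -128.5),
    "Argon":   (-523.0, -526.0),
    "Krypton": (-2766.0, -2788.0),
    "Xenon":   (-7355.0, -7438.0),
    "Radon":   (-22800.0, -23560.0),
}
for name, (computed, table) in data.items():
    rel = abs(computed - table) / abs(table)
    print(f"{name:8s} relative deviation {100 * rel:.1f}%")   # a few percent at most
```

The deviations stay within a few percent even for the heavy atoms.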
To see the effect of the subshell structure we compare Neon (2)+(4+4) with Neon (2)+(8) without the (4+4) subshell structure; the latter has a ground state energy of -153, which is much lower than the observed -128.5. We conclude that somehow the (4+4) subdivision of the second shell is preferred over a subdivision without subshells. The difference between (8) and (4+4) is the homogeneous Neumann condition acting between subshells, tending to increase the width of the shell and thus increase the energy.
The deeper reason for this preference remains to be described, but intuition suggests that it relates to the shape or size of the domain occupied by an electron. With subshells, electron domains are obtained by subdivision in both the radial and the azimuthal direction, while without subshells there is only azimuthal/angular subdivision of each shell.
We observe that ionisation energies, which are of similar size in different shells, become increasingly small as compared to ground state energies, and thus are delicate to compute as the difference between the ground state energies of atom and ion.
Here are sample outputs for Boron and Magnesium as functions of distance $r$ from the kernel along the horizontal axis:
We observe that the red curve, depicting shell charge $\psi^2(r)r^2dr$ per shell radius increment $dr$, is roughly constant in radius $r$, as a possible emergent design principle. More precisely, $\psi (r)\sim \sqrt{Z}/r$ matches with $d_m\sim m^2/Z$ and $r_m\sim m^3/Z$, with $d_m$ the width of shell $S_m$, and thus the width of the subshells of $S_m$ scaling with $m/Z$, and thus the width of electrons in $S_m$ scaling with $m/Z$.
We thus have $\sum_m m^2\sim M^3\sim Z$ and, with $d_m\sim m^2/Z$, the atomic radius $\sum_m d_m\sim M^3/Z\sim 1$ is basically the same for all atoms, in accordance with observation.
Further, the kernel potential energy and thus the total energy in $S_m$ scales with $Z^2/m$ and the total energy by summation over shells scales with $\log(M)Z^2\sim \log(Z)Z^2$, in close correspondence with $Z^{\frac{1}{3}}Z^2$ by density functional theory.
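As a quick numerical comparison of these two scalings (a toy check added here, not part of the realQM computations):

import math

# Compare log(Z)*Z^2 from the shell summation with Z^(1/3)*Z^2 = Z^(7/3) quoted from
# density functional theory; the ratio stays of order one across the periodic table.
for Z in (2, 10, 18, 36, 54, 86):
    shell = math.log(Z) * Z ** 2
    dft = Z ** (7.0 / 3.0)
    print(f"Z = {Z:3d}   log(Z)*Z^2 = {shell:9.0f}   Z^(7/3) = {dft:9.0f}   ratio = {shell / dft:.2f}")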
Recall that the electron configuration of stdQM is based on the eigen-functions for Schrödinger's equation for the Hydrogen atom with one electron, while as we have seen that of realQM rather relates to spatial partitioning. Of course, eigen-functions express some form of partitioning, and so there is a connection, but the basic problem may concern partitioning of many electrons rather than eigen-functions for one electron.
Thursday, 8 December 2016
Quantum Mechanics as Theory Still Without Meaning
Yet another poll (with earlier polls in the references) shows that physicists today, after 100 years of deep thinking and fierce debate, still show little agreement about the status of quantum mechanics as the prime scientific advancement of modern physics.
The different polls indicate that less than 50% of all physicists today adhere to the Copenhagen Interpretation, the main textbook interpretation of quantum mechanics. This means that quantum mechanics, after 100 years of fruitless search for a common interpretation, remains a mystery without meaning. A theory without interpretation has no meaning, and science without meaning cannot be real science.
If only 50% of physicists agreed on the meaning of the basic textbook theories of classical physics embodied in the Newton/Lagrange equations of motion, Navier's equation for solid mechanics, the Navier-Stokes equations for fluid dynamics and Maxwell's equations for electromagnetics, that would signify a total collapse of classical physics as science and as a subject of academic study.
But this is not so: classical physics is the role model of science because there is virtually no disagreement on the formulation and meaning of these basic equations.
But the polls show that there is no agreement on the role and meaning of Schrödinger's equation as the basis of quantum mechanics, and physicists do not seem to believe this will ever change. This is far from satisfactory from a scientific point of view.
This is my motivation to search for a meaningful quantum mechanics in the form of realQM presented in recent posts. Of course you may say that for many reasons my chances of finding some meaning are very small, but science without meaning cannot be real science.
PS Lubos Motl, as a strong proponent of a textbook all-settled Copenhagen interpretation defined by himself, reacts to the polls with:
• The foundations of quantum mechanics were fully built in the 1920s, mostly in 1925 or at most 1926, and by 1930, all the universal rules of the theory took their present form...as the Copenhagen interpretation. If you subtract all these rules, all this "interpretation", you will be left with no physical theory whatsoever. At most, you will be left with some mathematics – but pure mathematics can say nothing about the world around us or our perceptions.
• In virtually all questions, the more correct answers attracted visibly greater fractions of physicists than the wrong answers.
Lubos claims that the more correct views, with the one truly correct view carried by Lubos himself alone, gather a greater fraction than the less correct views, and so everything is OK from Lubos's point of view. But is a greater fraction sufficient from a scientific point of view, as if scientific truth were to be decided by democratic voting? Shouldn't Lubos ask for 99.9% adherence to his one and only correct view, if physics is to keep its position as the king of sciences?
Or is modern physics instead to be viewed as the root of modernity through a collapse of the classical ideals of rationality, objectivity and causality?
Particles' wave functions always spread superluminally
By these comments, Jacques says that he is ignorant about many things that I (and my instructors) considered basics of quantum field theory since I was an undergraduate, such as:
1. The special theory of relativity and quantum mechanics are consistent but their combination is constraining and has some unavoidable consequences – some basic general properties of quantum field theories.
2. Consistent relativistic quantum mechanical theories guarantee that objects capable of emitting a particle are necessarily able to absorb it as well, and vice versa.
3. For particles that are charged in any way, the existence of antiparticles becomes an unavoidable consequence of relativity and quantum mechanics.
4. Probabilities of processes (e.g. cross sections) that involve these antiparticles are guaranteed to be linked to probabilities involving the original particles via crossing symmetry or its generalizations.
5. The pair production of particles and antiparticles becomes certain when energy \(E\gg m\) is available or when fields are squeezed at distances \(\ell \ll 1/m\) (much) shorter than the Compton wavelength.
6. Only observables constructed from quantum fields may be attributed to regions of the Minkowski spacetime so that they're independent from each other at spacelike separations (because they commute or anticommute).
7. Wave functions that are functions of "positions of particles" unavoidably allow propagation that exceeds the speed of light and there can't be any equation that bans it. The causal propagation only applies to quantum fields (the observables), not to wave functions of particles' positions.
8. Equivalently, almost all trajectories of particles that contribute to the Feynman path integral are superluminal and non-differentiable almost everywhere and this fact can't be avoided by any relativistic version of the mathematical expressions. Causality is only obtained by a combination of emission and absorption, contributions from particles and antiparticles, and at the level of quantum fields (observables).
It's a lot of basic stuff that Jacques should know but instead, he doesn't know it and these insights drive him up the wall. Let's look at those things.
The most well-defined disagreement is about the "relativistically corrected" Schrödinger equation\[
i\hbar\frac{\partial}{\partial t} \psi = \sqrt{-\hbar^2 \Delta + m^2}\,\psi
\] You see that it's like the usual one-particle equation except that the non-relativistic formula for the kinetic energy, \(E=|\vec p|^2/2m\), is replaced by the relativistic one, \(E=\sqrt{|\vec p|^2+m^2}\), with the same Laplacian (times \(-\hbar^2\)) substituted for \(|\vec p|^2\).
Jacques believes that when you substitute a localized wave packet for \(\psi(x,y,z)\) at \(t=0\) and you wait for time \(t'\), it will only spread to the ball of radius \(t'\) away from the original region: it will never propagate superluminally. Search for "superluminally" in his blog post and comments. Oops, it's wrong and embarrassingly wrong.
I think that the simplest way to see why he's wrong is to realize that the equation above still has the usual non-relativistic limit. As long as you guarantee that \(|\vec p| \ll m\) in the \(c=\hbar=1\) units, the evolution of the wave packets must be well approximated by non-relativistic physics and the non-relativistic Schrödinger equation.
Consider an actual electron moving around a nucleus. In the hydrogen atom, the motion is basically non-relativistic. Consider an initial localized wave packet for the electron that has a uniform phase, is much larger than the Compton wavelength \(\hbar/mc\approx 2.4\times 10^{-12}\,{\rm m}\) (it's simply \(1/m\) in the \(c=\hbar=1\) units) but still smaller than the radius of the atom. For example, the radius of the packet is \(10^{-11}\) meters. Outside a sphere of this radius, the wave function is zero.
Will this wave packet spread superluminally? You bet. By construction, the average speed is about an order of magnitude lower than the speed of light which is reasonably non-relativistic. So with a 1% accuracy (squared speed), and aside from the irrelevant phase linked to the additional additive shift \(E_0=mc^2\) to the energy, the wave packet will spread like if it followed the non-relativistic Schrödinger equation\[
i\hbar\frac{\partial}{\partial t} \psi = -\hbar^2\frac{\Delta}{2m} \psi + V(x) \psi
\] Let's set \(V(x)=0\). OK, how do the wave packets spread according to the ordinary Schrödinger equation? Let's ask Ron Maimon – any good autodidact can answer such questions. Well, it's simple: the Schrödinger equation is just a diffusion (or heat) equation whose main parameter is imaginary. If \(m\) above were imaginary, \(m=i\mu\), then the solution to the diffusion equation would be\[
\rho(x,t)\equiv \psi(x,t) = \frac{\sqrt{\mu}}{\sqrt{2\pi t}} \exp(-\mu x^2/(2t))
\] The width of the Gaussian packet goes like \(\Delta x\sim \sqrt{t/\mu}\). It's very simple.
If you know the graph of the square root, you must know that the speed is initially very high. The speed \(dx/dt\) scales like the derivative of the square root of time, i.e. as \(1/\sqrt{t\mu}\). For times shorter than \(1/\mu\), the speed with which the wave packet spreads unavoidably exceeds the speed of light. It's kosher that we're looking at timescales shorter than the "Compton time scale" of the electron. We only assumed that the spatial size of the wave packet is longer than the Compton wavelength. Whether an analogous scaling is obeyed by the dependence on time depends on the equation itself and the answer is clearly No. The asymmetric treatment of space and time in the equation (the square root is only used for the spatial derivatives) may be partly blamed for that asymmetry.
Just to be sure, all the scalings are the same for the value of \(\mu=-im\) that is imaginary.
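To make the superluminal spreading concrete, here is a minimal numerical sketch added for illustration (not from the original post): one spatial dimension, \(\hbar=c=m=1\), ordinary \(L^2\) norm, a compactly supported initial packet evolved with the square-root Hamiltonian by multiplying its Fourier transform with \(e^{-i\sqrt{k^2+m^2}\,t}\), after which we measure the probability that ends up outside the light cone of the initial support.

import numpy as np

# Free evolution under H = sqrt(p^2 + m^2) in 1D (hbar = c = m = 1), via FFT.
m = 1.0
L, N = 80.0, 8192
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

R = 2.0                                           # initial packet is exactly zero for |x| > R
psi0 = np.where(np.abs(x) < R,
                np.exp(-1.0 / np.clip(1.0 - (x / R) ** 2, 1e-12, None)), 0.0)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

t = 0.5
E = np.sqrt(k ** 2 + m ** 2)                      # relativistic dispersion
psi_t = np.fft.ifft(np.exp(-1j * E * t) * np.fft.fft(psi0))

outside = np.abs(x) > R + t                       # region no light signal from the support can reach
p_outside = np.sum(np.abs(psi_t[outside]) ** 2) * dx
print(f"probability outside the light cone after t = {t}: {p_outside:.3e}")

The printed number is exponentially small (it decays roughly like \(e^{-m\,d}\) with the spacelike distance \(d\)) but it is not zero, which is the whole point.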
If you don't feel sure that our non-relativistic approximation was adequate for the question, I can give you a stronger weapon: the exact solution of the equation (Schrödinger's equation with the square root). What is it? Well, it's nothing else than the retarded Green's function – as taught in the context of the quantum Klein-Gordon field. Look e.g. at Page 7 of these lectures by Gonsalves in Buffalo.
The retarded function is the matrix element of the evolution operator for the one-particle Hilbert space\[
G_{\rm ret}(x-x') = \bra{x,y,z} \exp(H(t-t')/i) \ket{x',y',z'}.
\] When the particle is initially (a delta function) at the position \((x',y',z')\) at time \(t'\) and you wait for time \(t-t'\) i.e. you evolve it by the square-root-based Hamiltonian up to the moment \(t\), and you ask what will be the amplitude at the position \((x,y,z)\), the answer is nothing else than the retarded Green's function of the difference between the two four-vectors.
Can the retarded Green's functions be analytically calculated? As long as you include Bessel functions among your "analytically allowed tools", the answer is Yes. If we set the four-vector \(x'=0\) to zero, the retarded Green's function is simply\[
G_{\rm ret}(x) = \theta(t) \left[ \frac{ \delta( x^\mu x_\mu ) }{2\pi} - \frac{m}{4\pi \sqrt{x^\mu x_\mu}}\,J_1\!\left(m\sqrt{x^\mu x_\mu}\right) \right]
\] For small and large timelike or spacelike separation, the Bessel function of the first kind used in the expression asymptotically is an odd function of the argument and behaves as (the sign is OK for positive arguments)\[
J_n(z) \sim \left\{ \begin{array}{cc} \frac{1}{n!} \left( \frac{z}{2} \right)^n & {\rm for}\,\, |z|\ll 1 \\
\sqrt{\frac{2}{\pi z}} \cos\left( z- \frac{(2n+1)\pi}{4} \right)
& {\rm for}\,\,|z|\gg 1 \end{array} \right.
\] But another lesson of the calculation is that the Green's function is nonzero even for \(x^\mu x_\mu\) negative, i.e. spacelike separation – although it decreases roughly as \(\exp(-m|x|)\) over there if you redefine the normalization by the factor of \(2E\) in the momentum space (which is a non-local transformation in the position space). See the last displayed equation on page 2 of Gonsalves:
Relativistic Causality:
Quantum mechanics of a single relativistic free point particle is inconsistent with the principle of relativity that signals cannot travel faster than the speed of light. The probability amplitude for a particle of mass \(m\) to travel from position \({\bf r}_0\) to \({\bf r}\) in a time interval \(t\) is\[
U(t) = \bra{{\bf r}} e^{-iHt} \ket{{\bf r}_0} =
\bra{{\bf r}} e^{-i\sqrt{{\bf p}^2+m^2}\,t} \ket{{\bf r}_0}\sim\\
\sim \exp(-m\sqrt{{\bf r}^2-t^2}),\quad {\rm for}\,\,{\rm spacelike}\,\, {\bf r}^2\gt t^2
\]
Gonsalves also quotes "particle creation and annihilation" and "spin-statistics connection" as the other two unavoidable consequences of a consistent union of quantum mechanics and special relativity. He refers you to Chapter 2 of Peskin-Schroeder to learn these things from a well-known source.
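As a quick numerical sanity check of the \(J_n\) asymptotics quoted above (a standalone SciPy snippet, independent of the field-theory discussion):

import math
from scipy.special import jv

# Compare J_1(z) with the small-z and large-z asymptotic forms quoted above.
n = 1
for z in (0.01, 0.1):
    small = (z / 2.0) ** n / math.factorial(n)
    print(f"z = {z:6.2f}   J_1(z) = {jv(n, z):.6e}   small-z form = {small:.6e}")
for z in (20.0, 200.0):
    large = math.sqrt(2.0 / (math.pi * z)) * math.cos(z - (2 * n + 1) * math.pi / 4.0)
    print(f"z = {z:6.1f}   J_1(z) = {jv(n, z):.6e}   large-z form = {large:.6e}")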
OK, you might ask, what's the right modification of the wave equation for one particle that guarantees that the wave packet never spreads superluminally?
There is none. The condition that the packet never spreads superluminally would violate the uncertainty principle, a fundamental postulate of quantum mechanics.
Why is it so? I can give you a simple idea. If you compress the particle to a small region, \(\Delta x \ll 1/m\), much smaller than the Compton wavelength, the uncertainty principle unavoidably says \(\Delta p \gg m\), so the motion is ultrarelativistic. You could think that \(\Delta p\gg m\) or \(p\gg m\) is still consistent with \(v\leq 1\) but the evolved wave packets are unavoidably far from those that minimize the product of uncertainties and as the Bessel mathematics above shows, the piece in the spacelike region just can't exactly vanish, basically due to the non-local character of the operators.
Similar derivations could be made with the help of the Feynman path integral. The typical trajectories contributing to the Feynman propagator are superluminal and non-differentiable almost everywhere and this fact does hold even in the calculation of the propagators in quantum field theory, a relativistic theory. As I discussed in a blog post in 2012, the superluminal or non-differentiable nature of generic paths in the path integral is needed for Feynman's formalism to be compatible with the uncertainty principle. Recall that we have solved a paradox: the calculation of \(xp-px\) in the path integral should amount to the insertion of the classical integrand \(xp-px\) to the path integral but this classical insertion is zero. The paradox was resolved thanks to the generic paths' being non-differentiable: the time ordering of \(x(t)\) and \(p(t\pm \epsilon)\) mattered.
So does quantum field theory prevent you from sending signals to spacelike-separated regions? And how is it achieved?
Yes, quantum field theory perfectly prohibits any propagation of signals superluminally or over spacelike separations. It does so by using the quantum fields. Quantum fields such as \(\Phi(x,y,z,t)\) and functions of them and their derivatives are associated with spacetime points and they commute or anticommute with each other when the separation is spacelike.
The zero commutator means that you may measure them simultaneously – that the decision to measure one doesn't influence the other or that the order of the two measurements is inconsequential. Just to be sure, the previous sentence doesn't say that these spacelike-separated measurements are never correlated. They may be correlated but correlation doesn't mean causation. They're only correlated if the correlation (mathematically described as entanglement within quantum mechanics) follows from the previous contact of the two subsystems that have evolved or moved to the spacelike-separated points.
The point is that the outcomes themselves may be correlated but the human decisions – e.g. which polarization is measured on one photon – do not influence the statistics for the other photon itself at all. The existence of the "collapse" associated with the first measurement doesn't change the odds for the second measurement – although if you know the result into which the first measurement "collapsed", you must refine your predictions for the outcome of the second measurements because a correlation/entanglement could have been present. OK, how does this vanishing of the spacelike-separated commutators agree with the fact that the packets spread superluminally? On page 27 of Peskin-Schroeder, you may see that the "commutator Green's function" is a difference between two ordinary Green's functions and because those two are equal in the spacelike region, the value just cancels in the spacelike region.
But again, the Fourier transform of the ordinary propagator such as \(1/(p^2-m^2+i\epsilon)\) does not vanish in the spacelike regions of the 4-vector \(x^\mu\). It cannot vanish because this position space propagator knows about the correlation of fields at two points of space. And the fields in nearby, spacelike-separated points are correlated, of course (very likely to be almost equal), especially if they are closer than the Compton wavelength. You may view this correlation as a result of the escaping of high-momentum or high-energy quanta to infinity. Only low-momentum or low-energy quanta are left in the vacuum and its low-energy excitations – and because of the Fourier relationship of \(x\) and \(p\), this absence of high-energy quanta means that the quantum fields can't depend on the spatial coordinates too much.
You know, the message is that the ban on superluminal signals is compatible with quantum mechanics but the creation and annihilation of particles must be unavoidably allowed when you reconcile these two principles, special relativity and quantum mechanics. Jacques Distler believes that relativistic causality works even in "QFT truncated to the one-particle Hilbert space" which simply isn't right. He's really misunderstanding the key reason why quantum field theory was needed at all.
Try to calculate the expectation value of the commutator of two fields \(F(x)\) and \(G(y)\) at two spacelike-separated points \(x,y\). The fields \(F,G\) may be the Klein-Gordon \(\Phi\) itself or some bilinear constructed out of it, e.g. the component of a current \(J^0\) that Distler talks about at some point. Imagine that you're calculating this commutator. You first expand \(F,G\) in terms of \(\Phi\) and its derivatives. Then you insert the expansions of \(\Phi\) in terms of the creation and annihilation operators. And you know the expectation values of the type \(\bra 0 \Phi(x)\Phi(y) \ket 0\). When you time-order \(x,y\), it's just the usual propagator in the position space.
The precise calculation will depend on the operators you choose but a general point is true: There will be lots of individual terms that are nonzero for spacelike \(x-y\). Only if you sum all these terms – which will pick creation operators from \(F\) and annihilation operators from \(G\) and vice versa etc., you can achieve the cancellation.
In particular, if you consider the operators \(F,G \sim J^0\), those will contain terms of the type \(a^\dagger a\) as well as \(b^\dagger b\) for a field whose particles and antiparticles differ. Only if you include the correlators from both particles and antiparticles matching between the points \(x,y\) may you get a cancellation of the commutator (its expectation value).
In other words, the fact that a quantum field is capable of both creating a particle and annihilating an antiparticle (which is the same for "real" fields) is absolutely vital for its ability to commute with spacelike-separated colleagues!
This insight may be formulated in yet another equivalent way. You just can't construct a localized – relativistically causally well-behaved – field operator at a given point that would only contain terms of a given creation-annihilation schematic type, e.g. only \(a^\dagger a\) but no \(b^\dagger b\), only \(a^\dagger\) but no \(b\), and so on. Any operator that has a well-defined "number of particles of each type that it creates or annihilates" is unavoidably "non-local" and can't exactly commute with its spacelike-separated counterparts!
If you wanted to study the truncation of the quantum field theory to a one-particle Hilbert space where the number of particles is \(N=1\), and the number of antiparticles (and all other particle species) is zero, then all "first-quantized" operators on your Hilbert space correspond to some combination of operators of the \(a_k^\dagger a_m\) form. You annihilate one particle and create one particle. But no such combination of operators may be strictly confined to a region so that it would commute with itself at spacelike-separation.
Students who have carefully done some basic calculations in quantum field theory know this fact from many "happy cancellations" that weren't obvious for some time. For example, consider the quantized electromagnetic field. Write the total energy as\[
H = \int d^3 x\,\frac{1}{2}\left(B^2+ E^2\right),
\] i.e. the integral of the electric and magnetic energy density. Substitute \(\vec A\) and its derivatives for \(\vec B,\vec E\), and write \(A\) and its derivatives in terms of creation and annihilation operators for photons. So you will get terms of the form \(a^\dagger a\), \(aa\), and \(a^\dagger a^\dagger\). At the end, the total Hamiltonian only contains the terms of the \(a^\dagger a\) "mixed" type but this simplified form is only obtained once you integrate over \(\int d^3 x\) which makes the terms \(a a\) and \(a^\dagger a^\dagger\) vanish because of their oscillating dependence on \(x\). If you only write the energy density itself, it will unavoidably contain the operators of the type \(aa\) and \(a^\dagger a^\dagger\) – annihilating or creating two photons – too. And the terms of all these forms are equally important for the quantum field to be well-behaved, especially for the vanishing of its commutators at spacelike separations.
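To see the mechanism in one line, here is a schematic sketch for a single free scalar mode (the photon case adds polarization sums but works the same way; all normalization factors are suppressed). The spatial integration produces\[
\int d^3 x\, e^{i(\vec k + \vec k')\cdot \vec x} \propto \delta^3(\vec k + \vec k'),
\] so among the "non-mixed" terms only \(a_{\vec k} a_{-\vec k}\) and \(a^\dagger_{\vec k} a^\dagger_{-\vec k}\) could survive, and their remaining coefficient \(\left(-\omega_k^2 + |\vec k|^2 + m^2\right)\) vanishes because \(\omega_k^2 = |\vec k|^2 + m^2\) (with \(m=0\) for photons), while the \(a^\dagger a\) terms carry no net oscillating factor and add up to \(\sum_{\vec k} \omega_k\, a^\dagger_{\vec k} a_{\vec k}\) plus a constant.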
The broader lesson is that important principles of physics are ultimately reconcilable but the reconciliation is often non-trivial and implies insights, principles, and processes that didn't seem to unavoidably follow from the principles separately. So the combination of relativity and quantum mechanics implies the basic phenomena of quantum field theory – antiparticles, pair production, the inseparability of creation and annihilation, spin-statistics relations, and a few other things.
In the same way, perhaps a more extreme one, the unification of quantum mechanics and general relativity is possible but any consistent theory obeying both principles has to respect some qualitative features we know from quantum gravity – as exemplified by string theory, probably the only possible precise definition of a consistent theory of quantum gravity. In particular, black holes must carry a finite entropy, be practically indistinguishable from heavy particle species, and such heavy particle species must exist. The processes around black holes and those involving elementary particles are unavoidably linked by some UV-IR relationships and string theory's modular invariance is the most explicit known example (or toy model?) of such relationships.
In combination, the known important principles of physics are far more constraining than the principles are separately and they imply that the "kind of a theory we need" or even "the precise theory" is basically unique. This strictness is ultimately good news. If it didn't exist, we would be drowning in the infinite field of possibilities. Because of the "bonus" strictness resulting from the combination of important principles of physics, we know that a theory combining quantum mechanics and special relativity must work like quantum field theory and a theory that also respects gravity as in general relativity has to be string/M-theory.
Magnetism: mathematical aspects
From Scholarpedia
Vieri Mastropietro and Daniel C. Mattis (2010), Scholarpedia, 5(7):10316. doi:10.4249/scholarpedia.10316
Already as an infant, Albert Einstein (Nobel prize winner, 1921) wondered about the physics of magnetic “action at a distance”. His was not the only brilliant mind entranced by magnetic phenomena. Among other Nobelists who have made incidental or even major contributions to our understanding of this field we note (along with the year of their prize) the names of H. Lorentz and P. Zeeman (1902), P. Curie (1903), W. Heisenberg (1932), W. Pauli (1945), F. Bloch (1952), C.N. Yang (1957), H. Bethe (1967), L. Néel (1970) and P.W. Anderson (1977) (Levinovitz & Ringertz, Eds., 2001). The list grows longer if we include the ancillary topics of magnetic resonance, Hall effect, and superconductivity or developments in unrelated fields, such as the concept of the Goldstone mode in high-energy physics inspired by the spin waves of the theory of magnetism. It is not a coïncidence that in "statistical mechanics", which comprises the study of physics at finite temperature, contemporary concerns such as phase transitions and their critical exponents evolved out of the corresponding microscopic properties of magnetic materials at the Curie point. In short, much of the mathematics that was originally developed to unveil the sources of magnetism found subsequent applications in other branches of theoretical physics and – in the case of the Ising model – far afield in cryptology, epidemiology, economics, political science and even sociology!
In this article we show how theories of magnetism are classified according to their internal and external symmetries, spatial dimensionality and various other physical properties. But from the outset, it is important for the reader to understand that we are only seeking to describe the material origins of magnetic phenomena, insofar as they might originate in the many-body comportment of electrons in atoms, molecules and solids. This is quite distinct from studies of the resulting electromagnetic field, whether this last is treated in the classical version due to Maxwell, dating back to the mid-19th Century, or in the quantized (QED) version developed in the middle of the 20th Century, the more so when we turn to antiferromagnets in which the concomitant magnetic fields cancel already at a microscopic level (Mattis, 2006).
What we present below is a small survey of a few idealized models of magnetism culled from an incredibly large array; we discuss the motivation behind them together with the interesting mathematics that arises in the course of solving these many-body problems (Mattis, 2006; Chaps. 3-9).
Effects of spatial dimensionality and of various symmetries
Setting aside Dirac's hypothetical magnetic monopole (Dirac, 1931) and the fragile current loop of Ampère, the leading physical source (and the unit) of magnetism has to be the permanent magnetic dipole, such as the elementary "Bohr magneton" carried by each and every electron (Mattis, 2006; p31). It is sometimes useful to idealize magnetic materials as periodic arrays of unit cells each of which contains one or more atoms or molecules sporting one or more Bohr magnetons.
As an example of this periodicity imagine a family of "hypercubic solids" in arbitrary dimensions, ranging from \(d= 0\) (just 1 cell), to \(N\) cells in \(d \ge 1\ ,\) and up to unphysically high dimensions \(d \gg 1\ .\) Each cell has \(2d\) nearest-neighbors and \(N\) is large. The lattices are called the linear chain (\(lc\)) in 1D, the square (or simple quadratic \(sq\)) in 2D, the simple cubic (\(sc\)) in 3D, etc. In the \(lc\ ,\) identical cells lie along a given axis at points \(R_n = an\) with \(n= 1,2,\ldots,N\) their label and \(a\) the lattice parameter. Thus \(d=1\) applies to an hypothetical, ideal, polymer of great length. Cells of the \(sq\) lattice are located at \(R_{n,m} = a (n,m)\) and those of the \(sc\) lattice are at \(R_{n,m,l} = a(n,m,l)\ ,\) etc.
It is found that the leading term of various correlation functions of particles confined to such lattices, when expanded in powers of \(1/d\ ,\) yields formulas identical to those obtained in the self-consistent "mean-field", aka "molecular field" approximation. "Critical properties", that is, the singular contributions to thermodynamic functions (specific heat, susceptibility, etc.) near a phase transition, can sometimes be extrapolated to 3D from \(d = 4\) using the renormalization group (RG) rigged for \(d = 4- \epsilon \), upon expanding in powers of \(\epsilon \ .\) This has justified the study of leading large \(d\) approximations, even though physical limitations restrict magnetic systems as well as other materials to \(d \le 3\ .\)
To further define a physical model it is necessary to specify the contents of each cell and the interactions among neighboring cells. For many models it is possible to solve the multi-cell model in \(d\) dimensions (or at the very least, to analyze its thermodynamical properties) by a "transfer matrix approach" on a \(d -1\) dimensional lattice. More precisely, the partition function in \(d\) dimensions is related to the largest eigenvalue of a transfer operator on a lattice in \(d -1\) dimensions. The eigenvalue problem that needs to be solved is usually trivial in \(d=1\) but is, with some exceptions, too cumbersome to carry out for \(d > 2\ .\)
On the other hand and at the opposite extreme, for \(d \ge 4\) or 5 or even greater (\(d \gg 1\)), the statistical mechanics and phase diagram of many models of magnetism become easily solvable and predictable in leading \(1/d\) approximation, just as is the case in quantum field theories.
Clearly, whether \(d\) is large or small is important. Just as single strand polymers differ from 3D solids even when the constituent atoms are identical, the properties of magnetic polymers differ from those of magnetic solids. Once \(d\) is specified, the dynamics of the individual magnetic moments (point-group symmetry) together with the symmetries of their interactions (bonds) are what determine the collective properties in the ground state and at finite \(T\ .\) These symmetries and dynamics fall into various classes or categories.
Typically, once the class or category of the model is given, it is in the three-dimensional world \(d = 3\) in which we live that it is most difficult to find precise or merely reliable mathematical solutions. That is what keeps theoretical physicists in business. As an example let us next examine arguably the simplest model of anisotropic nearest-neighbor interactions among quantized spins on lattices in various dimensions \(d\ ,\) the one named after E. Ising who first studied it in the 1920's (Mattis, 2006; Chap. 8).
The one-dimensional Ising model vs. other possibilities
Ising’s model was based on the “old” quantum theory in which spins, affixed to points on a space lattice, can only point “up” or “down.” This discrete algebra is denoted Z(2). In \(d=1\) the Hamiltonian is \(H = -J \sum_n S_n S_{n+1}\) (where each \(S_n = \pm 1\)).
The Ising model is better described in terms of Pauli matrices \(\vec S = \frac{\hbar}{2} (\sigma_x, \sigma_y, \sigma_z)\) with \(J\) absorbing the factor \(\left ( \frac{\hbar}{2} \right )^2\ .\) Each spin at \(n\) is assumed to interact with nearest-neighbors at \(n \pm 1\) via a highly anisotropic \(3 X 3\) “exchange” matrix characterizing the nearest-neighbor bond, which then assumes the appearance\[-(\sigma_{x,n} \sigma_{y,n} \sigma_{z,n}). \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & J \end{pmatrix}. \begin{pmatrix} \sigma_{x,n+1} \\ \sigma_{y,n+1} \\ \sigma_{z,n+1} \end{pmatrix} \ .\] The total interaction with an external or self-consistently generated magnetic field is given by \(-B \sum_n \sigma_{z,n}\) if it is “longitudinal” or \(-B \sum_n \sigma_{x,n}\) if “transverse.” Also \(-(\sigma_{x,n} \sigma_{y,n} \sigma_{z,n}). \begin{pmatrix} J & 0 & 0 \\ 0 & J & 0 \\ 0 & 0 & J \end{pmatrix}. \begin{pmatrix} \sigma_{x,n+1} \\ \sigma_{y,n+1} \\ \sigma_{z,n+1} \end{pmatrix} \) is the Hamiltonian of an individual nearest-neighbor bond in Heisenberg’s model, with \(-B \sum_n \sigma_{z,n}\) the interaction with any orientation external field (as here, by symmetry, there can be no distinction between parallel and transverse.)
Finally, the interaction \(-(\sigma_{x,n} \sigma_{y,n} \sigma_{z,n}). \begin{pmatrix} J & 0 & 0 \\ 0 & J & 0 \\ 0 & 0 & 0 \end{pmatrix}. \begin{pmatrix} \sigma_{x,n+1} \\ \sigma_{y,n+1} \\ \sigma_{z,n+1} \end{pmatrix} \) describes the “X-Y” model in which we again need distinguish “in-plane” from “out-of-plane” external fields. These three models are discussed separately below.
The most general bilinear Hamiltonian for a bond connecting two sites is given by an Hermitean \(3 \times 3\) matrix \(J_{\beta}^{\alpha}\ .\) This generalization has 9 independent parameters, ranging from the Ising version in its most anisotropic limit all the way to the totally isotropic Heisenberg model, \(J_{\beta}^{\alpha} = J \delta_{\beta}^{\alpha}\ ,\) which is explicitly invariant under arbitrary spatial rotations. (\(\delta_{\beta}^{\alpha}=1\) if \(\alpha = \beta\) and 0 otherwise is Kronecker’s delta.) Additionally, quartic forms for two-site interactions and interactions that involve three sites have sometimes been considered in the literature in connection with various physical applications, but they are not discussed further here.
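For concreteness, here is a minimal numerical sketch (added here, not taken from the cited references) that builds the two-site bond Hamiltonian \(-\vec\sigma_n \cdot J \cdot \vec\sigma_{n+1}\) for the Ising, X-Y and Heisenberg choices of the exchange matrix using Kronecker products of Pauli matrices:

import numpy as np

# Two-site bond Hamiltonian H = - sum_{a,b} J[a,b] * kron(sigma_a, sigma_b)
# for the exchange matrices of the Ising, X-Y and Heisenberg bonds (factors of hbar/2 absorbed in J).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def bond_hamiltonian(Jmat):
    H = np.zeros((4, 4), dtype=complex)
    for a in range(3):
        for b in range(3):
            H -= Jmat[a, b] * np.kron(paulis[a], paulis[b])
    return H

J = 1.0
models = {
    "Ising":      np.diag([0.0, 0.0, J]),
    "X-Y":        np.diag([J, J, 0.0]),
    "Heisenberg": np.diag([J, J, J]),
}
for name, Jmat in models.items():
    evals = np.linalg.eigvalsh(bond_hamiltonian(Jmat))
    print(f"{name:11s} bond eigenvalues: {np.round(evals, 3)}")

The eigenvalues immediately display the different bond symmetries: the Ising bond has two doubly degenerate levels, while the Heisenberg bond splits into a singlet and a triplet.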
Ground states, elementary excitations and Tc
The spin \(1/2\) Ising \(lc\) of \(N\) sites connected by \(N-1\) (ferromagnetic) bonds \(J > 0\) was described above. The combined ground state (lowest energy) solution of the Hamiltonian or Schrödinger equation \(H\Psi = E\Psi\) is \(E_0 = -J(N-1)\) for either of two ground state configurations: all spins “up” (+1) or all “down” (–1). Either configuration manifests perfect long-range order (LRO).
The next higher energy levels are associated with a single “domain wall” defined as follows: the first \(q\) spins are all parallel, say “up,” followed by spins numbered \(q+1,\ldots, N\ ,\) that are all “down.” Because the \(q^{th}\) bond is promoted from energy \(-J\) to energy \(+ J\ ,\) the energetic cost of this defect is \(+2J\ .\) There are \(N-1\) possible values of \(q\ ,\) that is, \(N-1\) distinct positions on which to place the break. A second break can occur at \(q + r\) (\(r \neq 0)\) where \(- q < r < N-q\ ;\) a third one at some third distinct place, etc. Because a bond can only be broken once, a sort of “exclusion principle” prevents any two breaks from sharing the same bond. Thus it appears that the domain walls are fermions, unusual only in the sense that each such fermion adds a constant amount \(2J\) to the total energy – regardless of how many others are present. The corresponding Boltzmann factor of each is \(e^{- \frac{2J}{k_B T}}\ .\) It follows that the entropy \(\mathcal{S}\) ( \(\mathcal{S}\) = Boltzmann’s constant times the natural logarithm of the number of allowed configurations) is the sum of \(k_B \log 2\) (recall: initially there were two configurations) and of \((N-1)k_B \log (1+e^{-2J/k_B T})\ .\) The free energy \(F = E_0 - T\mathcal{S}\) becomes:
\[\tag{1} F = E_0 - (N-1)k_B T \log (1+e^{-2J/k_B T})-k_B T \log 2\]
The last term can be neglected in the large \(N\) limit. From this one can infer that at temperature \(T\) the thermodynamic average of the number \(\mathcal{N}\) of domain walls is,
\[\tag{2} \mathcal{N}=(N-1) \times \frac{1}{e^{2J/k_B T}+1}\]
in which the second factor is the “Fermi-Dirac distribution function” (being the average number of fermions living on each of the \(N-1\) bonds in thermal equilibrium at temperature \(T\ .\)) Both \(F\) and \(\mathcal{N}\) are analytic in \(T\) except in the limit \(T \to 0\ ,\) signaling there is an order-disorder thermodynamic phase transition at \(T=0\ .\) The model can be “solved” more formally and the same results obtained more directly following a nonlinear transformation to bond variables. Let \(s_1 \equiv S_1 (=\pm 1), s_2=S_1S_2 ,\ldots , s_n =S_{n-1}S_n\ ,\) … , each \(s_n=\pm 1\) independent of the others. Then \(H =-J \sum_{n=2}^N s_n\ .\) This is just an Hamiltonian of \(N-1\) noninteracting pseudo-spins in a pseudo external magnetic field \(J\ .\) Because \(s_1\) has two values but does not explicitly enter \(H\ ,\) each configuration \(\{ s_1; s_2, s_3 ,\ldots, s_N \}\) is two-fold degenerate. This accounts for \(k_B T \log 2\) in (1).
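As a brute-force check of Eq. (2) (a small illustrative script added here, with \(J = 1\) and \(k_B = 1\); exact enumeration of all \(2^N\) configurations of a short open chain):

import itertools, math

# Thermal average of the number of domain walls (broken bonds) in an open Ising chain,
# compared with the Fermi-Dirac form of Eq. (2).
N, J, T = 12, 1.0, 1.5
Z = 0.0
walls_avg = 0.0
for spins in itertools.product((-1, 1), repeat=N):
    E = -J * sum(spins[n] * spins[n + 1] for n in range(N - 1))
    w = math.exp(-E / T)
    walls = sum(1 for n in range(N - 1) if spins[n] != spins[n + 1])
    Z += w
    walls_avg += walls * w
walls_avg /= Z

predicted = (N - 1) / (math.exp(2 * J / T) + 1)    # Eq. (2)
print(f"enumeration: {walls_avg:.6f}   Eq. (2): {predicted:.6f}")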
In \(d \geq 2\ ,\) Ising’s model exhibits a genuine phase transition; the second derivative of \(F\) is discontinuous at a finite temperature \(T\) identified as the “Curie point” \(T_c\ ,\) a quantity proportional to \(J\) that is also a function of \(d\ .\) Table I lists \(T_c\) to three decimal places in terms of \(z\ ,\) the number of nearest-neighbor sites on hypercubic lattices.
Table 1: Critical temperature of Ising ferromagnet on hypercubic lattices
Lattice type | Coordination number z | kTc/zJ
lc (d=1) | 2 | 0
sq (d=2) | 4 | 0.567
sc (d=3) | 6 | 0.752
hypercubic (in d ≥ 4) | 2d | 1 - 0.596/d
More on Tc
The critical temperature of zero in \(d=1\) shown in Table I was obtained trivially. The nonvanishing values of \(T_c\) on any of the three standard lattices in \(d=2\) (the \(sq\ ,\) honeycomb and triangular) can also be found exactly using the duality relations of Kramers and Wannier (Kramers & Wannier, 1941). Duality is what relates \(K=J/k_B T\) on a spin lattice to \(K^{*}=J/k_B T^{*}\) on a dual lattice that is constructed on the bonds of said lattice. The \(sq\) lattice is self-dual with coordination number \(z=4\) and the triangular and honeycomb lattices are duals of each other with \(z=6\) and 3 respectively. The duality relations \(\tanh K^{*}=e^{-2K}\) and \(\tanh K=e^{-2K^{*}}\) were originally derived by comparing high temperature series expansions of the partition function with low \(T\) expansions, without needing to actually evaluate either sum. Armed with just this sort of information, Onsager showed that \(T_c(z)\) is given by \(\sinh 2K_c= \tan \frac{\pi}{z}\) for Ising ferromagnets on any one of the 3 principal lattices in 2D.
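A short numerical check of this duality formula against Table I (assuming \(k_B = 1\)):

import math

# sinh(2 K_c) = tan(pi/z) for the three principal 2D lattices; for the square lattice
# K_c = J/(k_B T_c) = 0.4407, i.e. k_B T_c/(zJ) = 0.567, the entry of Table I.
for name, z in (("honeycomb", 3), ("square", 4), ("triangular", 6)):
    Kc = 0.5 * math.asinh(math.tan(math.pi / z))
    print(f"{name:10s} z = {z}   K_c = {Kc:.4f}   k_B T_c/J = {1 / Kc:.4f}   k_B T_c/(zJ) = {1 / (z * Kc):.4f}")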
There is no such formula for the values of \(T_c\) listed in Table I for the Ising ferromagnet in \(d=3\) and \(d \geq 4\) dimensions, so there the values of \(T_c\) have to be obtained numerically.
Symmetry breaking
The exact eigenvalue of the transfer matrix of the Ising model on a \(sq\) lattice, hence the exact evaluation of the partition function and of the free energy in this model, including the particulars of the singular phase transition (both specific heat and magnetic susceptibility diverge at \(T_c\ ,\)) were first obtained by L. Onsager (Onsager, 1944) in the early 1940’s by the use of spinor algebra. A subsequent version based on a more familiar fermion field theory was constructed by T.D. Schultz, D.C. Mattis and E.H. Lieb (Schultz, Mattis & Lieb, 1964) and will be sketched below. But it seems that some properties of these exact solutions generalize to all model ferromagnets in arbitrary dimensions, viz.:
Above \(T_c\) in zero external magnetic field, the “up-down” symmetry is maintained perfectly. When cooling below \(T_c\) this symmetry is spontaneously broken by the onset of LRO. It is noteworthy that the application of an homogeneous external magnetic field (B), which also breaks the symmetry of the ferromagnet, also creates LRO in any dimension \(d\ .\) It follows that a finite real external magnetic field pushes the critical temperature all the way up to \(T_c \to \infty\ .\)
Stated otherwise: regardless of the ground state and of the nature of the low-temperature phase of a magnetic substance (and regardless of spatial dimension \(d = 1, 2, 3, \ldots\ ,\)) in the presence of a finite, real, external field \(B \ne 0\) LRO will be created at all finite \(T \geq 0\ .\) The only interesting question is, how does this magnetic order behave as a function of \(\left\vert B \right\vert\) in the limit \(\left\vert B \right\vert \to 0\ ?\) If it remains finite in that limit, we have spontaneous ferromagnetism; if in that limit it vanishes, the information to be sought is in the magnetic susceptibility, a quantity related to the short-range order.
Transfer matrix in d = 1 Ising model
In Gibbsian statistical mechanics, the partition function \(Z\) is related to the free energy \(F\) by \(Z=e^{-\beta F}=Tr \{ e^{-\beta H} \}\ ,\) hence knowledge of the one yields the other. The inverse temperature is \(\beta = 1/k_B T\ .\) The trace (abbreviated \(Tr\)) is defined as the sum over all diagonal elements of the argument, treated as a matrix. Because the number of such terms is exponential in \(N\) this sum cannot be performed efficiently in the limit \(N \to \infty\ ,\) especially near \(T_c\ .\) (Otherwise one could obtain, for example, the energy as a function of temperature \(\langle E \rangle = {\partial (\beta F) \over \partial \beta}\) and other thermodynamic quantities, numerically.) The following calculation, carried out explicitly for the Ising \(lc\ ,\) shows how to get around this difficulty in 1D.
In the 1D model with \(B=0\ ,\) the quantity inside the \(Tr \{ \}\) operation can be written as \(e^{-\beta H}=e^{\beta JS_1S_2} e^{\beta JS_2S_3} \ldots e^{\beta JS_nS_{n+1}}, \ldots\ ,\) upon ordering interactions consecutively. We note that each factor has exactly the same form, \(V = \begin{pmatrix} e^{\beta J} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta J} \end{pmatrix} = e^{\beta J} \mathbf{l} +e^{-\beta J} \sigma_x\ .\) (Here \(\mathbf{l}\) is the unit \(2 \times 2\) matrix.) It follows that,
\[\tag{3} Tr \{ e^{-\beta H} \} = Tr \{ V \cdot V \cdot \ldots V \} = Tr \{ V^N \} = \lambda_1^N + \lambda_2^N\]
where a dot "\(\cdot\)" indicates ordinary matrix multiplication. The \(\lambda\)'s are the two eigenvalues of the \(2 \times 2\) “transfer matrix” \(V\), viz., \(\lambda_1 = 2 \cosh \beta J\) and \(\lambda_2 = 2 \sinh \beta J\ .\) Then, \(Z = \lambda_1^N+ \lambda_2^N = \lambda_1^N (1+ \left ( \frac{\lambda_2}{\lambda_1} \right )^N )\ .\) Thus only the larger eigenvalue survives, given that the contribution of the smaller one to the partition function (and to \(F\)) is exponentially smaller when we proceed to the thermodynamic limit \(N \to \infty\ .\)
The largest eigenvalue also acquires a special significance if we identify the spinor \((p_j, 1-p_j)\) as the probability of the \(j\)th spin being “up” (\(p_j\)) and “down” \((1-p_j)\ .\) With \(p_j\) in the interval \([0,1]\) each of these probabilities is positive and they add up to 1. The corresponding probability of the \((j+n)\)th spin being “up” or “down” is then \((p_j, 1-p_j) \cdot V^n \propto (p_{j+n}, 1-p_{j+n}) \ .\) We present this as a proportionality and not as an equation because the sum of the probabilities at \(j + n\) also needs to be normalized. (In probability theory, by normalization is meant that each entry is positive and the sum of all entries is unity.) Therefore the correct equation is:
\[\tag{4} (p_j, 1-p_j) \cdot V^n = z(n) (p_{j+n}, 1-p_{j+n}) \]
with \(z(n)\) to be determined. In the present example it is found that the right-hand vector tends to (½,½ ) asymptotically at large \(n\ ,\) regardless of the initial \(p_j\ .\) If the initial state belonged to the largest eigenvalue of \(V\ ,\) which is \(\lambda_1 = 2 \cosh \beta J\ ,\) then \(p_j\) = ½ and the preceding result holds for all \(n\) and not just asymptotically. It follows that the function \(z(n)\) is \(z^n\) and that \(Z = z^N\ .\) The other eigenvalue of \(V\ ,\) \(2 \sinh \beta J\ ,\) belongs to a spinor (½ , –½ ) that cannot be interpreted in terms of probabilities.
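The whole 1D calculation fits in a few lines. Here is a minimal sketch (\(k_B = 1\), periodic boundary conditions so that \(Z = Tr\, V^N\)): build the \(2 \times 2\) transfer matrix, compare \(\lambda_1^N + \lambda_2^N\) with a brute-force sum over all configurations, and note that for large \(N\) only \(\lambda_1\) matters.

import itertools
import numpy as np

# Transfer-matrix check for the d = 1 Ising ring (periodic boundary conditions, k_B = 1).
N, J, T = 10, 1.0, 2.0
beta = 1.0 / T

V = np.array([[np.exp(beta * J), np.exp(-beta * J)],
              [np.exp(-beta * J), np.exp(beta * J)]])
lam = np.linalg.eigvalsh(V)                        # 2 sinh(beta J) and 2 cosh(beta J)
Z_transfer = np.sum(lam ** N)

Z_brute = 0.0
for spins in itertools.product((-1, 1), repeat=N):
    E = -J * sum(spins[n] * spins[(n + 1) % N] for n in range(N))
    Z_brute += np.exp(-beta * E)

print(f"Z from transfer matrix      : {Z_transfer:.6f}")
print(f"Z from exact enumeration    : {Z_brute:.6f}")
print(f"largest-eigenvalue estimate : {lam[-1] ** N:.6f}")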
Thus, the evaluation of \(Z\) can be reduced to an ordinary eigenvalue problem subject to the following famous, if obvious, theorem:
Frobenius’ Theorem: “The largest eigenvalue of a matrix of arbitrary dimension, all elements of which are positive, belongs to an eigenvector that has only non-negative elements.”
For want of a better name we shall call this the “largest eigenvector”. Because all other eigenvectors must have one or more changes of sign in order to be orthogonal to the “largest eigenvector,” it is the only eigenvector that can be normalized according to the following rule: each of its entries must lie in the interval 0,1 and the sum of all its entries must = 1. Thus normalized, the “largest eigenvector” becomes the “reduced density matrix” and its entries are probabilities. Only this “largest eigenvector” is ever needed in the calculation of \(Z\) and of free energy F, whereas all eigenvectors are required in the evaluation of any nontrivial correlation function.
Application to Ising model on Sq lattice
In 2D statistical mechanics, the matrix “transferring” the \(n^{th}\) column of spins to \(n+1\) is a sort of “quantum” \(lc\ .\) The rows are labeled \(m=1,2, \dots ,M\ .\) All bonds \(-J' \sum_{m=1}^M S_{n,m} S_{n,m+1}\) that connect spins on rows \(m\) and \(m+1\) within a single vertical \(n^{th}\) column must be included. Combining these with horizontal transfers of the \(n^{th}\) column into the \((n+1)^{st}\) we obtain a complete transfer operator \(V_n = \prod_m e^{\beta J' \sigma_{z,(n,m)} \sigma_{z,(n,m+1)}} (e^{\beta J} l_{(n,m)}+e^{-\beta J} \sigma_{x,(n,m)})\ .\) The horizontal contributions are given by the \(V\) defined just above Eq. (3). Because all references in this operator are to the \(n^{th}\) column we can omit the column index \(n\) for the sake of notational simplicity.
The second factor in the \(V\) shown above is exponentiated as follows\[(e^{\beta J} l_{m}+e^{-\beta J} \sigma_{x,m}) \equiv \sqrt{2 \sinh 2 \beta J} e^{K^* \sigma_{x,m}} \] after defining \(K= \beta J\ ,\) using a trivial Pauli operator identity, and defining \(\tanh K^* = e^{-2K}\) as before. After similarly defining \(K'= \beta J'\) for vertical bonds, we obtain the full 2D transfer matrix (all \(m\)) in the form:
\[\tag{5} W = C^M e^{K^* \sum_m \sigma_{x,m}} e^{K' \sum_m \sigma_{z,m} \sigma_{z,m+1}} \]
where \(C = \sqrt{2 \sinh 2K}\) (5A)
It “transfers” all the spins on the \(n^{th}\) column to \(n + 1\ .\) The exponent is congruent to a \(d=1\) Ising \(lc\) with nearest-neighbor bonds \(K'\) in a transverse magnetic field \(K^*\ ,\) an exactly solvable model. We therefore seek to solve the eigenvalue problem\[W \Psi = z \Psi\] for the largest possible value of \(z\ .\) This can be done following a sequence of simplifying transformations. The first of these is a global rotation about the y-axis by 90º, i.e., \(\sigma_{x,m} \Rightarrow \sigma_{z,m}\) and \(\sigma_{z,m} \Rightarrow - \sigma_{x,m}\ .\) After this, Eq. (5) becomes:
\[W = C^M e^{K^* \sum_m \sigma_{z,m}} e^{K' \sum_m \sigma_{x,m} \sigma_{x,m+1}} \] (5B)
It is possible to express both the spin operators \(\sigma_{z,m} = 2 \sigma_m^+ \sigma_m^- -1 \) and \(\sigma_{x,m} = \sigma_m^+ + \sigma_m^- \) entirely in terms of the spinor raising/lowering operators, in such a way that the exponents are homogeneously quadratic in the \(\sigma^{\pm}\)'s. But because these operators are neither fermions (which anticommute) nor bosons (which commute), these exponents cannot be readily diagonalized. The eigenvalues of the exponentiated quadratic forms have no obvious significance.
In fact, the \(\sigma^{\pm}\)'s satisfy the following mixed commutation relations\[\tag{6} \sigma_j^+ \sigma_j^- + \sigma_j^- \sigma_j^+ = 1\ ,\]
whereas for \(j \ne l\) the operators at different sites commute, \(\sigma_j^+ \sigma_l^- - \sigma_l^- \sigma_j^+ = 0\ .\)
To proceed we make use of a highly nonlinear “Jordan-Wigner” transformation. Such a mapping of fermions onto spins was originally invented in the 1920’s to prove that it was mathematically possible to construct a fermionic field theory out of an array of spins ½. Here we invert the construction, expressing each spin by a fermion operator \(c\) that carries an exponential wake made up of “earlier” fermion operators. The algebra of the fermions is postulated to be pure anticommutation\[\tag{7} c_j^{\dagger} c_k + c_k c_j^{\dagger} \equiv \{ c_j^{\dagger}, c_k \} = \delta_j^k, \{ c_j^{\dagger} c_k^{\dagger} \} = \{ c_j, c_k \} = 0\]
We note the following trivial identities\[c_j e^{\pm i \pi c_j^{\dagger} c_j} = -c_j, \quad e^{\pm i \pi c_j^{\dagger} c_j} c_j = + c_j, \quad e^{\pm i \pi c_j^{\dagger} c_j} = 1 - 2 c_j^{\dagger} c_j\] and \(e^{\pm 2 i \pi c_j^{\dagger} c_j} = 1\ .\)
We construct the Pauli spin operators in (5B) out of such fermion field operators.
\(\tag{8} \sigma_m^+ = c_m^{\dagger}\, e^{i \pi \displaystyle \sum_{j<m} c_j^{\dagger} c_j}\)
and \(\sigma_m^- = c_m e^{-i \pi \displaystyle \sum_{j<m} c_j^{\dagger} c_j}\ .\)
The reader will want to verify that this representation of the \(\sigma\) operators satisfies the mixed commutation relations in (6). Then, inserting (8) into (5B) with the aid of the above identities yields the transfer operator as a product of two exponential forms, each quadratic in fermion operators. Thus, the eigenvalues of the quadratic forms in the following expressions are useful in the evaluation of \(Z\ .\)
\(\tag{9} W = C^M e^{K^* \sum_m (2 c_m^{\dagger} c_m -1)} e^{K' \sum_m (c_m^{\dagger} - c_m)(c_{m+1}^{\dagger} + c_{m+1})} \)
We expand the local operators \(c_m\) in plane waves, \(c_m = \sqrt{\frac{1}{M}} \sum_{k=-\pi}^{\pi} e^{ikm} a(k)\ .\) (For didactic reasons we have imposed periodic boundary conditions on the \(lc\) of fermions, setting \(c_{m+M}=c_m\ ,\) but with little additional effort solutions can be found for more general or even for arbitrary boundary conditions, or for periodic boundary conditions on the original spin operators.) Because the Fourier expansion takes the form of a unitary transformation it preserves the algebra. Therefore the \(a\)’s satisfy the same set of anticommutation relations as the \(c\)’s in Eq. (7), viz.,
\(a^{\dagger}(k)a(q)+a(q) a^{\dagger}(k) \equiv \{ a^{\dagger}(k), a(q) \} = \delta_{k,q}\) with all other anticommutators = 0.
By translational invariance this procedure breaks the transfer matrix up into \(M/2\) noninteracting sectors, each labeled by \(k\) and containing a form bilinear in fermions\[W = C^M e^{K^* \sum_k (2 a^{\dagger}(k)a(k)-1)} e^{K' \sum_k e^{-ik} ( a^{\dagger}(-k)-a(k) )( a^{\dagger}(k)+a(-k) )} = \prod_{k>0} W(k)\] (9B)
Each factor on the rhs takes the form,
\(W(k) \propto e^{2K^* \big( a^{\dagger}(k)a(k) + a^{\dagger}(-k)a(-k) \big) } e^{K' \big( e^{-ik}(a^{\dagger}(-k)-a(k))(a^{\dagger}(k)+a(-k) ) + e^{ik} (a^{\dagger}(k)-a(-k))(a^{\dagger}(-k)+a(k)) \big) }\)
Because factors in different k-sectors commute, the individual \(4 \times 4\) \(W(k)\)’s can be diagonalized separately. We need to find the largest solution \(\lambda_k\) of the equation \(W(k) \Psi = \lambda_k \Psi\) in each separate k-sector. Their product yields the partition function \(Z\ .\) Consequently the free energy \(F\ ,\) proportional to the logarithm of \(Z\ ,\) is explicitly a sum which turns into an integral in the limit \(M \to \infty\ ,\)
\(F = -kTN \sum_{k=0}^{\pi} \log \lambda_k = -kT \frac{NM}{2 \pi} \int_{0}^{\pi} dk \log \lambda_k \)
The thermodynamic properties are obtained from \(F\) by successive differentiations. These derivatives of \(F/T\) can be calculated in closed form in terms of elliptic functions (Mattis, 2006). Note that \(F\) is (correctly) extensive (proportional to the area \(NM\ .\)) It is easily shown that the identical integral would have been obtained in the large \(NM\) limit, had we transferred the rows instead of the columns. So, even though the procedure might have seemed asymmetric, actually all symmetries are preserved.
Without going into details of the evaluation, we find the free energy \(F\) has singular derivatives at \(T_c\) (the actual value of \(T_c\) is easily calculated and agrees with the earlier estimates.) Above \(T_c\) there is no LRO and the magnetization is zero. At or below \(T_c\) there develops LRO because of a two-fold degeneracy of the ground state of the transfer operator. Correlations can be calculated with the aid of Toeplitz matrix theory. Both the magnetization and the LRO increase with decreasing \(T\) until, at \(T=0\ ,\) all spins are precisely parallel, all “up” or all “down”.
The two-dimensional transfer matrix of the \(d = 3\) dimensional Ising model can also be written in the form of Eq. (5B) provided the column label \(m\) is replaced by a planar label \((n,m)\ ,\) with bonds to \((n \pm 1,m)\) and \((n,m \pm 1)\ .\) That is, the transfer matrix for the 3D Ising model can be mapped onto a two-dimensional Ising model in a perpendicular field. Because only the largest eigenvalue is needed in the calculation of \(Z\) or \(F\ ,\) a variational approximation is useful, as is the renormalization group (RG).
Unfortunately the Jordan-Wigner transformation itself fails to be of help, because the exponential tails fail to cancel for half the bonds; therefore the quadratic form in spins on a 2D plane cannot be transformed into a quadratic form in either fermions or bosons. The transfer operator can, however, be reexpressed as a quartic form in fermions (Mattis, 2006; Chap. 3, §3.12). Thus the 3D Ising model falls into the realm of problems (\(\Phi^4\) field theories) that are generally well understood yet have not yet found an exact mathematical solution outside of approximate or RG procedures.
Other seemingly simple model ferromagnets that remain unsolved at the present time include the 2D Ising model in a real, finite, external magnetic field (whether homogeneous or staggered,) as well as most three-dimensional models of any kind.
More symmetry considerations
In the above, the spontaneous breaking of discrete up/down symmetry in the ground state at \(T = 0\) was of no particular consequence. But what if the symmetry had been continuous? Let us consider spins that can point into any direction according to either the O(2) (circular) or O(3) (spherical) symmetries. Then the excitation spectrum above the ground state becomes gapless. Consider the following magnetic polymer (\(lc\)) whose dynamics are described by the following Hamiltonian:
\[\tag{10} H = -J \sum_{n=1}^{N-1} \vec S_n \cdot \vec S_{n+1}\]
in which we assume the individual spins are themselves classical two- or three-dimensional vectors of unit length (all \(\vec S_n^2 = 1\)) and not operators. The dot product (\(\cdot\)) ensures the bond energies are scalar under rotations. This \(H\) is known as the “classical” Heisenberg Hamiltonian if the spin vectors are three-dimensional or as the “classical” “X-Y” model if the spins are constrained to lie in the x,y plane. We distinguish the classical spins here from any of the quantum versions discussed supra, in which the components of the individual spins fail to commute.
(The distinction between ferromagnetism in the extreme quantum limit of s=½ and the classical models of ferromagnetism may, in fact, be academic, as all interesting properties, correlations, etc. are qualitatively independent of the magnitudes of the spins; not so, the difference between models with Z(2), O(2) and O(3) symmetries. Such rotational symmetries, and not the quantum mechanics, seem to be a determining factor in the thermodynamics.)
The classical O(2) X-Y model is also known as the “plane rotator” model, given that bonds connecting nearest-neighbor sites \(i,j\) take the form \(-J \cos (\vartheta_i - \vartheta_j)\) of rigid coupled pendulums.
On a \(d = 2\) lattice, neither the O(2) nor the O(3) model can sustain LRO at any \(T > 0\ .\) The lack of spontaneous symmetry breaking at any finite \(T\) in both models is the result of a rigorous no LRO theorem first proved by Hohenberg and later generalized by Mermin and Wagner (Mermin & Wagner, 1966) to all systems having a continuous symmetry in \(d \le 2\) spatial dimensions. This theorem clearly does not apply to the Ising model because of its discrete symmetry; nor does it address the issues of the existence or nonexistence of a phase transition at finite \(T\ .\)
In fact, the X-Y model (but not the Heisenberg model!) does exhibit an unusual phase transition on a two-dimensional lattice, at a finite \(T_{K-T}\) approximately equal to \(0.9 J/k_B\) separating two disordered phases. This, the well-known “Kosterlitz-Thouless” phase transition, is a two-dimensional version of a liquid-vapor transition. There are many ways to examine its critical properties.
In one of them (Mattis, 1984), the transfer matrix of the classical model is mapped onto a one-dimensional anisotropic Heisenberg \(lc\) of spins ½ in which the J-matrix takes the form \(\begin{pmatrix} 0.9 & 0 & 0 \\ 0 & 0.9 & 0 \\ 0 & 0 & g \end{pmatrix}\ ,\) with the effective parameter \(g\) varying as \(1/k_B T\ .\) The critical point is thus at \(g=0.9\ .\) Internal excitations (here physically interpreted as the spectrum of quantized clockwise or anticlockwise vortices) are gapped (bound) at low temperatures but have a continuous spectrum above \(T_{K-T}\ .\)
On the same two-dimensional lattice, the O(3) model remains in its high-temperature phase at all finite \(T\) without undergoing any phase transition whatever. The cause is, presumably, the high density of low-energy hedgehog-like excitations called skyrmions that are allowed in this model but not in the other.
Both models do support a gapless spectrum of spin waves at low \(T\ .\) In dimensions \(d \ge 3\ ,\) both O(2) and O(3) models exhibit rather ordinary order-disorder second-order phase transitions at a finite \(T_c\ .\) Now let us examine some details in \(d=1\) as the simplest example.
Here again the ground state energy is the same \(E_0\) but the excitation spectrum can now be vanishingly small, as a small twist \(\phi\) in the orientation \(\vartheta\) of each spin relative to its neighbor (say, \(\vartheta_{n+1} = \vartheta_n + \phi\) with \(\vartheta_1=0\)) only costs an energy \(1/2 J \phi^2\) per bond. The imposition of any boundary conditions – say, requiring that the first and last spins be parallel – causes \(\phi\) to become discretized, e.g. \(\phi = 2n \pi/N\ ,\) with \(n\) an integer \(\le N\ .\) The ratio \(N/n = \lambda\) is related to the wavelength of the excitation, in units of the lattice parameter \(a\ .\) We call this excitation a spin wave; its energy forms a quasi-continuum and vanishes as \(n^2/N\ .\) The lack of an energy gap in the large \(N\) limit is a feature of many field theories with continuous symmetries, as was first remarked around 1960 by Nambu, Goldstone, et al., by analogy with the spin waves that Bloch had found some 30 years earlier. Where the “Goldstone mode” refers to the gapless spin wave spectrum, the “Goldstone boson” corresponds to the magnon, the elementary bosonic particle obtained by further quantization of the spin dynamics. It carries 1 unit of angular momentum \(\hbar\ .\)
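Spelling out that estimate (a one-line bookkeeping using only the uniform-twist configuration just described): each bond costs \(J(1-\cos \phi) \approx \tfrac{1}{2} J \phi^2\ ,\) so with \(\phi = 2\pi n/N\) the total excitation energy is
\[\Delta E_n \approx \tfrac{1}{2} N J \Big( \frac{2\pi n}{N} \Big)^2 = \frac{2\pi^2 J n^2}{N} \longrightarrow 0 \quad (N \to \infty)\ .\]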
But just as there is an exception in particle physics, e.g. for the Higgs boson, there is one in magnetism for integer quantum spins in a 1D antiferromagnetic Heisenberg chain, where the lowest excitations have to surmount a “mass” gap. We turn to this interesting anomaly next.
Role of spin in d = 1 antiferromagnets
The antiferromagnetic Heisenberg Hamiltonian, i.e., the \(lc\) of Eq. (10) with \(J < 0\ ,\) in which spins are \(S = 1, 2, \ldots\) operators, as opposed to the classical vectors considered earlier, has a magnon spectrum exhibiting a finite excitation gap even at the longest wavelengths. This is quite unlike the gapless spin wave spectrum \(\omega \propto k^2\) of the ferromagnet, \(J > 0\ ,\) and unlike the excitations of the S=½ antiferromagnet – whose spectrum, \(\omega \propto \left | k \right |\ ,\) is calculated exactly by “Bethe’s ansatz” discussed at the end of this article. The spin wave excitations of \(S= 3/2, 5/2 \ldots\) \(lc\) antiferromagnets and of classical spin antiferromagnets are also all gapless. So what happens to integer spin operators, with \(S(S+1) = 2, 6, 12, \ldots\ ,\) to make them that different?
We start with the proof that the excitation spectrum for all half-odd-integer spins \((1/2, 3/2, \ldots)\) is continuous and gapless, regardless whether the sign of \(J\) is positive or negative.
The proof in 1D is as follows: take the ground state wave function \(\Psi_0\) for a \(lc\) subject to periodic boundary conditions and operate on it by \(\Gamma = \prod_n \gamma_n \) in a way that distorts the \(n^{th}\) individual bond only by a small amount \(1/N\ ,\) thus each of the \(N\) bonds sees its energy rise in an amount \(O(1/N^2)\ .\) If \(\Gamma \Psi_0\) is orthogonal to \(\Psi_0\) we have constructed an excited state of total energy \(O(1/N)\) above the ground state. One may consider this as the variational calculation of the energy of a 1-magnon state. (The proof in \(d = 2\) and \(d = 3\) just extends the proof for the \(lc\) to finite-width strips or cylinders.) If the spins are higher but of the form half-odd-integer, an operator \(\Gamma\) having these properties is easily constructed. In the case of integer spins, however, the corresponding operator, when applied to the ground state wave function, fails to yield a state orthogonal to the ground state and the proof fails.
This energy gap in the spectrum of integer spin antiferromagnetic \(lc\)’s was first conjectured by D. Haldane and it is named after him; it turns out to exist only in 1D antiferromagnets and only if \(S\) is an integer: \(S =1, 2 , 3, \ldots\ .\) The magnitude of Haldane’s gap goes to zero as \(S\) is increased. (This is expected if the model is to approach its gapless correspondence limit smoothly.) The gap also vanishes in any dimension \(d > 1\ .\) Thus we are dealing with a feature that is both interesting and fragile, which is most pronounced for spins \(S =1\) in \(d = 1\) and will be extensively revisited at the end of this article.
We can discern the dichotomy in a \(lc\) of as few as 3 spins \(S\) arrayed in a triangle. For 3 spins the Heisenberg Hamiltonian is diagonalizable: \(H = \frac{-J}{2} \left[ T(T+1)-3S(S+1) \right]\ ,\) where \(T\) is the total combined spin. \(T\) has a maximum value 3/2 for spins S=½ and a maximum 3 for spins 1. In the ferromagnet (\(J > 0\)) the ground states for both these values of \(S\) belong to their respective maxima and are similar in all respects.
In the antiferromagnetic (\(J < 0\)) triangle of 3 spins, the ground states belong to a total spin minimum. The minima are \(T = 0\) for spins 1, and \(T\) = ½ for spins ½. After some back-of-the-envelope exact calculations, one determines that the ground state of the three spins ½ consists of two degenerate doublets, i.e., that it is 4-fold degenerate. Thus there is no energy gap separating the two lowest-lying states.
For the antiferromagnetic triangle made of spins \(S = 1\ ,\) however, the ground state belongs to a unique \(T = 0\) singlet state. All other eigenstates in this model lie at energies that are at least \(J\) higher so that here, the Haldane gap is \(J\ .\)
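The triangle is small enough to check by brute force. Below is a minimal numerical sketch (Python/NumPy; the helper names are ours, not from the text) that diagonalizes \(H = -J\sum \vec S_i \cdot \vec S_j\) over the three bonds with \(J = -1\) and reproduces the 4-fold degenerate ground state for \(s = \tfrac12\) and the unique singlet with gap \(\left | J \right |\) for \(s = 1\ .\)

```python
import numpy as np

def spin_matrices(s):
    """Spin operators (Sx, Sy, Sz) for spin s (hbar = 1) in the 2s+1 dimensional basis."""
    dim = int(round(2 * s)) + 1
    m = np.arange(s, -s - 1, -1)
    sp = np.zeros((dim, dim))
    for i in range(1, dim):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))  # matrix elements of S+
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(m)

def triangle_spectrum(s, J=-1.0):
    """Eigenvalues of H = -J (S1.S2 + S2.S3 + S3.S1), J = -1 is antiferromagnetic here."""
    ops = spin_matrices(s)
    dim = ops[0].shape[0]
    eye = np.eye(dim)
    def at_site(op, n):
        mats = [eye, eye, eye]
        mats[n] = op
        return np.kron(np.kron(mats[0], mats[1]), mats[2])
    H = sum(-J * at_site(op, a) @ at_site(op, b)
            for a, b in [(0, 1), (1, 2), (2, 0)] for op in ops)
    return np.round(np.linalg.eigvalsh(H).real, 6)

print(triangle_spectrum(0.5)[:6])  # -0.75 four times: two degenerate doublets, no gap
print(triangle_spectrum(1.0)[:4])  # unique singlet at -3, next level at -2: gap = |J|
```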
Exercise for the reader: contrast the eigenstates of \(S\)= ½ and \(S\)=1 Heisenberg antiferromagnetic chains of 4 spins when laid out on a single square plaquette, with or without diagonal linkages. (Either geometry can be done analytically in closed form.) The conclusions are quite similar. Discussion of the Haldane gap in the limit of large \(N\) is reprised near the end of this article. References to early and pertinent literature are given in Mattis, 2006.
The antiferromagnetic \(lc\) of spins \(s = 1\) exhibits an additional idiosyncrasy at large \(N\ :\) the two ends, at \(n = 1\) and \(N\) respectively, act like free spins ½ in their response to external fields, as in paramagnetic resonance (EPR.) The characteristics of EPR allow one to determine the spin; the ends of a chain of spins 1 display spins ½! This surprising behavior was, in fact, predicted theoretically – and confirmed by experiment – almost simultaneously. Chains of \(S = 2\) spins would, presumably, have ends that exhibit the properties of spins 1, etc. The situation is similar to that in the defective antiferromagnets discussed below, given that dangling ends of the chain at \(n=1\) and \(N\) can be viewed as breaks in a longer chain, or as a symmetry-breaking disruption of translational invariance, caused by cutting the bond connecting the last spin at \(N\) to the first, in a chain with periodic boundary conditions.
No magnetism at finite T in 1D
In 1D one can also obtain the free energy of the classical O(3) Heisenberg model without separate calculations of energy and entropy, using a transfer matrix for the partition function \(Z = e^{-\beta F}\ .\) To within boundary terms its largest eigenvalue yields,
\[\tag{11} F=-(N-1)k_BT \log \Big( \frac{k_BT}{J} \sinh \frac{J}{k_BT} \Big) \]
an expression that translates to a (thermal averaged) angle between neighboring spins of \(\langle \left\vert \phi \right\vert \rangle \approx \sqrt{\frac{k_BT}{J}}\) at low \(T\ .\) The correlations of two spins separated by a macroscopic distance \(na\) fall off \(\propto \exp (-n \langle \left\vert \phi \right\vert \rangle)\ .\) Thus, in this gapless one-dimensional model with continuous symmetry the ground state LRO disappears exponentially at any finite \(T > 0\ .\) Following the Mermin-Wagner theorem some such result was to be expected. It is also notable that many years earlier, L.D. Landau had already observed that no model with finite range interactions can sustain LRO in 1D at any finite \(T\ .\)
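A quick numerical check of Eq. (11) (a sketch with arbitrary illustrative values of \(J\ ,\) \(T\) and \(N\)): the largest transfer-matrix eigenvalue is the angular average \(\lambda = \tfrac12\int_0^\pi d\theta\,\sin\theta\, e^{\beta J\cos\theta} = \sinh(\beta J)/\beta J\ ,\) and \(F \approx -(N-1)k_BT\ln\lambda\) reproduces the closed form.

```python
import numpy as np

# Check of Eq. (11): F = -(N-1) kT ln[(kT/J) sinh(J/kT)].  J, kT and N are
# arbitrary illustrative values; k_B is absorbed into T.
J, kT, N = 1.0, 0.3, 1000
beta = 1.0 / kT

theta = np.linspace(0.0, np.pi, 20001)
lam = 0.5 * np.trapz(np.sin(theta) * np.exp(beta * J * np.cos(theta)), theta)

F_numeric = -(N - 1) * kT * np.log(lam)
F_closed  = -(N - 1) * kT * np.log((kT / J) * np.sinh(J / kT))
print(F_numeric, F_closed)   # agree to the accuracy of the quadrature
```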
The nature of local moments
Given all these choices of models, what are the most physically plausible magnetic contents of a unit cell? Typically, the spins and angular momenta that characterize an atom or ion disappear (are quenched) in solids. The rare earth elements are counter-examples, in that they have unfilled f-shells that can accommodate up to 7 electrons with parallel spins in orbitals that are somewhat smaller than typical inter-atomic distances and which are, therefore, not much disturbed by the crystal symmetry. A moment of up to 7 Bohr magnetons puts these spin magnitudes largely in the classical limit, such as was assumed in the classical Heisenberg model treated above.
Actually, the Hamiltonian in Heisenberg’s 1928 model of magnetism, or what is commonly understood today to be the Heisenberg Hamiltonian, is similar to Eq. (4) in form but uses operators for the components of the individual vector spins. As we know, in quantum theory all angular momenta satisfy an algebra \(\vec S \times \vec S=i\hbar\vec S\) with 3 operator components \(\vec S = (S_x,S_y,S_z)\ .\) In the extreme quantum limit, for a single electron, the \(S\)’s have an irreducible representation in \(2 \times 2\) Pauli spin matrices that anticommute with one another but commute with operators at all other sites. For higher spins the irreducible representations are \(2s+1\) dimensional, where \(\vec S^2 = \hbar^2 s(s+1)\ ;\) the various components are operators that satisfy \(\left[ S_x,S_y \right] =i\hbar S_z \ ,\) which is equivalent to the generic \(\vec S \times \vec S=i\hbar\vec S\ .\)
Despite the introduction of operators into the problem, the ground state of the s = ½ quantum ferromagnet (\(J > 0\)) remains simple: \(\Psi_0 = \prod_j \begin{bmatrix} 0 \\ 1 \end{bmatrix}_j\ .\) The product form is maintained even for higher spins, \(\Psi_0 = \prod_j \psi_j^{(0)}\ .\) Low-lying excitations – spin waves – of the Heisenberg ferromagnet were derived by F. Bloch from this explicit translational invariance. (The amplitude of an excitation has to be constant, therefore only the phase can advance. This is, by definition, a plane wave.) However, once there are many excitations present, as happens at finite \(T\ ,\) translational invariance is lost and the ensuing nonlinear problem becomes considerably more difficult.
Nevertheless, in 1D, and only for spins ½, a complete set of solutions encompassing all compound excitations in the Heisenberg model was conjectured by H. Bethe (Bethe, 1931). The “free fermions” of E. Lieb, T. Schultz and D. Mattis (Lieb, Schultz & Mattis, 1961) are merely a simplified form of Bethe’s ansatz that yields all the states of the spin ½ “X-Y” model (but again, only in 1D.) Before recapitulating the solutions of what are, after all, very special models, it is fruitful to take a step back and reexamine where the sign and magnitude of the nearest-neighbor interactions \(J\) connecting individual spins come from. We proceed from the point of view of the more fundamental many-electron physics.
Nature of magnetic interactions in cells and in metals
Itinerant electrons occupying unfilled or partly filled energy bands are principally responsible for chemical binding, electronic conductivity and any significant magnetic correlations in metallic materials. The “band theory” of electrons is fundamental to answering questions about interactions among magnetic species (even when it is appropriate to use quasi-classical localized spins in the theory, as in the aforementioned rare earths,) especially when the interactions among them are mediated by itinerant electrons.
There are at present two acceptable ways to treat itinerant electrons: either by way of Schrödinger’s wave equation or by using the “tight-binding approximation.” The first approach is the simpler one for present purposes, so it is used here. (The second is more descriptive when, as in the rare earth metals, some electrons occupy degenerate, localized, orbitals in each unit cell while other electrons are itinerant, thereby requiring labels for each.)
In 1D, the nonrelativistic Hamiltonian of N purely itinerant electrons is, quite simply,
\[\tag{12} H = - \frac{\hbar^2}{2m} \sum_{j=1}^N \frac{\partial^2}{\partial x_j^2} + V(x_1,\ldots,x_N)\]
The differential equation \(H \psi (x_1,\xi_1;\ldots; x_N,\xi_N) = E \psi (x_1,\xi_1;\ldots; x_N,\xi_N)\ ,\) is to be solved for its full complement of energies \(E\) and eigenfunctions \(\psi\ .\) The potential energy \(V\) is an almost arbitrary function of the coordinate variables \(x_j\) in the interval \(0 <x_j<L\ ,\) as long as it is symmetric under their permutation (because the particles are indistinguishable the potential energy cannot distinguish among them.)
Although the set of \(\xi\) labeling the electrons’ (discrete, ±½) spin coordinates is absent from the Hamiltonian in Eq. (12), it is present in the eigenfunctions. Implementation of the Pauli principle requires both, because a particle is identified by specifying both its spatial and spin coordinates.
Pauli Principle. The interchange of 2 fermion particles, that is, of both the space and spin coordinates of any 2 particles in \(\psi\ ,\) causes the latter to change its sign.
All \(\psi\) are constrained to be totally antisymmetric under the group of permutation of particles, i.e. to change sign under an odd number of interchanges of both the spatial and spin coordinates of the particles. This requirement reflects a fundamental tenet of quantum theory that is most often stated (imprecisely) as, “no two fermions with the same spin can occupy the same state.”
The two-body forces in \(V\) could conceivably be made sufficiently strong that not more than one electron can occupy a given cell – but they need not be. The only approximations in writing a Hamiltonian in the form of (12) come from using the non-relativistic form of the kinetic energy and, in the potential energy, neglecting any further terms that depend on the particles’ momenta (including relativistic effects such as spin-orbit coupling, generally thought to be negligible in many materials.) In this approximation, \(H\) commutes with the total spin operator and therefore the eigenstates of \(H\) can be labeled by the magnitude \(S\) of the total spin.
With these caveats, E.H. Lieb and the present author proved the following so-called “Lieb-Mattis theorem” in 1962. Let us denote it as “LM I”. It governs the eigenstate spectrum of the Hamiltonian in Eq. (12) and can be stated, in part, as follows,
If \(E_0(S)\) is the lowest energy eigenvalue belonging to total spin \(S\) of \(N\) electrons in 1D, then
\[\tag{13} E_0(S) \le E_0(S + 1)\ .\]
The inequality (<) applies generally but the equality (=) applies only if the two-body potential is sufficiently singular. LM I can be tweaked to higher dimensions in the possible, albeit implausible, cases of potentials that are separable in the Cartesian coordinates x, y, … of the particles.
(Note: there is no contradiction between LM I and the so-called “exchange” correction to the Coulomb interaction that favors the largest possible spin in the ground state of partly occupied spherical shells of angular momentum l ≥ 1. This ferromagnetic exchange mechanism forms the theoretical basis for Hund’s rules; it is quantitatively verified in the spectra of d- and f-shell transition series atoms and ions, among others. Transition-series shells contain 2l + 1 degenerate spatial states for each particle; the exchange corrections lift the degeneracies for 2 or more particles in the shell, favoring the maximum total spin. There is no such degeneracy to be lifted in a one-dimensional model.)
To restate LM I more succinctly: in a 1D system of electrons it is impossible to lower the total energy by increasing the total magnetization. Thus Eq. (13) precludes ferromagnetism in 1D – not just at finite temperature, because of the lack of LRO – but also in the ground state at T = 0. This result applies to lc’s of lengths \(N=2,\ldots,\infty\ .\) It follows that to describe magnetic properties of some material, using a simpler but less fundamental Ising or Heisenberg Hamiltonian formalism governing a smaller set of dynamical variables, any physically correct nearest-neighbor interaction parameters \(J\) must have a sign appropriate to antiferromagnetism – and not to ferromagnetism.
The principal virtue of LM I is that it reduces the number of plausible theories of ferromagnetism to just those few models that, paradoxically, reconcile long-ranged ferromagnetic order with a short-range tendency to antiparallelism. In this article we discuss three such mechanisms.
Antiferromagnetism and ferrimagnetism
In the 1930’s Néel had already postulated the existence of antiferromagnetism in some magnetic salts. Note that this fits in well with the aforementioned theorem. Néel’s insight explained why such substances did not exhibit a macroscopic magnetic moment despite indirect evidence of internal spin dynamics.
Late in the 1960’s W. Heisenberg, together with H. Wagner and K. Yamazaki (Heisenberg, Wagner & Yamazaki, 1969) reprised the study of his eponymous antiferromagnet in 3D. They note that when viewed as a field theory, Heisenberg’s antiferromagnet has a spectrum similar to QED. In both instances the ground state has zero angular momentum and elementary excitations (magnons or photons) are transversal to the direction of propagation and (at long wavelengths) have dispersion linear in the momenta \(\left | k \right |\ .\)
Earlier in the 1960’s a second theorem by Lieb and Mattis addressed antiferromagnetism, ferro- and ferri-magnetism, all within this Heisenberg model, in all dimensions d. Denote this theorem “LM II”. (Lieb generalized it subsequently, to the Hubbard model of interacting electrons, in the special case of a half-filled band.)
LM II states in part that if in any dimension \(d\) a space lattice is bipartite, (i.e., if it can be subdivided into 2 sublattices, say \(A\) and \(B\ ,\) such that spins \(s\) on the \(A\) sublattice interact antiferromagnetically but only with spins \(s\) on the \(B\) sublattice, and vice-versa,) the maximum spin in the ground state is exactly \(S_{total}= s \left | N_A-N_B \right |\ .\)
\(S_{total}\) is only identically zero if \(N_A = N_B\) but otherwise, the ground state spin can be finite. Thus there can be ferromagnetic LRO in the ground state even though the magnetization only attains the smallest value possible and decreases further with increasing temperature \(T\ .\) In \(d \ge 3\) lattices this LRO can persist over the entire range of \(T < T_c\) but in \(d \le 2\) lattices, we note once again that, because of continuous symmetry, \(T_c =0\ .\)
In bipartite lattices where \(N_A = N_B\) but the magnitude of the individual spins \(s_A\) on sites of the \(A\) sublattice ≠ that of the \(s_B\ ,\) LM II shows that even though the A-B couplings are antiferromagnetic the ground state can have a magnetic moment corresponding to maximum spin \(S_{total}= N \left | s_A-s_B \right |\ .\) More generally, the actual proof of LM II indicates that the ground state spin on bipartite lattices can attain \(S_{total}= \left | N_As_A-N_Bs_B \right |\ .\) Ferrimagnetism is a good example found in nature.
This phenomenon is named after the ferrites; in magnetic iron ore, magnetite (\(Fe_3O_4\)), a spin 1 \(Fe^{2+}\) is neighbor to a spin ½ \(Fe^{3+}\ .\) Despite being antiparallel, their spins do not compensate. In other oxides or salts forming bipartite lattices, \(N_A = 2N_B\) (or other multiples) are possible. In any event, the macroscopic magnetism in a ferrimagnet is spread over the entire lattice just as in a ferromagnet, and like the latter, it loses LRO at any finite \(T > 0\) in d=1 or 2 dimensions in the absence of an ordering external magnetic field; however, the situation is different in d ≥ 3.
In d ≥ 3 the ferrimagnet can, just like the hypothetical ferromagnet it resembles, exhibit spontaneously broken symmetry and LRO that gradually decreases with increasing \(T\) and vanishes only at a finite Curie temperature \(T_c\ .\) (Again, for \(T \ge T_c\) all LRO is extinguished.) Its elementary excitations, the magnons, exhibit ferromagnetic-type dispersion \(\omega \propto \left | \vec k \right |^2\) at long wavelengths. This dispersion reverts to an antiferromagnetic-type dispersion \(\omega \propto \left | \vec k \right |\) for short wavelength excitations.
Although it is an instructive model, one cannot consider ferrimagnetism as providing a general picture of ferromagnetism, given that the model applies principally to magnetic salts with localized spins and provides no explanation for LRO in metals such as iron.
The Kondo lattice
A different mechanism, derived from the model single-spin impurity “Kondo effect,” has been invoked by many authors to explain “heavy fermion” ferromagnetism in rare earth solids. This so-called “Kondo lattice” provides a more general framework in which to situate ferromagnetism. Imagine itinerant electrons (if they are confined to a narrow band they are “heavy,” otherwise not) that interact locally with the localized spin of an f-shell (magnitude s = ½ or greater.)
Hund’s rule teaches that within the \(f\)-shell the interactions align the spins of the electrons into a total spin S; therefore the low-lying Hilbert space of each atom or ion is limited to the corresponding \(2S+1\) states of \(S_z\) ranging from \(- S\) to \(+S\ .\)
The interaction between the itinerant and \(f\)-shell electrons can be expressed in Heisenberg form with, typically, \(J < 0\ ,\) given that itinerant and localized electrons belong to different shells and that intershell intraatomic interactions typically – but not necessarily – favor antiparallelism. We also consider \(J > 0\) to investigate exceptional cases. The Hamiltonian that includes both the motion of the band electrons and their interactions with localized spins is inherently quantum mechanical. It is,
\[\tag{14} H = H_0 - J \sum_i \vec S_i \cdot \vec \sigma_i\]
The operators governing localized spin angular momenta are the components of \(\vec S_j = (S_{x,j},S_{y,j},S_{z,j})\) whose irreducible representations are \(2s+1\) dimensional matrices. The spin of the itinerant electron can be expressed in the Wannier creation/destruction operators localized at site\( R_j\ .\) It is \(\vec \sigma_j = \frac{1}{2} (\psi_{j,\uparrow}^{\dagger},\psi_{j,\downarrow}^{\dagger}) \cdot \vec \sigma \cdot \begin{pmatrix} \psi_{j,\uparrow} \\ \psi_{j,\downarrow} \end{pmatrix}\ .\)
The Hamiltonian of the itinerant particles is given by \(H_0 = -t \sum_{(i,j)} \sum_{\xi=-1/2}^{+1/2} (\psi_{j,\xi}^{\dagger} \psi_{i,\xi}+H.c.)\ ,\) otherwise known as the “tight-binding” energy-band Hamiltonian. (Sites \(i\) and \(j\) are nearest neighbors.) Expanding the Wannier operators in a series of Bloch operators on a cubic lattice, \(\psi_{j,\xi}= \sqrt{1/N} \sum_k e^{-ik \cdot R_j} c_{k,\xi}\ ,\) the energy-band Hamiltonian becomes:
\[\tag{15} H_0 = -2t \sum_{k,\xi} (\cos k_1 + \ldots + \cos k_D) c_{k,\xi}^{\dagger} c_{k,\xi}\]
where \(k\) spans the “first Brillouin Zone,” i.e., \(-\pi/a < k_1 < \pi/a\ ,\) … , \(-\pi/a < k_D < \pi/a\ .\) The band structure on these simple lattices is just the sum of \(\cos k_j\) terms. The eigenstates of (14) are complicated functions of the number of itinerant electrons \(N_e\ ,\) the dimensions \(d\ ,\) and the ratio \(J/t\ .\) Interactions break the translational invariance. So while typical eigenstates for either sign of J are generally metallic (owing to the itinerancy) and magnetic, an insulating phase is also possible. The ground state at \(T=0\) can sustain magnetic LRO in all dimensions d ≥1.
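For orientation, here is a small numerical sketch of the band of Eq. (15) (Python; \(t = 1\ ,\) \(a = 1\ ,\) \(d = 3\) and the grid size are illustrative choices): the dispersion \(\epsilon(k) = -2t\sum_j \cos k_j\) spans the bandwidth \(4\left | t \right | d\ .\)

```python
import numpy as np

# Band of Eq. (15) on a simple cubic lattice; t = a = 1 and d = 3 are illustrative.
t, n = 1.0, 50
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))
print(eps.min(), eps.max())   # -2td and +2td: total bandwidth 4|t|d = 12
```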
If the number of itinerant electrons \(N_e\) equals the number of sites \(N\) precisely (i.e., in the case of a half-filled band,) the ground state, which normally is metallic, is instead an insulator with an energy gap O(J). For J < 0 each cell has spin | s–½ |, which vanishes if the \(f\)-shell contains only a single electron but is otherwise non-zero. If J > 0 each cell has spin | s+½ | in the ground state. Nevertheless at half-filling the effective coupling between nearest neighbor spins due to the motional energy \(H_0\) is itself always antiferromagnetic, hence the half-filled band is an antiferromagnetic insulator.
Now consider the special case s = ½, J < 0, and suppose the number of electrons is \(N_e \ne N\ .\) Define the excess number \(n=N_e - N\ .\) This quantity can be as positive as +N or as negative as –N.
In 1D, it has been proved that the ground state spin is ½\(\left | n \right |\) for all values of \(n\ .\) This suggests that the ground state is a ferrimagnet of some sort, and that the A and B sublattices consist of the itinerant and the localized electrons respectively. This may not be the case in higher dimensions, and the model appears to lean more to ferrimagnetism in d=1 than in higher dimensions (note).
Paradoxically, for J > 0 (but not too large compared to t,) and |n| arbitrary (but small compared with N,) the individual atoms acquire maximum spin but the overall spontaneous magnetization vanishes.
Starting some 3 decades ago, a voluminous literature on the Kondo lattice model has been motivated by interest in heavy fermions, high-temperature superconductivity and other magnetic phenomena. The topic deserves further investigation. The phase diagram as a function of \(s\ ,\) \(J/t\) and \(n\) in d dimensions is obviously very complex, with ferromagnetic, spiral magnetic and superconducting phases all within the realm of possibilities.
Nagaoka mechanism of ferromagnetism
The simplest model displaying ferromagnetism is named after Nagaoka. Its description follows: \(N_e\) electrons in a nondegenerate band (\(H_0\) of Eq. (14) is a good example) are subject to a large, repulsive on-site potential such that no two electrons, even though they have opposite spins, can occupy the same site. The hopping matrix element that connects 2 neighboring sites is \(t\ .\) Thus the Hamiltonian, known as the Hubbard Hamiltonian if \(U^*\) is finite, is:
\[\tag{16} H_{nagao} =H_0 + U^* \sum_j c_{\uparrow}^{\dagger} (R_j) c_{\uparrow} (R_j) c_{\downarrow}^{\dagger} (R_j) c_{\downarrow} (R_j)\]
The interaction \(U^*\) is a local repulsive potential. When taken in \(\lim U^* \to +\infty\) the second term in (16) acts as a projection operator to prevent double occupancy of any of the \(N\) given sites, each of which can still be empty or occupied by a single electron.
If \(N_e = N-1\) there is one “hole.” If the arbitrary disposition of spins around the hole is different for each position of the hole, then the hole can diffuse but its bandwidth \(4 \left | t \right | d\) is reduced to \(4 \left | t_{eff} \right | d\) with \(t_{eff}\) smaller than \(t\ ,\) its value depending on the details of the spin distribution. There is, however, one exception: If all spins are parallel – i.e. maximally ferromagnetic – then, because of translational invariance, \(t_{eff} \to t\ .\) So the energy of a single hole is lowest, at \( -2 \left | t \right | d\ ,\) in the ferromagnetic state.
Given that the ferromagnetic state(s) are the lowest in energy in the presence of 1 (or a few) holes, this must change when the density of holes exceeds a critical value. To lower their kinetic energy the electrons must cease to have all parallel spins (Gulacsi & Vollhardt, 2005), and an ordinary Fermi liquid of electrons, half of which have spins up and the other half down, is recovered, with any residual interactions then presumably given by the Ruderman-Kittel-Yosida mechanism. The critical number of holes beyond which the ferromagnetism is lost is estimated at a few percent of N in 3D (there are calculations for various lattices but no exact formula) and zero in d < 3.
Frustration is a phenomenon whereby antiferromagnetic bonds cannot all be satisfied, thereby causing nontrivial symmetry breaking. The simple hypercubic lattices are bipartite and unfrustrated. But if there were antiferromagnetic interactions among members of the \(A\) sublattice and/or among members of the \(B\) sublattice, in addition to the antiferromagnetic bonds connecting spins on one sublattice with the other, it would become difficult or impossible for the spins on either of the sublattices to remain parallel to one another in the ground state. This is what happens in geometrical frustration. For this reason the abovementioned LM II simply does not apply to geometrically frustrated lattices.
Conceptually the simplest example of geometrical frustration is the triangular lattice. As the smallest example, consider a single triangle of 3 spins that are coupled antiferromagnetically. Whether the spins are in the quantum or classical limits, all three bonds cannot be simultaneously satisfied. This is to be distinguished from frustration caused by the random signs of the interactions in spin glasses.
One achieves a “sort of” geometrical frustration in the 1D Heisenberg antiferromagnet by augmenting the nearest-neighbor antiferromagnetic bonds \(J\) by second-neighbor antiferromagnetic bonds \(J'\ ;\) at some ratio \(J'/J=O(1)\) the ground state solution ceases to exhibit LRO and breaks translational invariance spontaneously, turning itself into products of localized singlet (spin zero) clusters. Frustration raises the energies of the ground state and low-lying states and, typically, their degeneracies also.
In 2D there exist macroscopic models in which the free energy can be solved exactly at all \(T\) provided the interactions are restricted to nearest-neighbors. Consider the triangular Ising antiferromagnet (\(J < 0\ ,\)) a prime example of geometric frustration. While the ferromagnet on a triangular lattice has a finite Curie temperature, the antiferromagnet on the same lattice has none (as first shown by G. Wannier.) Its entropy remains macroscopic even at \(T = 0\ ,\) in flagrante delicto of the Third Law of thermodynamics.
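The macroscopic \(T = 0\) entropy can be glimpsed by brute force on a tiny cluster. Below is a minimal sketch (Python; the 3×3 periodic cluster is an arbitrary illustrative size, far too small to reproduce Wannier's entropy per site) that enumerates all configurations of a triangular-lattice Ising antiferromagnet and counts the ground-state degeneracy.

```python
import numpy as np
from itertools import product

# Triangular-lattice Ising antiferromagnet on a 3x3 periodic cluster (illustrative size).
Lx = Ly = 3
J = -1.0                                   # antiferromagnetic sign convention of the text
sites = [(i, j) for i in range(Lx) for j in range(Ly)]
bonds = []
for (i, j) in sites:
    for (di, dj) in [(1, 0), (0, 1), (1, 1)]:   # square lattice + one diagonal = triangular
        bonds.append(((i, j), ((i + di) % Lx, (j + dj) % Ly)))

def energy(cfg):
    # H = -J * sum_bonds s_i s_j  for Ising spins +-1
    return -J * sum(cfg[a] * cfg[b] for a, b in bonds)

energies = []
for spins in product([+1, -1], repeat=len(sites)):
    energies.append(energy(dict(zip(sites, spins))))
energies = np.array(energies)
E0 = energies.min()
print("ground-state energy:", E0, " degeneracy:", int((energies == E0).sum()))
```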
The Ising model on an unfrustrated \(sq\) lattice, in which random \(\pm J\) interactions are frozen-in with equal probability, can also be solved explicitly and exactly (Forgacs, 1980; Mattis & Swendsen, 2008). Although the geometry is not frustrated, many of the plaquettes (all those with an odd number of antiferromagnetic bonds) are. This stochastic system – the prototype of what is often called a spin glass – also has a macroscopic entropy at \(T = 0\ ,\) in violation of the Third Law. The Curie temperature is lowered by this disorder and the maximally frustrated Ising spin glass has \(T_c = 0\ .\) Thus, distinct sources of frustration may have similar outcomes.
Defective antiferromagnets
In a Heisenberg bipartite antiferromagnet of spins ½, if one just plucks out a single site from the \(B\) sublattice at random the resulting ground state belongs to total spin \(S\)=½. If two sites on the same sublattice are plucked out, the total spin is 1 but if the plucked sites are nearby but on distinct sublattices, the resulting spin is again \(S=0\) (see LM I and II.)
Quantum Monte-Carlo calculations in 2D have shown a surprising but logical result: each missing spin is locally compensated (“screened”) by spin deviations carved out of a finite but large neighborhood of the defect, with an amplitude that drops off with distance. Being so spread out, its differential thermodynamical properties (e.g., the free energy of the defective lattice less the free energy of the same lattice without defect) re-appear as those of a classical vector spin \(S\ .\) Thus, even though the underlying eigenstates are all quantum mechanical the differential magnetic susceptibility near the defects is that of a classical spin \(S\ :\) \(\chi_{imp} = \frac{S^2}{T}+\ldots\) (and not \(\frac{S(S+1)}{T}\) +…) (Nagaosa, 1989), where “…” indicate logarithmic corrections higher-order in \(T\ .\)
Decomposition of 1D quantum antiferromagnets into fermions
With ferromagnetic interactions, the ground state is typically a product function and the excited states are, in general, spin waves. On the other hand, the ground state of antiferromagnets is always complex and the excited states may or may not follow a simple pattern. In some artificial cases with nearest- and next-nearest neighbor interactions, models of quantum spins can be solved, i.e. can have some or all their eigenstates retrieved. But it is only in d = 1 dimension with nearest-neighbor interactions that there is sufficient simplification to obtain all the eigenstates, thermodynamics, and any other observable property. Consider the anisotropic Heisenberg model Hamiltonian on the \(lc\ ,\)
\[\tag{17} H = \sum_n \left ( \frac{1}{2} (S_n^+ S_{n+1}^- +H.c.) + gS_{z,n}S_{z,n+1} \right )\]
It is an X-Y model if g = 0, the Heisenberg antiferromagnet if g =1 and the Heisenberg ferromagnet for g=–1. For spins ½ the Bethe ansatz yields the ground state and spectrum of excited states at all g. The idea is that if we start from the “vacuum” (all spins “down”,) a 2 spin-wave excitation takes the form \(\sum_i \sum_j f_{i,j} S_i^+ S_j^+ | all \downarrow >\) where \[f_{i,j} = \exp \big(i(ki+k'j+\tfrac{1}{2} \psi_{k,k'})\big) + \exp \big(i(k'i+kj-\tfrac{1}{2} \psi_{k,k'})\big)\] (un-normalized) for i > j, a sum of the only two product states having the same motional energy and the same momentum \((k+k')\ .\) If i < j, the phase \(\psi_{k,k'}\) changes sign. Instead of scattering, which is what occurs in d≥2 dimensions, in 1D solvable models there is only refraction.
In the spin ½ X-Y model, the interaction g=0 and the boundary condition that enforces \((S_n^{\pm})^2 \equiv 0\) requires the phase factor to be \(\psi = \pm \pi\ ,\) independent of \(k,k'\ .\) Then \(f\) can be identified as a determinantal function, one that is easily generalized to accommodate any number of spin-wave excitations.
An easier way to obtain all the solutions is to take advantage of the Jordan-Wigner transformation introduced earlier, and a-priori express the spins in terms of fermions \(c_i\) with “tails” on a finite \(lc\) that starts at n=1 and ends at \(N\ .\) We start by transforming the Hamiltonian into its fermion representation,
\[\tag{18} H = \sum_n \left ( \frac{1}{2} (c_n^{\dagger} c_{n+1} +H.c.) + g (c_n^{\dagger} c_n -1/2)(c_{n+1}^{\dagger} c_{n+1} -1/2) \right )\]
(the exponential tails cancel in the case of nearest-neighbor interactions.) This expresses the equivalence of a one-dimensional gas of spinless fermions and the anisotropic Heisenberg model in 1D. One-fermion states can be expanded in \(\sin (kn)\) functions, \(c_n = \sqrt{\frac{2}{N}} \sum_k a(k) \sin (kn)\ ,\) where \(k = \frac{\pi}{N+1}m\ ,\) with \(m\) an integer in the range \(1, N\ ,\) to satisfy \(c_{N+1} = 0\ .\) Then the X-Y part of the Hamiltonian reduces to diagonal form with \(a^{\dagger}(k)a(k)=n(k)\) the only remaining operators. (The thermal average occupancy number is the Fermi function \( <n(k)>=\frac{1}{e^{\epsilon(k)/k_BT} + 1}\) with \(\epsilon(k) = \cos k\ .\))
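A minimal numerical sketch of the free-fermion solution just described (Python; \(N\) and the temperature are illustrative values): the single-particle energies are \(\epsilon(k)=\cos k\) with \(k=\pi m/(N+1)\ ,\) the ground state fills the modes with \(\epsilon<0\ ,\) and thermal averages follow from the Fermi function.

```python
import numpy as np

# Free-fermion solution of the spin-1/2 X-Y chain (g = 0) on an open chain of N sites.
# N and kT are illustrative values.
N, kT = 100, 0.2
m = np.arange(1, N + 1)
k = np.pi * m / (N + 1)
eps = np.cos(k)                        # single-particle energies epsilon(k)

E0 = eps[eps < 0].sum()                # ground-state energy: fill all negative-energy modes
n_k = 1.0 / (np.exp(eps / kT) + 1.0)   # Fermi occupation numbers at temperature kT
E_T = np.sum(eps * n_k)                # internal energy
F_T = -kT * np.sum(np.log(1 + np.exp(-eps / kT)))   # free energy of the fermion gas
print(E0, E_T, F_T)
```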
If g≠0 the calculations are more elaborate. Phase shifts take the nearest-neighbor interactions into account (repulsion for g > 0 or attraction for g < 0) and must be summed over all pairs of particles and their N! permutations. Thus,
\[\tag{19} f_{i,j,m,n,\ldots} = \sum_{permutations P} (-1)^P \exp i(k_1 i+k_2 j+\ldots+\frac{1}{2} \sum_{r<t} \psi(k_r,k_t))\]
The phase shifts are self-consistent functions of \(g, k\) and \(k'\ .\) After some lengthy algebra one finds that for \(g\) in the range \(-1 \le g \le +1\ ,\) the quasi-particle spectrum of excitations is gapless, mapping onto that of the X-Y model. For \(|g| > 1\ ,\) the spectrum is gapped. The thermodynamics is essentially that of an Ising ferromagnet if \(g\) << –1 and of an antiferromagnet if \(g\) >> +1.
Decades after these solutions were understood for spins ½ and the various features that depend on the nature of the excitations spectrum (magnetic susceptibility, specific heat and other thermodynamic properties that can be obtained from the correlation functions) had been calculated explicitly, these results were (incorrectly) generalized by several authors to higher spins on the \(lc\ ,\) notably to integer spins 1, etc. What invalidates all such generalizations are the subsidiary conditions discovered by C.N. Yang and R. Baxter, the Yang-Baxter (YB) relations, satisfied for three or more spins ½ on a \(lc\) but not for any of the higher spins.
Briefly, the \(YB\) relations deal with the effects of repeated scatterings. If scattering of \(k_1\) and \(k_2\) (abbreviated 1 & 2) followed by 1 & 3 leads to a phase shift that differs from scattering of 1 & 2 followed by 2 & 3, Bethe’s ansatz is invalid and scattering theory must be used.
More simply, we can just point out that the transformation of spins to fermions fails for quantum spins \(s\) > ½, hence Eq. (18) is no longer equivalent to the original Eq. (17) and the fermionic solutions to Eq. (18) become irrelevant. Haldane found that the categories of integer spins and half-odd-integer spins are distinct inasmuch as they could be mapped onto distinct field theories (Haldane, 1983). Even in the parameter range \(|g| \le 1\) that maps onto \(XY\ ,\) the \(lc\) of integer spins has a gapped spectrum while half-odd-integer spins have a gapless spectrum. All dynamical and thermodynamical properties then differ to that extent.
Physics and mathematics have always had a close relation, but none closer than the calculation of magnetism using concepts in analysis, number theory, algebra and group theory. We have tried to show this in the present article by concentrating on just a few of the topics that have come up in an evolving theory of magnetism. Hopefully, an even broader understanding of magnetic phenomena will follow new mathematics and mathematical concepts.
While not numbered, references/footnotes are listed in order of their appearance in the text.
• A. Levinovitz and N. Ringertz, Eds., The Nobel Prize, the first 100 Years, Imperial College Press, London, 2001
• Daniel C. Mattis, The Theory of Magnetism Made Simple: an introduction to physical concepts and to some useful mathematical methods, World Scientific Publ. Co., Singapore, 2006; the development of various aspects of magnetism and of its associated theories is the subject of chapters 1 and 2 and an extensive bibliography leads to the original documentation.
• much of this material is discussed in greater depth in the various chapts. 3 – 9 of ref. 2.
• P.A.M. Dirac, Proc. Roy. Soc. (London) A123, 60 (1931), also see pp. 76,78, ref. 2. The magnetic monopole has never been observed, but the existence of just one such monopole would necessitate quantization of all electric charges in the universe, a known fact of nature – and one that is otherwise unexplained.
• ref. 2, p. 31 recounts the 1920’s history of this concept that culminated in the constant \(\mu_B =\frac{e \hbar}{2m_ec} = 0.927 \times 10^{-20}\) erg/gauss, in which \(m_e\) is the mass of the electron
• ref. 2, chapt. 8.
• H. Kramers and G. Wannier, Phys. Rev. 60, 252, 263 (1941)
• L. Onsager, Phys. Rev. 65, 117 (1944)
• T. Schultz, D. Mattis and E. Lieb, Rev. Mod. Phys. 36, 856 (1964)
• see ref. 2, chapt. 3, §3.12
• As we shall see later, this statement does not apply to the special case of one-dimensional antiferromagnets, in which the magnitude of the individual spins plays an important role.
• N. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 and 1307 (1966)
• D. Mattis, Phys. Lett. 104, 357 (1984)
• H. Bethe, Zeit. f. Physik 71, 205 (1931), reprinted in English translation in D. Mattis, The Many-Body Problem, an encyclopedia of exactly solved models in one dimension, World Scientific Publ., Singapore, 2009 (3rd Printing with revisions and corrections.)
• E. Lieb, T. Schultz and D. Mattis, Ann. Phys. (NY) 16, 407 (1961), also reprinted in its entirety in The Many-Body Problem cited above. Among recent applications, note J. Jing and H. Ma, Level Crossing and Quantum Phase Transition of XY Ring, Mod. Phys. Lett. B22, 535 (2008)
• W. Heisenberg, H. Wagner and K. Yamazaki, Nuov. Cim. LIX A, (1 Feb. 1969)
• We list some early papers: P. W. Anderson, Heavy-electron superconductors, spin fluctuations and triplet pairing, Phys. Rev. B30, 1549 (1984), G. Baskaran and P.W. Anderson, Gauge theory of high-temperature superconductors and strongly correlated Fermi systems, Phys. Rev. B37, 580 (1988), P. Fazekas and E. Muller-Hartmann, Magnetic and nonmagnetic ground states of the Kondo lattice, Zeit. f. Phys. B85, 285 (1991), M. Sigrist, H. Tsunetsugu and K. Ueda, Rigorous results for the one-electron Kondo lattice model, Phys. Rev. Lett. 67, 2211 (1991), J.A. White, Numerical exact diagonalization of the one-dimensional symmetric Kondo lattice, Phys. Rev. B46, 13905 (1992), P. Paul and D. Mattis, Extinction of spin interactions in the 2D Kondo lattice, Int. J. Mod. Phys. B24, 3199 (1995), H. Tsunetsugu, M. Sigrist and K. Ueda, The ground state phase diagram of the one dimensional Kondo lattice model, Rev. Mod. Phys. 69, 809-864 (1997), S. Capponi and F.F. Assaad, Spin and charge dynamics of the ferromagnetic and antiferromagnetic two-dimensional half-filled Kondo model, Phys. Rev. B63, 155114 (2001)
• Z. Gulacsi and D. Vollhardt find this in a similar model (the periodic Anderson model) that can be solved exactly; see arXiv:cond-mat/0504174v1 (7 April, 2005)
• G. Forgacs, Phys. Rev. B22, 4473 (1980). See also D. Mattis and R. Swendsen, Statistical Mechanics Made Simple, 2nd Edition, World Scientific Publ. Co, Singapore, 2008, §8.12, for the explicit solution of the transfer matrix in this example and for a discussion of unfrustrated (separable model) spin glasses.
• N. Nagaosa, Y. Hatsugai and M. Imada, J. Phys. Soc. Jpn. 58, 978 (1989), with further refs. given in ref 2., chapt. 5.
• F.D.M. Haldane, Phys. Lett. A93, 454 (1983) and Phys. Rev. Lett. 50, 1153 (1983), see also: D. Controzzi and E. Hawkins, Int. J. Mod. Phys. B9, 4449 (2005)
|
8de03cf51da7652d | Saturday, January 21, 2012
Some parallels between classical and quantum mechanics
This isn't really a blog post. More of something I wanted to interject in a discussion on Google plus but wouldn't fit in the text box.
I've always had trouble with the way the Legendre transform is introduced in classical mechanics. I know I'm not the only one. Many mathematicians and physicists have recognised that it seems to be plucked out of a hat like a rabbit and have even written papers to address this issue. But however much an author attempts to make it seem natural, it still looks like a rabbit to me.
So I have to ask myself, what would make me feel comfortable with the Legendre transform?
The Legendre transform is an analogue of the Fourier transform that uses a different semiring to the usual. I wrote briefly about this many years ago. So if we could write classical mechanics in a form that is analogous to another problem where I'd use a Fourier transform, I'd be happier. This is my attempt to do that.
When I wrote about Fourier transforms a little while back the intention was to immediately follow it with an analogous article about Legendre transforms. Unfortunately that's been postponed so I'm going to just assume you know that Legendre transforms can be used to compute inf-convolutions. I'll state clearly what that means below, but I won't show any detail on the analogy with Fourier transforms.
Free classical particles
Let's work in one dimension with a particle of mass \(m\) whose position at time \(t\) is \(x(t)\). The kinetic energy of this particle is given by \(\frac{1}{2}m\dot{x}^2\). Its Lagrangian is therefore \(L = \frac{1}{2}m\dot{x}^2 - V(x)\), where \(V\) is the potential.
The action of our particle for the time from \(t_0\) to \(t_1\) is therefore
\[S = \int_{t_0}^{t_1} L\big(x(t),\dot{x}(t)\big)\,dt\]
The particle motion is that which minimises the action.
Suppose the position of the particle at time \(t_0\) is \(x_0\) and the position at time \(t_1\) is \(x_1\). Then write \(S_{t_0,t_1}(x_0,x_1)\) for the action of the action-minimising path from \(x_0\) to \(x_1\). So
\[S_{t_0,t_1}(x_0,x_1) = \min_{x(\cdot)} \int_{t_0}^{t_1} L\,dt\]
where we're minimising over all paths \(x(t)\) such that \(x(t_0) = x_0\) and \(x(t_1) = x_1\).
Now suppose our system evolves from time \(t_0\) to \(t_2\). We can consider this to be two stages, one from \(t_0\) to \(t_1\) followed by one from \(t_1\) to \(t_2\). Let \(S_{t_1,t_2}\) be the minimised action analogous to \(S_{t_0,t_1}\) for the period \(t_1\) to \(t_2\). The action from \(t_0\) to \(t_2\) is the sum of the actions for the two subperiods. So the minimum total action for the period \(t_0\) to \(t_2\) is given by
\[S_{t_0,t_2}(x_0,x_2) = \inf_{x_1}\big[\,S_{t_0,t_1}(x_0,x_1) + S_{t_1,t_2}(x_1,x_2)\,\big]\]
Let me simplify that a little. I'll use \(f\) where I previously used \(S_{t_0,t_1}\) and \(g\) for \(S_{t_1,t_2}\). So that last equation becomes:
\[S_{t_0,t_2}(x_0,x_2) = \inf_{x_1}\big[\,f(x_0,x_1) + g(x_1,x_2)\,\big]\]
Now suppose \(f\) is translation-independent in the sense that \(f(x_0+a,x_1+a) = f(x_0,x_1)\) for any \(a\). So we can write \(f(x_0,x_1) = F(x_1-x_0)\), and similarly \(g(x_1,x_2) = G(x_2-x_1)\). Then the minimum total action is given by
\[S_{t_0,t_2}(x_0,x_2) = \inf_{x_1}\big[\,F(x_1-x_0) + G(x_2-x_1)\,\big]\]
Infimal convolution is defined by
\[(F \square G)(x) = \inf_y\big[\,F(y) + G(x-y)\,\big]\]
so the minimum we seek is
\[S_{t_0,t_2}(x_0,x_2) = (F \square G)(x_2 - x_0)\]
So now it's natural to use the Legendre transform. We have the inf-convolution theorem:
\[(F \square G)^* = F^* + G^*\]
where \(F^*\) is the Legendre transform of \(F\) given by
\[F^*(p) = \sup_x\big[\,px - F(x)\,\big]\]
and so \(S^*_{t_0,t_2} = F^* + G^*\) (where we use \({}^*\) to represent the Legendre transform with respect to the spatial variable).
Let's consider the case where from \(t_0\) onwards the particle motion is free, so \(V = 0\). In this case we clearly have translation-invariance and so the time evolution is given by repeated inf-convolution with \(F\) and in the "Legendre domain" this is nothing other than repeated addition of \(F^*\).
Let's take a look at \(F\). We know that if a particle travels freely from \(x_0\) to \(x_1\) over the period from \(t_0\) to \(t_1\) then it must have followed the minimum action path and we know, from basic mechanics, this is the path with constant velocity. So
\[v = \frac{x_1 - x_0}{t_1 - t_0}\]
and hence the action is given by
\[F(x_1 - x_0) = \frac{m\,(x_1 - x_0)^2}{2\,(t_1 - t_0)}\]
So the time evolution of \(S\) is given by repeated inf-convolution with a quadratic function. The time evolution of \(S^*\) is therefore given by repeated addition of the Legendre transform of a quadratic function. It's not hard to prove that the Legendre transform of a quadratic function is also quadratic. In fact:
\[\text{if } F(x) = \tfrac{1}{2}a x^2 \text{ then } F^*(p) = \frac{p^2}{2a}\]
Addition is easier to work with than inf-convolution so if we wish to understand the time evolution of the action function it's natural to work with this Legendre transformed function.
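A quick numerical sanity check of the inf-convolution theorem for quadratics (a sketch, not anything from the original post; the grids and coefficients are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check of (F [] G)* = F* + G* for two quadratics; grids and
# coefficients are arbitrary illustrative choices.
x = np.linspace(-10, 10, 2001)
p = np.linspace(-3, 3, 601)

def legendre(f_vals, x, p):
    """Discrete Legendre transform  f*(p) = sup_x (p x - f(x))."""
    return np.max(np.outer(p, x) - f_vals[None, :], axis=1)

def inf_conv(f_vals, g_vals, x):
    """(f [] g)(x) = inf_y [ f(y) + g(x - y) ], g evaluated off-grid by interpolation."""
    return np.array([np.min(f_vals + np.interp(xi - x, x, g_vals)) for xi in x])

F = 0.5 * 1.0 * x**2          # a = 1
G = 0.5 * 2.0 * x**2          # a = 2
FG = inf_conv(F, G, x)        # should again be quadratic, with a = 2/3

lhs = legendre(FG, x, p)
rhs = legendre(F, x, p) + legendre(G, x, p)
print(np.max(np.abs(lhs - rhs)))          # small, up to discretisation error
print(np.max(np.abs(rhs - 3 * p**2 / 4))) # both equal p^2/2 + p^2/4 = 3 p^2 / 4
```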
So that's it for classical mechanics in this post. I've tried to look at the evolution of a classical system in a way that makes the Legendre transform natural.
Free quantum particles
Now I want to take a look at the evolution of a free quantum particle to show how similar it is to what I wrote above. In this case we have the Schrödinger equation
\[i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi\]
Let's suppose that from time \(t_0\) onwards the particle is free so \(V = 0\). Then we have
\[i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}\]
Now let's take the Fourier transform in the spatial variable. We get:
\[i\hbar\frac{\partial\hat\psi(k,t)}{\partial t} = \frac{\hbar^2 k^2}{2m}\hat\psi(k,t)\]
We can write this as
\[\hat\psi(k,t) = e^{-i\hbar k^2 (t-t_0)/2m}\,\hat\psi(k,t_0)\]
So the time evolution of the free quantum particle is given by repeated convolution with a Gaussian function which in the Fourier domain is repeated multiplication by a Gaussian. The classical section above is nothing but a tropical version of this section.
I doubt I've said anything original here. Classical mechanics is well known to be the limit of quantum mechanics as \(\hbar \to 0\) and it's well known that in this limit we find that occurrences of the semiring \((\mathbb{R}, +, \times)\) are replaced by the semiring \((\mathbb{R}\cup\{\infty\}, \min, +)\). But I've never seen an article that attempts to describe classical mechanics in terms of repeated inf-convolution even though this is close to Hamilton's formulation and I've never seen an article that shows the parallel with the Schrödinger equation in this way. I'm hoping someone will now be able to say to me "I've seen that before" and post a relevant link below.
I'm not sure how the above applies for a non-trivial potential \(V\). I wrote this little Schrödinger equation solver a while back. As might be expected, it's inconvenient to use the Fourier domain to deal with the part of the evolution due to \(V\). In order to simulate a time step the code first evolves in the Fourier domain assuming the particle is free and then solves for the \(V\)-dependent part in the spatial domain. So even in the presence of a non-trivial \(V\) it can still be useful to work with a Fourier transform. Almost the same iteration could be used to numerically compute the action for the classical case.
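For concreteness, here is a minimal split-step sketch of that idea (this is not the author's solver; the Gaussian wavepacket, the harmonic potential and all parameter values are illustrative assumptions): evolve the free part in the Fourier domain and apply the \(V\)-dependent phase in the spatial domain.

```python
import numpy as np

# Minimal split-step sketch (not the author's solver). hbar = m = 1; the packet
# and the harmonic potential are illustrative choices.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.01, 500

V = 0.5 * x**2                                  # example potential
psi = np.exp(-(x + 5)**2) * np.exp(2j * x)      # example initial wavepacket
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

free_half = np.exp(-1j * (k**2 / 2) * dt / 2)   # free evolution, half a time step
kick = np.exp(-1j * V * dt)                     # V-dependent phase, full time step

for _ in range(steps):                          # Strang splitting: free/2, V, free/2
    psi = np.fft.ifft(free_half * np.fft.fft(psi))
    psi *= kick
    psi = np.fft.ifft(free_half * np.fft.fft(psi))

print("norm after evolution:", np.sum(np.abs(psi)**2) * (L / N))  # stays ~1
```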
|
478928514b8df0ed | Monday, August 31, 2015
Evidence of ancient life discovered in mantle rocks deep below the seafloor
A physicalist who has learned his lessons sees life, evolution, the generation of the genetic code, etc. as random thermal fluctuations. Empirical facts suggest that the situation is just the opposite. The emergence of life seems to be unavoidable, but water seems to be a prerequisite for it. Now researchers have found evidence for ancient life in deep mantle rocks from about 125 million years ago. The emergence of life in the mantle is believed to involve the interaction of rocks with hydrothermal water originating from seawater and circulating in the mantle (see the illustration).
A serious objection against the successful Urey-Miller experiments as a guideline to how prebiotic life emerged is that the atmosphere was not reducing at that time (reducing means that there are atoms able to donate electrons; oxygen does just the reverse).
This objection could serve as a motivation for assuming that prebiotic life evolved in the mantle. For a detailed vision about underground prebiology see the article More Precise TGD Based View about Quantum Biology and Prebiotic Evolution. This model predicts that the Cambrian Explosion, lasting for 20-25 million years, was associated with a sudden expansion of the Earth's radius by a factor 2 about 542 million years ago.
The Expanding Earth hypothesis would be reduced in the TGD framework to the replacement of continuous cosmic expansion for astrophysical objects with a sequence of short expanding periods followed by long non-expanding periods, in accordance with the finding that astrophysical objects do not participate in cosmic expansion (see this). This sudden expansion would have led to a bursting of underground oceans onto the surface of the Earth and generated the oceans as we know them now. This prediction is consistent with the assumption that the hydrothermal water explaining the above described finding originated from seawater.
A killer prediction is that underground life would have developed photosynthesis, and the lifeforms would have been rather highly evolved as they burst onto the surface of the Earth. How this was possible is one of the questions for which an answer is proposed in the article More Precise TGD Based View about Quantum Biology and Prebiotic Evolution. The new physics predicted by TGD - in particular the hierarchy of Planck constants identified in terms of dark matter - is an essential element of the model.
Ontology-Epistemology duality?
One can however develop objections.
Sunday, August 30, 2015
Sharpening of Hawking's argument
I have already written about the latest argument of Hawking for solving the information paradox associated with black holes (see this and this).
There is now a popular article explaining the intuitive picture behind Hawking's proposal. The blackhole horizon would involve a tangential flow of light, and particles of the infalling matter would induce supertranslations on the pattern of this light, thus coding information about their properties into it. After that this light would be radiated away as an analog of Hawking radiation and carry this information out.
The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why would light rotate around it? No reason for this!
The answer in TGD would be obvious: the horizon is replaced, for the TGD analog of a blackhole, with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
The objection now is that photons of a light front should propagate in the direction normal to it, not parallel to it. The point is however that this light-like 3-surface is the surface at which the induced 4-metric becomes degenerate; hence massless particles can live on it.
Wednesday, August 26, 2015
TGD view about black holes and Hawking radiation: part II
In the second part of this posting I discuss the TGD view about blackholes and Hawking radiation. There are several new elements involved, but concerning black holes the most relevant new element is the assignment of Euclidian space-time regions as lines of generalized Feynman diagrams, implying that also blackhole interiors correspond to this kind of regions. Negentropy Maximization Principle is also an important element and predicts that the number theoretically defined black hole negentropy can only increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in quantum biology?
The basic ideas of TGD relevant for blackhole concept
My own basic strategy is to not assume anything not necessitated by experiment or not implied by general theoretical assumptions - these of course represent the subjective element. The basic assumptions/predictions of TGD relevant for the recent discussion are following.
1. Space-times are 4-surfaces in H=M4× CP2 and ordinary space-time is replaced with many-sheeted space-time. This solves what I call energy problem of GRT by lifting gravitationally broken Poincare invariance to an exact symmetry at the level of imbedding space H.
GRT type description is an approximation obtained by lumping together the space-time sheets into a single region of M4, with various fields as sums of induced fields at the space-time surface geometrized in terms of the geometry of H.
Space-time surface has both Minkowskian and Euclidian regions. Euclidian regions are identified in terms of what I call generalized Feynman/twistor diagrams. The 3-D boundaries between Euclidian and Minkowskian regions have degenerate induced 4-metric and I call them light-like orbits of partonic 2-surfaces or light-like wormhole throats analogous to blackhole horizons and actually replacing them. The interiors of blackholes are replaced with the Euclidian regions and every physical system is characterized by this kind of region.
Euclidian regions are identified as slightly deformed pieces of CP2 connecting two Minkowskian space-time regions. Partonic 2-surfaces defining their boundaries are connected to each other by magnetic flux tubes carrying monopole flux.
Wormhole contacts connect two Minkowskian space-time sheets already at elementary particle level, and appear in pairs by the conservation of the monopole flux. Flux tube can be visualized as a highly flattened square traversing along and between the space-time sheets involved. Flux tubes are accompanied by fermionic strings carrying fermion number. Fermionic strings give rise to string world sheets carrying vanishing induced em charged weak fields (otherwise em charge would not be well-defined for spinor modes). String theory in space-time surface becomes part of TGD. Fermions at the ends of strings can get entangled and entanglement can carry information.
2. Strong form of General Coordinate Invariance (GCI) states that light-like orbits of partonic 2-surfaces on one hand and space-like 3-surfaces at the ends of causal diamonds on the other hand provide equivalent descriptions of physics. The outcome is that partonic 2-surfaces and string world sheets at the ends of CD can be regarded as basic dynamical objects.
Strong form of holography states the correspondence between quantum description based on these 2-surfaces and 4-D classical space-time description, and generalizes AdS/CFT correspondence. Conformal invariance is extended to the huge super-symplectic symmetry algebra acting as isometries of WCW and having conformal structure. This explains why 10-D space-time can be replaced with ordinary space-time and 4-D Minkowski space can be replaced with partonic 2-surfaces and string world sheets. This holography looks very much like the one we are accustomed with!
3. Quantum criticality of TGD Universe fixing the value(s) of the only coupling strength of TGD (Kähler coupling strength) as analog of critical temperature. Quantum criticality is realized in terms of an infinite hierarchy of sub-algebras of the super-symplectic algebra acting as isometries of WCW, the "world of classical worlds" consisting of 3-surfaces or, by holography, preferred extremals associated with them.
Given sub-algebra is isomorphic to the entire algebra and its conformal weights are n≥ 1-multiples of those for the entire algebra. This algebra acts as conformal gauge transformations whereas the generators with conformal weights m<n act as dynamical symmetries defining an infinite hierarchy of simply laced Lie groups with rank n-1 acting as dynamical symmetry groups defined by the McKay correspondence so that the number of degrees of freedom becomes finite. This relates very closely to the inclusions of hyper-finite factors - WCW spinors provide a canonical representation for them.
This hierarchy corresponds to a hierarchy of effective Planck constants heff=n× h defining an infinite number of phases identified as dark matter. For these phases the Compton length and time are scaled up by n so that they give rise to macroscopic quantum phases. Super-conductivity is one example of this kind of phase - the charge carriers could be dark variants of ordinary electrons. Dark matter appears at quantum criticality, and this serves as an experimental way to produce dark matter. In living matter dark matter identified in this manner would play a central role. Magnetic bodies carrying dark matter at their flux tubes would control ordinary matter and carry information.
4. I started the work with the hierarchy of Planck constants from the proposal of Nottale stating that it makes sense to talk about gravitational Planck constant hgr=GMm/v0, v0/c≤ 1 (the interpretation of symbols should be obvious). Nottale found that the orbits of inner and outer planets could be modelled reasonably well by applying Bohr quantization to planetary orbits, with the value of the velocity parameter differing by a factor 1/5 for the two cases. In TGD framework hgr would be associated with magnetic flux tubes mediating gravitational interaction between the Sun with mass M and a planet or any object, say an elementary particle, with mass m. The matter at the flux tubes would be dark, as would also be the gravitons involved. The Compton length of a particle would be given by GM/v0 and would not depend on the mass of the particle at all (a numerical sketch for the Sun is given after this list).
The identification hgr=heff is an additional hypothesis motivated by quantum biology, in particular the identification of biophotons as decay products of dark photons satisfying this condition. As a matter of fact, one can talk also about hem assignable to electromagnetic interactions: its values are much lower. The hypothesis is that when the perturbative expansion for a two-particle system does not converge anymore, a phase transition increasing the value of the Planck constant occurs and guarantees that the coupling strength proportional to 1/heff decreases. This is one possible interpretation for quantum criticality. TGD provides a detailed geometric interpretation for the space-time correlates of quantum criticality.
Macroscopic gravitational bound states are not possible in TGD without the assumption that the effective string tension associated with fermionic strings, dictated by strong form of holography, is proportional to 1/heff^2. Otherwise the bound states would have size scale of order Planck length, since for longer systems the string energy would be huge. heff=hgr makes quantum coherence in astrophysical scales unavoidable. Ordinary matter is condensed around dark matter. The counterparts of blackholes would be systems consisting of dark matter only.
5. Zero energy ontology (ZEO) is a central element of TGD. There are many motivations for it. For instance, Poincare invariance in the standard sense fails in standard cosmology, where energy is not conserved. The interpretation is that various conserved quantum numbers are length scale dependent notions.
Physical states are zero energy states with positive and negative energy parts assigned to the ends of space-time surfaces at the light-like boundaries of causal diamonds (CDs). CD is defined as the Cartesian product of CP2 with the intersection of future and past directed lightcones of M4. CDs form a fractal length scale hierarchy. CD defines the region about which a single conscious entity can have conscious information, a kind of 4-D perceptive field. There is a hierarchy of WCWs associated with CDs. Consciously experienced physics is always in the scale of a given CD.
Zero energy states identified as formally purely classical WCW spinor fields replace positive energy states and are analogous to pairs of initial and final states, and the crossing symmetry of quantum field theories gives the mathematical motivation for their introduction.
6. Quantum measurement theory can be seen as a theory of consciousness in ZEO. The conscious observer, or self, becomes part of physics. ZEO gives up the assumption about a unique universe of classical physics and restricts the classical universe to the perceptive field defined by CD.
In each quantum jump a re-creation of the Universe occurs. Subjectively experienced time corresponds to state function reductions at the fixed, passive boundary of CD, leaving both the boundary and the state at it invariant. The state at the opposite, active boundary changes and also its position changes, so that the CD increases reduction by reduction while nothing happens to the passive boundary. This gives rise to the experienced flow of geometric time, since the distance between the tips of CD increases and the size of the space-time surfaces in the quantum superposition increases. This sequence of state function reductions is the counterpart for the unitary time evolution of ordinary quantum theory.
Self "dies" as the first state function reduction to the opposite boundary of CD meaning re-incarnation of self at it and a reversal of the arrow of geometric time occurs: CD size increases now in opposite time direction as the opposite boundary of CD recedes to the geometric past reduction by reduction.
Negentropy Maximization Principle (NMP) defines the variational principle of state function reduction. The density matrix of the subsystem is the universal observable and the state function reduction leads to its eigenspaces - eigenspaces, not only eigenstates as in the standard theory.
Number theoretic entropy makes sense for the algebraic extensions of rationals and can be negative, unlike ordinary entanglement entropy. NMP can therefore lead to a generation of negentropic entanglement (NE) if the entanglement corresponds to a unitary entanglement matrix so that the density matrix of the final state is a higher-D unit matrix. Another possibility is that the entanglement matrix is algebraic but its diagonalization in the algebraic extension of rationals used is not possible. This is expected to reduce the rate for the reduction, since a phase transition increasing the size of the extension is needed.
The weak form of NMP does not demand that the negentropy gain is maximal: this allows the conscious entity responsible for the reduction to decide whether to increase the NE resources of the Universe maximally or not. It can also allow a larger NE increase than otherwise. This freedom brings in the quantum correlates of ethics, morality, and good and evil. The p-adic length scale hypothesis and the existence of preferred p-adic primes follow from the weak form of NMP, and one ends up naturally with adelic physics.
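A back-of-the-envelope numerical sketch of the gravitational Compton length GM/v0 mentioned in point 4 is easy to write down. The value of v0 used below is an assumption not fixed by the text: Nottale's fit for the inner planets gives v0 of the order of 1.45×10^5 m/s, and the formula is written here with c restored.

    # Gravitational Compton length GM/(v0*c) for the Sun - a sketch, not a definitive value.
    # The velocity parameter v0 is an assumed Nottale-type input, not given in the text above.
    G = 6.674e-11          # m^3 kg^-1 s^-2
    c = 2.998e8            # m/s
    M_sun = 1.99e30        # kg
    v0 = 1.45e5            # m/s, assumed velocity parameter
    Lambda_gr = G * M_sun / (v0 * c)
    print(Lambda_gr)       # ~3e6 m, the same for every particle independently of its mass m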
The analogs of blackholes in TGD
Could blackholes have any analog in TGD? What about Hawking radiation? The following speculations are inspired by the above general vision.
1. Ordinary blackhole solutions are not appropriate in TGD. The interior space-time sheet of any physical object is replaced with an Euclidian space-time region - also that of a blackhole, by a perturbation argument based on the observation that if one requires the radial component of the blackhole metric to be finite, the horizon becomes a light-like 3-surface analogous to the light-like orbit of a partonic 2-surface and the metric in the interior becomes Euclidian.
2. The analog of blackhole can be seen as a limiting case of an ordinary astrophysical object, which already has blackhole like properties due to the presence of heff=n× h dark matter particles, which cannot appear in the same vertices with visible matter. The ideal analog of blackhole consists of dark matter only, and is assumed to satisfy the condition hgr=heff already discussed. It corresponds to a region with radius equal to the Compton length of an arbitrary particle, R=GM/v0=rS/2v0, where rS is the Schwarzschild radius. A macroscopic quantum phase is in question since the Compton radius of a particle does not depend on its mass. The blackhole limit would correspond to v0/c→ 1 and dark matter dominance. This would give R=rS/2. The naive expectation would be R=rS (maybe a factor of two is missing somewhere: blame me!).
3. NMP implies that information cannot be lost in the formation of a blackhole like state but tends to increase. Matter becomes totally dark and the NE with the partonic 2-surfaces of the external world is preserved or increases. The ingoing matter does not fall to a mass point but resides at the partonic 2-surface, which can have arbitrarily large area. It can also have wormholes connecting different regions of a spherical surface and in this manner increase its genus. NMP, negentropy, and negentropic entanglement between heff=n× h dark matter systems would become the basic notions instead of the second law and entropy.
4. In GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. So why would light rotate around it? There is no reason for this! The answer in TGD would be obvious: for the TGD analog of blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
5. The replacement of the second law with NMP leads one to ask whether a generalization of blackhole thermodynamics makes sense in the TGD Universe. Since blackhole thermodynamics characterizes Hawking radiation, the generalization could make sense at least if there exists an analog of Hawking radiation. Note that also a geometric variant of the second law makes sense.
Could the analog of Hawking radiation be generated in the first state function reduction to the opposite boundary, and perhaps be assigned with the sudden increase of the radius of the partonic 2-surface defining the horizon? Could this burst release the energy compensating for the generation of gravitational binding energy? The burst would however have a totally different interpretation: even gamma ray bursts from quasars could be considered as candidates for it, and the temperature would be totally different from the extremely low general relativistic Hawking temperature of order
TGR = hbar/(8π GM) ,
which corresponds to an energy assignable to a wavelength equal to 4π times the Schwarzschild radius. For the Sun with Schwarzschild radius rS=2GM=3 km one has TGR= 3.2× 10^-11 eV.
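As a quick sanity check of the quoted number, one can evaluate the energy of a wavelength equal to 4π times the Schwarzschild radius, which is the convention stated above (a sketch with rounded constants):

    # Hawking temperature of the Sun in the convention E = h*c/lambda with lambda = 4*pi*r_S.
    import math
    hc_eV_m = 1.23984e-6        # h*c in eV*m
    r_S = 3.0e3                 # Schwarzschild radius of the Sun in meters
    T_GR = hc_eV_m / (4 * math.pi * r_S)
    print(T_GR)                 # ~3.3e-11 eV, consistent with the 3.2e-11 eV quoted above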
One can of course have fun with formulas to see whether the generalization of blackhole thermodynamics assuming the replacement h→ hgr could make sense physically. Also the replacement rS→ R, where R is the real radius of the star, will be made.
1. The blackhole temperature can be formally identified in terms of the surface gravity
T = (hgr/hbar) × GM/(2π R^2) = (hgr/h) × (rS^2/R^2) × TGR = [1/(4π v0)] × (rS^2/R^2) × m .
For the Sun with radius R= 6.96× 10^5 km one has T/m= 3.2× 10^-11, giving T= 3× 10^-2 eV for the proton (a numerical sketch evaluating these formulas is given after this list). This is 9 orders of magnitude higher than the ordinary Hawking temperature. Amazingly, this temperature equals room temperature! Is this a mere accident? If one takes seriously TGD inspired quantum biology, in which quantum gravity plays a key role (see this), this does not seem to be the case. Note that for the electron the temperature would correspond to an energy 1.5× 10^-5 eV which corresponds to a 4.5 GHz frequency for the ordinary Planck constant.
It must however be made clear that the value of v0 for dark matter could differ from that deduced assuming that the entire gravitational mass is dark. For M→ MD= kM and v0→ k^(1/2)v0 the orbital radii remain unchanged but the velocity of the dark matter object at the orbit scales to k^(1/2)v0. This kind of scaling is suggested by the fact that the value of hgr seems to be too large as compared with the value deduced from the identification of biophotons as decay products of dark photons with heff=hgr (some arguments suggest the value k≈ 2× 10^-4).
Note that for the radius R=[rS/2v0π] the thermal energy exceeds the rest mass of the particle. For neutron stars this limit might be achieved.
2. Blackhole entropy
SGR = A/(4 hbar G) = 4π GM^2/hbar = 4π (M/MPl)^2
would be replaced with the negentropy for dark matter, which makes sense also for systems containing both dark and ordinary matter. The negentropy N(m) associated with a flux tube of a given type would be a fraction h/hgr of the total area of the horizon using the Planck area as a unit:
N(m) = (h/hgr) × A/(4 hbar G) = (h/hgr) × (R^2/rS^2) × SGR = v0 × (M/m) × (R^2/rS^2) .
The dependence on m makes sense since a given flux tube type, characterized by the mass m determining the corresponding value of hgr, has its own negentropy, and the total negentropy is the sum over the particle species. The negentropy of the Sun is numerically much smaller than the corresponding blackhole entropy.
3. The horizon area is proportional to (GM/v0)^2 ∝ heff^2 and should increase in discrete jumps by integer scalings, being proportional to n^2.
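The formulas of points 1 and 2 are easy to evaluate numerically for the Sun. The sketch below treats v0 as a free input; the value used is not given in the text but is the one that reproduces the quoted room-temperature estimate under the reconstruction above.

    import math

    # Dark "Hawking" temperature and dark negentropy for the Sun - a sketch under assumptions.
    r_S, R = 3.0e3, 6.96e8                 # Schwarzschild radius and radius of the Sun (m)
    m_p_eV = 9.38e8                        # proton rest energy (eV)
    M_sun, m_p_kg = 1.99e30, 1.67e-27      # masses (kg)
    M_Pl = 2.18e-8                         # Planck mass (kg)

    v0 = 0.046                             # assumed velocity parameter (not from the text)
    T = (r_S / R)**2 * m_p_eV / (4 * math.pi * v0)   # temperature for the proton, in eV
    N = v0 * (M_sun / m_p_kg) * (R / r_S)**2         # negentropy of the proton flux tubes
    S_GR = 4 * math.pi * (M_sun / M_Pl)**2           # ordinary blackhole entropy, solar mass

    print(T)          # ~3e-2 eV, i.e. roughly room temperature
    print(N, S_GR)    # N ~ 1e66 is indeed much smaller than S_GR ~ 1e77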
How does the analog of blackhole evolve in time? The evolution consists of sequences of repeated state function reductions at the passive boundary of CD, each followed by a first reduction to the opposite boundary of CD and then by a similar sequence. These sequences are analogs of unitary time evolutions. This defines the analog of a blackhole state as a repeatedly re-incarnating conscious entity with a CD whose size increases gradually. During a given sequence of state function reductions the passive boundary has constant size. About the active boundary one cannot say this since it corresponds to a superposition of quantum states.
The reduction sequences consist of life cycles at a fixed boundary, and the size of the blackhole like state - as that of any state - is expected to increase in discrete steps if it participates in cosmic expansion in an average sense. This requires that the mass of the blackhole like object gradually increases. The interpretation is that ordinary matter gradually transforms to dark matter and increases the dark mass M= R/G.
Cosmic expansion is not observed for the sizes of individual astrophysical objects, which only co-move. The solution of the paradox is that they suddenly increase their size in state function reductions. This hypothesis allows one to realize the Expanding Earth hypothesis in TGD framework (see this). Number theoretically preferred scalings of the blackhole radius come as powers of 2, and this would be the scaling associated with the Expanding Earth hypothesis.
See the chapter "Criticality and dark matter" or the article TGD view about black holes and Hawking radiation.
TGD view about blackholes and Hawking radiation: part I
The most recent revelation of Hawking was at the Hawking radiation conference held at the KTH Royal Institute of Technology in Stockholm. The title of the posting of Bee telling about what might have been revealed is "Hawking proposes new idea for how information might escape from black holes". Also Lubos has - a rather aggressive - blog post about the talk. A collaboration of Hawking, Andrew Strominger and Malcolm Perry is behind the claim, and the work should be published within a few months.
The first part of the posting gives a critical discussion of the existing approach to black holes and Hawking radiation. The intention is to demonstrate that a pseudo problem, following from the failure of General Relativity below the black hole horizon, is in question.
Is information lost or not in blackhole collapse?
The basic problem is that classically the collapse to a blackhole seems to destroy all information about the matter collapsing to the blackhole. The outcome is just an infinitely dense mass point. There is also a theorem of classical GRT stating that a blackhole has no hair: a blackhole is characterized by only a few conserved charges.
Hawking has predicted that a blackhole loses its mass by generating radiation, which looks thermal. As the blackhole radiates its mass away, all information about the material which entered the blackhole seems to be lost. If one believes in standard quantum theory and unitary evolution preserving the information, and also forgets the standard quantum theory's prediction that state function reductions destroy information, one has a problem. Does the information really disappear? Or is the GRT description incapable of coping with the situation? Could information find a new representation?
Superstring models and AdS/CFT correspondence have inspired the proposal that a hologram results at the horizon and this hologram somehow catches the information by defining the hair of the blackhole. Since the radius of horizon is proportional to the mass of blackhole, one can however wonder what happens to this information as the radius shrinks to zero when all mass is Hawking radiated out.
What Hawking suggests is that a new kind of symmetry known as super-translations - a notion originally introduced by Bondi and Metzner - could somehow save the situation. Andrew Strominger has recently discussed the notion. The information would be "stored in super-translations". Unfortunately this statement says nothing to me, nor did it say anything to Bee or to the New Scientist reporter. The idea however seems to be that the information carried by Hawking radiation emanating from the blackhole interior would be caught by the hologram defined by the blackhole horizon.
Super-translation symmetry acts at the surface of a sphere with infinite radius in asymptotically flat space-times looking like empty Minkowski space in very distant regions. The action would be translations along the sphere plus Poincare transformations.
What comes to mind in TGD framework are the conformal transformations of the boundary of the 4-D lightcone, which act as scalings of the radius of the sphere and as conformal transformations of the sphere. Translations however translate the tip of the light-cone, and Lorentz transformations transform the sphere to an ellipsoid, so that one should restrict to the rotation subgroup of the Lorentz group. Besides this TGD allows a huge group of symplectic transformations of δCD× CP2 acting as isometries of WCW and having the structure of a conformal algebra with generators labelled by conformal weights.
Sharpening of the argument of Hawking
The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why would light rotate around it? I see no reason for this! The answer in TGD framework would be obvious: for the TGD analog of blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
What are the problems?
My fate is to be an aggressive dissident listened to by no-one, and I find it natural to continue in the role of an angry old man. Be cautious, I am arrogant, I can bite, and my bite is poisonous!
1. With all due respect to the Big Guys, to me the problem looks like a pseudo problem caused basically by the breakdown of classical GRT. Irrespective of whether Hawking radiation is generated, the information about matter (apart from mass and some charges) is lost if the matter indeed collapses to a single infinitely dense point. This is of course very unrealistic, and the question should be: how should we proceed beyond GRT?
The blackhole is simply too strong an idealization, and it is no wonder that Hawking's calculation using the blackhole metric as a background gives rise to blackbody radiation. One might hope that Hawking radiation is a genuine physical phenomenon and might somehow carry the information by not being genuinely thermal radiation. Here a theory of quantum gravitation might help. But we do not have it!
2. What do we know about blackholes? We know that there are objects which can be well described by the exterior Schwarzschild metric. Galactic centers are regarded as candidates for giant blackholes. Binary systems for which the other member is invisible are candidates for stellar blackholes. One can however ask whether these candidates actually consist of dark matter rather than being blackholes. Unfortunately, we do not understand what dark matter is!
3. Hawking radiation is extremely weak and there is no experimental evidence pro or con. Its existence assumes the existence of a blackhole, which presumably represents the failure of classical GRT. Therefore we might be going through a lot of trouble and inspired heated debates about something which does not exist at all! This includes blackholes, Hawking radiation, and various problems such as the firewall paradox.
There are also profound theoretical problems.
1. Contrary to the intensive media hype during the last three decades, we still do not have a generally accepted theory of quantum gravity. Super string models and M-theory failed to predict anything at the fundamental level, and just postulate an effective quantum field theory limit, which assumes the analog of GRT at the level of the 10-D or 11-D target space to define spontaneous compactification as a solution of this GRT type theory. Not much is gained.
AdS/CFT correspondence is an attempt to do something in the absence of this kind of theory, but it involves 10- or 11-D blackholes and does not help much. Reality looks much simpler to an innocent non-academic outsider like me. Effective field theorizing allows intellectual laziness, and many problems of present-day physics will probably be seen in the future as being caused by this lazy approach, which avoids attempts to build explicit bridges between physics at different scales. Something very similar has occurred in hadron physics and nuclear physics, and one has a kind of Augean stables to clean up before one can proceed.
2. A mathematically well-defined notion of information is lacking. We can talk about thermodynamical entropy - a single particle observable - and also about entanglement entropy - basically a 2-particle observable. We do not have a genuine notion of information, and the second law predicts that the best one can achieve is no information at all!
Could it be that our view about information as a single particle characteristic is wrong? Could information be associated with entanglement and be a 2-particle characteristic? Could information reside in the relationship of the object with the external world, in the communication line? Not inside the blackhole, not at the horizon, but in the entanglement of the blackhole with the external world.
3. We do not have a theory of quantum measurement. The deterministic unitary time evolution of the Schrödinger equation and the non-deterministic state function reduction are in blatant conflict. The Copenhagen interpretation escapes the problem by saying that no objective reality/realities exist. Easy trick once again! A closely related Pandora's box is that experienced time and geometric time are very different, but we pretend that this is not the case.
The only way out is to make the observer part of quantum physics: this requires nothing less than a quantum theory of consciousness. But the gurus of theoretical physics have shown no interest in consciousness. It is much easier and much more impressive to apply mechanical algorithms to produce complex formulas. If one takes consciousness seriously, one ends up with the question about the variational principle of consciousness. Yes, your guess was correct! Negentropy Maximization Principle! Conscious experience tends to maximize the conscious information gain. But how is information represented?
In the second part I will discuss TGD view about blackholes and Hawking radiation.
Tuesday, August 25, 2015
Field equations as conservation laws, Frobenius integrability conditions, and a connection with quaternion analyticity
The following represents a qualitative picture of the field equations of TGD, trying to emphasize the physical aspects. What is new is the discussion of the possibility that Frobenius integrability conditions are satisfied and correspond to quaternion analyticity.
1. Kähler action is Maxwell action for induced Kähler form and metric expressible in terms of imbedding space coordinates and their gradients. Field equations reduce to those for imbedding space coordinates defining the primary dynamical variables. By GCI only four of them are independent dynamical variables analogous to classical fields.
2. The solution of the field equations can be interpreted as a section of a fiber bundle. In TGD the fiber bundle is just the Cartesian product X4× CD× CP2 of the space-time surface X4 and the causal diamond CD× CP2. CD is the intersection of future and past directed light-cones having two light-like boundaries, which are cone-like pieces of the light-cone boundary δM4+/-× CP2. The space-time surface serves as the base space and CD× CP2 as the fiber. The bundle projection Π is the projection to the factor X4. The section corresponds to the map x→ hk(x) giving the imbedding space coordinates as functions of the space-time coordinates. The bundle structure is now trivial and rather formal.
By GCI one could also take 4 suitably chosen coordinates of CD× CP2 as space-time coordinates, and identify CD× CP2 as the fiber bundle. The choice of the base space depends on the character of the space-time surface. For instance CD, CP2, or M2× S2 (S2 a geodesic sphere of CP2) could define the base space. The bundle projection would be the projection from CD× CP2 to the base space. Now the fiber bundle structure can be non-trivial and may make sense only in some space-time region with the same base space.
3. The field equations derived from Kähler action must be satisfied. Even more: one must have a preferred extremal of Kähler action. One poses boundary conditions at the 3-D ends of space-time surfaces and at the light-like boundaries of CD× CP2.
One can fix the values of the conserved Noether charges at the ends of CD (the total charges are the same at both ends) and require that the Noether charges associated with a sub-algebra of the super-symplectic algebra isomorphic to it, with conformal weights coming as n-multiples of those of the entire algebra, vanish. This would realize the effective 2-dimensionality required by strong form of holography (SH). One must pose boundary conditions also at the light-like partonic orbits. The so-called weak form of electric-magnetic duality is at least part of these boundary conditions.
It seems that one must restrict the conformal weights of the entire algebra to be non-negative, m ≥ 0, and those of the sub-algebra to be positive multiples of n: m = kn > 0. The condition that also the commutators of sub-algebra generators with those of the entire algebra give rise to vanishing Noether charges implies that all algebra generators with conformal weight m≥ n effectively vanish, so that the dynamical algebra becomes effectively finite-dimensional. This condition generalizes to the action of super-symplectic algebra generators on physical states.
The M4 time coordinate cannot have vanishing time derivative dm0/dt, so that four-momentum is non-vanishing for non-vacuum extremals. For CP2 coordinates the time derivatives dsk/dt can vanish, and for space-like Minkowski coordinates dmi/dt can be assumed to be non-vanishing if the M4 projection is 4-dimensional. For CP2 coordinates dsk/dt=0 implies the vanishing of the electric parts of the induced gauge fields. The non-vacuum extremals with the largest conformal gauge symmetry (very small n) would correspond to cosmic string solutions for which the induced gauge fields have only magnetic parts. As n increases, also electric parts are generated. The situation becomes increasingly dynamical as the conformal gauge symmetry is reduced and the dynamical conformal symmetry increases.
4. The field equations involve besides the imbedding space coordinates hk also their partial derivatives up to second order. The induced Kähler form and metric involve the first partial derivatives ∂αhk, and the second fundamental form appearing in the field equations involves the second order partial derivatives ∂α∂βhk.
The field equations are hydrodynamical, in other words they represent conservation laws for the Noether currents associated with the isometries of M4× CP2. By GCI there are only 4 independent dynamical variables, so that the conservation of m≤ 4 isometry currents, chosen to be independent, is enough. The dimension m of the tangent space spanned by the conserved currents can be smaller than 4. For vacuum extremals one has m= 0 and for massless extremals (MEs) m= 1! The conservation of these currents can also be interpreted as the existence of m≤ 4 closed 3-forms defined by the duals of these currents.
5. The hydrodynamical picture suggests that in some situations it might be possible to assign flow lines to the conserved currents even globally. They would define m≤ 4 global coordinates for some subset of the conserved currents (4+8 currents for four-momentum and color quantum numbers). Without additional conditions the individual flow lines are well-defined but do not organize into a coherent hydrodynamic flow: they are more like the orbits of randomly moving gas particles. To achieve a global flow the flow lines must satisfy the condition dφA/dxμ= kABJBμ, or dφA= kABJB, so that one can speak of a 3-D family of flow lines parallel to kABJB at each point - I have considered this kind of possibility in detail earlier but the treatment was not as general as in the recent case.
Frobenius integrability conditions follow from the condition d^2φA = dkAB∧ JB + kAB dJB = 0 and imply that dJB is in the ideal of the exterior algebra generated by the JA appearing in kABJB. If the Frobenius conditions are satisfied, the field equations can define coordinates for which the coordinate lines are along the basis elements for a sub-space of the at most 4-D space defined by the conserved currents. Of course, the possibility that for preferred extremals there exist m≤ 4 conserved currents satisfying the integrability conditions is only a conjecture.
It is quite possible to have m<4. For instance, for vacuum extremals the currents vanish identically. For MEs the various currents are parallel and light-like, so that only a single light-like coordinate can be defined globally from the flow lines. For cosmic strings (Cartesian products of minimal surfaces X2 in M4 and geodesic spheres S2 in CP2) 4 independent currents exist. This is expected to be true also for the deformations of cosmic strings defining magnetic flux tubes.
6. Cauchy-Riemann conditions in the 2-D situation represent a special case of the Frobenius conditions. Now the gradients of the real and imaginary parts of an analytic function w=w(z)= u+iv define two conserved currents, since u and v satisfy the Laplace equation (a small symbolic check is given after this list). In TGD the isometry currents would be gradients apart from scalar function multipliers, and one would have a generalization of the C-R conditions. I have earlier considered the possibility that a generalization of the Cauchy-Riemann-Fueter conditions could define quaternion analyticity - having many non-equivalent variants - as a defining property of preferred extremals. The integrability conditions for the isometry currents would be the natural physical formulation of the CRF conditions. Different variants of the CRF conditions would correspond to a varying number of independent conserved isometry currents.
7. This picture allows one to consider a generalization of the notion of a solution of the field equations to that of an integral manifold. If the number of independent isometry currents is smaller than 4 (possibly locally) and the integrability conditions hold true, lower-dimensional sub-manifolds of the space-time surface define integral manifolds as a kind of lower-dimensional effective solutions. Genuinely lower-dimensional solutions would of course have a vanishing metric determinant (g4)^(1/2) and vanishing Kähler action.
String world sheets can be regarded as 2-D integral surfaces. Charged (possibly all) weak boson gauge fields vanish at them, since otherwise the electromagnetic charge for spinors would not be well-defined. These conditions force string world sheets to be 2-D in the generic case. In a special case a 4-D space-time region as a whole can satisfy these conditions. Well-definedness of the Kähler-Dirac equation demands that the isometry currents of Kähler action flow along these string world sheets, so that one has an integral manifold. The integrability conditions would allow 2< m≤ 4 integrable flows outside the string world sheets, and at the string world sheets one or two isometry currents would vanish so that the flows would give rise to an independent 2-D sub-flow.
8. The method of characteristics is used to solve hyperbolic partial differential equations by reducing them to ordinary differential equations. The (say 4-D) surface representing the solution in the field space has a foliation using 1-D characteristics. The method is especially simple for linear equations but can work also in the non-linear case. For instance, the expansion of wave front can be described in terms of characteristics representing light rays. It can happen that two characteristics intersect and a singularity results. This gives rise to physical phenomena like caustics and shock waves.
In TGD framework the flow lines for a given isometry current in the case of an integrable flow would be analogous to characteristics, and one could also have purely geometric counterparts of shock waves and caustics. The light-like orbits of partonic 2-surfaces at which the signature of the induced metric changes from Minkowskian to Euclidian might be seen as an example of the analog of a wave front in the induced geometry. These surfaces serve as carriers of fermion lines in generalized Feynman diagrams. Could one see the particle vertices at which the 4-D space-time surfaces intersect along their ends as analogs of intersections of characteristics - a kind of caustics? At these 3-surfaces the isometry currents should be continuous although the space-time surface has an "edge".
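The 2-D special case of point 6 can be checked symbolically: for an analytic function w(z) = u+iv the gradients of u and v define two divergence-free ("conserved") currents, which are moreover orthogonal by the Cauchy-Riemann conditions. The function z^3 below is just an arbitrary example.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    w = sp.expand((x + sp.I*y)**3)         # an arbitrary analytic function w(z) = z^3
    u, v = sp.re(w), sp.im(w)

    Ju = (sp.diff(u, x), sp.diff(u, y))    # current defined by the gradient of u
    Jv = (sp.diff(v, x), sp.diff(v, y))    # current defined by the gradient of v

    div_Ju = sp.simplify(sp.diff(Ju[0], x) + sp.diff(Ju[1], y))   # vanishes: u is harmonic
    div_Jv = sp.simplify(sp.diff(Jv[0], x) + sp.diff(Jv[1], y))   # vanishes: v is harmonic
    orthogonality = sp.simplify(Ju[0]*Jv[0] + Ju[1]*Jv[1])        # vanishes by Cauchy-Riemann
    print(div_Ju, div_Jv, orthogonality)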
For details see the chapter Recent View about Kähler Geometry and Spin Structure of "World of Classical Worlds" of "Quantum physics as infinite-dimensional geometry" or the article Could One Define Dynamical Homotopy Groups in WCW?.
Saturday, August 22, 2015
Does color deconfinement really occur?
Bee had a nice blog posting related to the origin of hadron masses and the phase transition from color confinement to quark-gluon plasma, which involves also the restoration of chiral symmetry in the sigma model description. In the ideal situation the outcome of the transition should be a black body spectrum with no correlations between the radiated particles.
This is however not what is observed. Some kind of transition occurs and produces a phase which has a much lower viscosity than expected for a quark-gluon plasma. The transition also occurs in a much smoother manner than expected. And there are strong correlations between oppositely charged particles - charge separation occurs. The simplest characterization for these events would be in terms of decaying strings emitting particles of opposite charge from their ends. Conventional models do not predict anything like this.
Some background
The masses of current quarks are very small - something like 5-20 MeV for u and d. These masses explain only a minor fraction of the mass of the proton. The old fashioned quark model assumed that quark masses are much bigger: the mass scale was roughly one third of the nucleon mass. These quarks were called constituent quarks and - if they are real - one can wonder how they relate to current quarks.
The sigma model provides a phenomenological description of the massivation of hadrons in the confined phase. The model is highly analogous to the Higgs model. The fields are meson fields and baryon fields. Now the neutral pion and the sigma meson develop vacuum expectation values, and this implies breaking of chiral symmetry so that nucleons become massive. The existence of the sigma meson is still questionable.
In a transition to quark-gluon plasma one expects that mesons and protons disappear totally. The sigma model however suggests that the pion and the proton do not disappear but become massless. Hence the two descriptions might be inconsistent.
The authors of the article assume that the pion continues to exist as a massless particle in the transition to quark gluon plasma. The presence of massless pions would yield a small effect at low energies, at which massless pions have a stronger interaction with the magnetic field than massive ones. The existence of a magnetic wave coherent in a rather large length scale is an additional assumption of the model: it corresponds to the assumption about large heff in TGD framework, where the color magnetic fields associated with M89 meson flux tubes replace the magnetic wave.
In TGD framework the sigma model description is at best a phenomenological description, as is also the Higgs mechanism. p-Adic thermodynamics replaces the Higgs mechanism, and the massivation of hadrons involves color magnetic flux tubes connecting valence quarks to color singlets. The flux tubes have a quark and an antiquark at their ends and are meson-like in this sense. Color magnetic energy contributes most of the mass of the hadron. A constituent quark would correspond to a valence quark identified as a current quark plus the associated flux tube, and its mass would be in good approximation the mass of the color magnetic flux tube.
There is also an analogy with the sigma model provided by twistorialization in the TGD sense. One can assign to a hadron (actually any particle) a light-like 8-momentum vector in the tangent space M8=M4× E4 of M4× CP2 defining the 8-momentum space. Masslessness in the 8-D sense implies that the ordinary mass squared corresponds to a constant E4 mass, which translates to a localization to a 3-sphere in E4. This localization is analogous to the symmetry breaking generating a constant value of the π0 field proportional to its mass in the sigma model.
An attempt to understand charge asymmetries in terms of charged magnetic wave and charge separation
One of the models trying to explain the charge asymmetries is formulated in terms of what is called the charged magnetic wave effect and the charge separation effect related to it. The experiment discussed by Bee attempts to test this model.
1. The so-called chiral magnetic wave effect and charge separation effect are proposed as an explanation for the linear dependence of the asymmetry of the so-called elliptic flow on the charge asymmetry. Conventional models explain neither the charge separation nor this dependence. The chiral magnetic wave would be a coherent magnetic field generated by the colliding nuclei in a relatively long scale, even the length scale of the nuclei.
2. Charged pions interact with this magnetic field. The interaction energy is roughly h× eB/E, where E is the energy of the pion. In the phase with broken chiral symmetry the pion mass is non-vanishing and at low energies one has E=m in good approximation. In the chirally symmetric phase the pion is massless and the magnetic interaction energy becomes large at low energies. This could serve as a signature distinguishing between the chirally symmetric and asymmetric phases.
3. The experimenters try to detect this difference and report slight evidence for it. This is a change of the charge asymmetry of the so-called elliptic flow for positively and negatively charged pions, interpreted in terms of a charge separation fluctuation caused by the presence of a strong magnetic field assumed to lead to a separation of chiral charges (left/right handedness). The average velocities of the pions are different, and the average velocity depends on the azimuthal angle in the collision plane: a second harmonic is in question (say sin(2φ)).
In TGD framework the explanation of the unexpected behavior of the would-be quark-gluon plasma is in terms of M89 hadron physics.
1. A phase transition indeed occurs, but it is a phase transition transforming the quarks of the ordinary M107 hadron physics to those of M89 hadron physics. They are not free quarks but are confined to form M89 mesons. The M89 pion would have a mass of about 135 GeV. A naive scaling gives half of this mass, but it seems unfeasible that a pion-like state with this mass could have escaped attention - unless of course the unexpected behavior of the quark gluon plasma demonstrates its existence! This should be easy for a professional to check. Thus the phase transition would yield a scaled-up hadron physics with a mass scale by a factor 512 higher than for ordinary hadron physics.
2. A stringy description applies to the decay of the flux tubes assignable to the M89 mesons to ordinary hadrons. This explains the charge separation effect and the deviation from the thermal spectrum.
3. In the experiments discussed in the article the cm energy for the nucleon-nucleon system associated with the colliding nuclei varied in the range 27-200 GeV, so that the creation of even an on-mass-shell M89 pion in a single collision of this kind is possible at the highest energies. If several nucleons participate simultaneously, even many-pion states are possible at the upper end of the interval.
4. These hadrons must have large heff=n× h since the collision time is roughly 5 femtoseconds, by a factor of about 500 (not far from 512!) longer than the time scale associated with their masses, if the M89 pion has the proposed mass of about 135 GeV corresponding to the ordinary pion mass of 135 MeV scaled up by a factor 2× 512 instead of 512, which is in principle allowed by the p-adic length scale hypothesis (the simple scaling arithmetic is spelled out after this list). There are some indications for a meson with this mass. The hierarchy of Planck constants allows at quantum criticality to zoom up the size of the much more massive M89 hadrons to nuclear size! The phase transition to dark M89 hadron physics could take place in the scale of the nucleus, producing several M89 pions decaying to ordinary hadrons.
5. The large value of heff would mean quantum coherence in the scale of the nucleus, explaining why the value of the viscosity was much smaller than expected for quark gluon plasma. The phase transition was also much smoother than expected. Since nuclei are many-nucleon systems and the Compton wavelength of the M89 pion would be of the order of the nucleus size, one expects that the phase transition can take place in a wide collision energy range. At lower energies several nucleon pairs could provide the energy to generate the M89 pion. At higher energies even a single nucleon pair could provide the energy. The number of M89 pions should therefore increase with the nucleon-nucleon collision energy, and induce an increase of the charge asymmetry and of the strength of the charge asymmetry of the elliptic flow.
6. Hydrodynamical behavior is essential in order to have low viscosity classically. Even more, the hydrodynamics had better be that of an ideal liquid. In TGD framework the field equations have hydrodynamic character as conservation laws for the currents associated with the various isometries of the imbedding space. The isometry currents define flow lines. Without further conditions the flow lines do not however integrate to a coherent flow: one has something analogous to a gas phase rather than a liquid, so that the mixing induced by the flow cannot be described by a smooth map.
To achieve this, a given isometry flow must make sense globally - that is, it must define the coordinate lines of a globally defined coordinate ("time" along the flow lines). In this case one can assign to the flow a continuous phase factor as an order parameter varying along the flow lines. Super-conductivity is an example of this. The so-called Frobenius conditions guarantee this; at least the preferred extremals could have this complete integrability property, making TGD an integrable theory (see the appendix of the article at my homepage). In the recent case, the dark flux tubes with the size scale of the nucleus would carry an ideal hydrodynamical flow with very low viscosity.
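The scaling arithmetic behind the mass estimates in points 1 and 4 is elementary: the M89 mass scale is 2^((107-89)/2) = 512 times the ordinary M107 scale, so the naive scaling of the 135 MeV pion gives about 69 GeV, and the additional factor of two gives about 138 GeV, close to the proposed 135 GeV. A sketch of the numbers:

    # p-adic mass scale arithmetic for the M89 pion - a sketch of the numbers quoted above.
    scaling = 2 ** ((107 - 89) // 2)          # = 512, ratio of M89 and M107 mass scales
    m_pi_GeV = 0.135                          # ordinary pion mass in GeV
    print(scaling)                            # 512
    print(m_pi_GeV * scaling)                 # ~69 GeV: the naive scaling estimate
    print(m_pi_GeV * 2 * scaling)             # ~138 GeV: with the extra factor of two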
See the chapter New Particle Physics Predicted by TGD: Part I or the article Does color deconfinement really occur?.
Wednesday, August 19, 2015
Could one define dynamical homotopy groups in WCW?
I learned that Agostino Prastaro has done highly interesting work with partial differential equations, also those assignable to geometric variational principles such as Kähler action in TGD. I do not understand the mathematical details, but the key idea is a simple and elegant generalization of Thom's cobordism theory, and it is difficult to avoid the idea that the application of Prastaro's idea might provide insights about the preferred extremals, whose identification is now on a rather firm basis.
One could also consider a definition of what one might call dynamical homotopy groups as genuine characteristics of WCW topology. The first prediction is that the values of conserved classical Noether charges correspond to disjoint components of WCW. Could the natural topology in the parameter space of Noether charges (zero modes of the WCW metric) be p-adic and realize adelic physics at the level of WCW? An analogous conjecture was made on the basis of the spin glass analogy a long time ago. A second surprise is that only the 6 lowest dynamical homotopy/homology groups of WCW would be non-trivial. The Kähler structure of WCW suggests that only Π0, Π2, and Π4 are non-trivial.
The interpretation of the analog of Π1 in terms of deformations of generalized Feynman diagrams, with an elementary cobordism snipping away a loop as a move leaving the scattering amplitude invariant, conforms with the number theoretic vision about the scattering amplitude as a representation for a sequence of algebraic operations, which can always be reduced to a tree diagram. TGD would indeed be a topological QFT: only the dynamical topology would matter.
For details see the chapter Recent View about Kähler Geometry and Spin Structure of "World of Classical Worlds" of "Quantum physics as infinite-dimensional geometry" or the article Could One Define Dynamical Homotopy Groups in WCW?.
Tuesday, August 18, 2015
Hydrogen sulfide superconducts at -70 degrees Celsius!
The newest news is that hydrogen sulfide - the compound responsible for the smell of rotten eggs - conducts electricity with zero resistance at a record high temperature of 203 Kelvin (–70 degrees C), reports a paper published in Nature. This super-conductor however suffers from a serious existential crisis: it behaves very much like an old fashioned super-conductor, for which superconductivity is believed to be caused by lattice vibrations, and is therefore not allowed to exist in the world of standard physics! To be or not to be!
TGD Universe allows however all flowers to bloom: the interpretation is that the mechanism is a large enough value of heff=n×h implying that the critical temperature scales up. Perhaps it is not a total accident that hydrogen sulfide H2S - chemically analogous to water - results from the bacterial breakdown of organic matter, which according to TGD is a high temperature super-conductor at room temperature and consists mostly of water, absolutely essential for the properties of living matter in TGD Universe.
See the earlier posting about pairs of magnetic flux tubes carrying the dark electrons of a Cooper pair as an explanation of high Tc (and maybe also of low Tc) superconductivity.
About negentropic entanglement as analog of an error correction code
In classical computation, the simplest manner to control errors is to take several copies of the bit sequences. In the quantum case the no-cloning theorem prevents this. Error correcting codes code n information qubits into the entanglement of N>n physical qubits. Additional constraints represent the subspace of n qubits as a lower-dimensional sub-space of the N qubits. This redundant representation is analogous to the use of parity bits. The failure of a constraint to be satisfied tells that an error is present and also the character of the error. This makes possible the automatic correction of the error if it is simple enough - such as a change of the phase of a spin state or a spin flip.
Negentropic entanglement (NE) obviously gives rise to a strong reduction in the number of states of the tensor product. Consider a system consisting of two entangled sub-systems with N1 and N2 spins. Without any constraints the number of states in the state basis is 2^N1× 2^N2 and one has N1+N2 qubits. The elements of the entanglement matrix can be written as EA,B, with A= ⊗i=1,...,N1 (mi,si) and B= ⊗k=1,...,N2 (mk,sk), in order to make the tensor product structure manifest. For simplicity one can consider the situation N1=N2=N.
The un-normalized general entanglement matrix is parametrized by 2× 2^(2N) independent real numbers, with each spin contributing two degrees of freedom. A unitary entanglement matrix is characterized by 2^(2N) real numbers. One might perhaps say that one has 2^(2N) real bits instead of almost 2^(2N+1) real qubits. If the time evolution according to ZEO respects the negentropic character of the entanglement, the sources of errors are reduced dramatically.
The challenge is to understand what kind of errors NE eliminates and how the information bits are coded by it. NE is respected if the errors act as unitary automorphisms E→ UEU† of the unitary entanglement matrix. One can consider two interpretations.
1. The unitary automorphisms leave the information content unaffected only if they commute with E. In this case unitary automorphisms acting non-trivially would give rise to genuine errors, and an error correction mechanism would be needed and would be coded into the quantum computer program.
2. One can also consider the possibility that the unitary automorphisms do not affect the information content, so that the diagonal form of the entanglement matrix, coded by N phases, would carry the information. Clearly, the unitary automorphisms would act like gauge transformations. Nature would take care that no errors emerge. Of course, more dramatic things are in principle allowed by NMP: for instance, the unitary entanglement matrix could reduce to a tensor product of several unitary matrices. Negentropy could be transferred from the system, and is indeed transferred as the computation halts.
By number theoretic universality the diagonalized entanglement matrix would be parametrized by N roots of unity, each having n possible values, so that n^N different NEs would be obtained and the information storage capacity would be I= log(n)/log(2) × N bits; for n=2^k one would have k× N bits. Powers of two for n are favored. Clearly the option for which only the eigenvalues of E matter looks like the more attractive realization of entanglement matrices. If the overall phase of E does not matter, as one expects, the number of full bits is k× N-1.
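A toy sketch of this counting, taking the diagonalized entanglement matrix at face value as parametrized by N eigenphases which are n:th roots of unity; the values of N and n below are arbitrary illustrative choices.

    import numpy as np

    def diagonal_unitary_entanglement(ks, n):
        # Diagonal unitary matrix whose eigenvalues are n:th roots of unity exp(2*pi*i*k/n).
        phases = np.exp(2j * np.pi * np.array(ks) / n)
        return np.diag(phases)

    N, n = 4, 8                         # toy sizes: N eigenphases, n possible values each
    ks = [1, 3, 0, 5]                   # one arbitrary choice out of n**N possible matrices
    E = diagonal_unitary_entanglement(ks, n)

    assert np.allclose(E @ E.conj().T, np.eye(N))    # unitarity
    capacity_bits = N * np.log2(n)                   # I = N*log(n)/log(2) = 12 bits here
    print(capacity_bits)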
In fact, the Fermat polygons, for which the cosine and sine of the angle defining the polygon are expressible by iterated square roots besides basic arithmetic operations on rationals (geometrically: ruler and compass constructions), correspond to integers which are products of a power of two and of distinct Fermat primes Fn=2^(2^n)+1.
This picture can be related to a much bigger picture.
1. In TGD framework number theoretical universality requires discretization in terms of an algebraic extension of rationals. This is not performed at the space-time level but for the parameters characterizing space-time surfaces at the level of WCW. Strong form of holography is also essential and allows one to consider partonic 2-surfaces and string world sheets as the basic objects. Number theoretical universality (adelic physics) forces a discretization of phases, and the number theoretically allowed phases are roots of unity defined by some algebraic extension of rationals. The discretization can also be interpreted in terms of finite measurement resolution. Notice that the condition that roots of unity are in question realizes finite measurement resolution in the sense that errors have a minimum size and are thus detectable.
2. The hierarchy of quantum criticalities corresponds to a fractal inclusion hierarchy of isomorphic sub-algebras of the super-symplectic algebra acting as conformal gauge symmetries. The generators in the complement of this algebra can act as dynamical symmetries affecting the physical states. An infinite hierarchy of gauge symmetry breakings is the outcome, and the weakening of measurement resolution would correspond to a reduction in the size of the broken gauge group. The hierarchy of quantum criticalities is accompanied by a hierarchy of measurement resolutions and a hierarchy of effective Planck constants heff=n× h.
3. These hierarchies are argued to correspond to the hierarchy of inclusions for hyperfinite factors of type II1 labelled by quantum phases and quantum groups. An inclusion defines finite measurement resolution since the included sub-algebra does not induce observable effects on the state. By the McKay correspondence the hierarchy of inclusions is accompanied by a hierarchy of simply laced Lie groups which get bigger as one climbs up in the hierarchy. Their interpretation as genuine gauge groups does not make sense since their sizes should be reduced. An attractive possibility is that these groups are factor groups G/H such that the normal subgroup H (necessarily so) is the gauge group and indeed gets smaller, and G/H is the dynamical group identifiable as a simply laced group which gets bigger. This would require that both G and H are infinite-dimensional groups.
An interesting question is how they relate to the super-symplectic group assignable to "light-cone boundary" δ M4+/-× CP2. I have proposed this interpretation in the context of WCW geometry earlier.
4. Here I have spoken only about dynamical symmetries defined by discrete subgroups of simply laced groups. I have earlier considered the possibility that discrete symmetries provide a description of finite resolution, which would be equivalent with quantum group description.
Summarizing, these arguments boil down to the conjecture that discrete subgroups of these groups act as effective symmetry groups of entanglement matrices and realize finite quantum measurement resolution. A very deep connection between quantum information theory and these hierarchies would exist.
Gauge invariance has turned out to be a fundamental symmetry principle, and one can ask whether unitary entanglement matrices, assuming that only the eigenvalues matter, could give rise to a simulation of discrete gauge theories. Could the reduction of the information to that provided by the diagonal form be interpreted as an analog of gauge invariance?
1. The hierarchy of inclusions of hyper-finite factors of type II1 strongly suggests a hierarchy of effective gauge invariances characterizing measurement resolution, realized in terms of a hierarchy of normal subgroups and dynamical symmetries realized as coset groups G/H. Could these effective gauge symmetries allow one to realize unitary entanglement matrices invariant under these symmetries?
2. A natural parametrization for single qubit errors is as rotations of the qubit. If the error acts as a rotation on all qubits, the rotational invariance of the entanglement matrix defining the analog of the S-matrix is enough to eliminate the effect on information processing.
Quaternionic unitary transformations act on qubits as unitary rotations. Could one assume that the complex numbers as the coefficient field of QM are effectively replaced with quaternions? If so, multiplication of states by a unit quaternion would leave the physics and information content invariant, just like multiplication by a complex phase leaves them invariant in standard quantum theory.
One could consider the possibility that quaternions act as a discretized version of local gauge invariance affecting the information qubits and thus reducing further their number and thus also the errors. This requires the introduction of the analog of a gauge potential and the coding of quantum information in terms of SU(2) gauge invariants. In the discrete situation the gauge potential would be replaced with non-integrable phase factors along the links of a lattice, as in lattice gauge theory. In TGD framework the links would correspond to the fermionic strings connecting partonic 2-surfaces carrying the fundamental fermions at the string ends as point like particles. Fermionic entanglement is indeed between the ends of these strings.
3. Since entanglement is multilocal and quantum groups accompany the inclusion, one cannot avoid the question whether the Yangian symmetry crucial for the formulation of quantum TGD could be involved.
For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe.
Sunday, August 16, 2015
Sleeping Beauty Problem
Lubos wrote polemically about Sleeping Beauty Problem. The procedure is as follows.
Sleeping Beauty is put to sleep and a coin is tossed. If the coin comes up heads, Beauty will be awakened and interviewed only on Monday. If the coin comes up tails, she will be awakened and interviewed on both Monday and Tuesday. On Monday she will be put back to sleep with an amnesia inducing drug. In either case, she will be awakened on Wednesday without an interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed, she is asked, "What is your belief now for the proposition that the coin landed heads?" No other communications are allowed, so that the Beauty does not know whether it is Monday or Tuesday.
The question is about the belief of the Sleeping Beauty on the basis of the information she has, not about the actual probability that the coin landed heads. If one wants to debate, one can imagine oneself in the position of Sleeping Beauty. There are two basic debating camps, halfers and thirders.
1. Halfers argue that the outcome of the coin toss cannot in any manner depend on future events, and one has P(Heads) = P(Tails) = 1/2 just from the fact that the coin is fair. To me this view is obvious. Lubos also holds this view. I however vaguely remember that years ago, when first encountering this problem, I was ready to take the thirder view seriously.
2. Thirders argue in the following manner using conditional probabilities. From the conditional probability P(Tails|Monday) = P(Heads|Monday) (P(X|Y) denotes the probability of X given Y), from the basic formula for conditional probabilities P(X|Y) = P(X and Y)/P(Y), and from P(Monday) = P(Tuesday) = 1/2 (this actually follows from P(Heads) = P(Tails) = 1/2 in the experiment considered!), one obtains P(Tails and Tuesday) = P(Tails and Monday).
Furthermore, one also has P(Tails and Monday)= P(Heads and Monday) (again from P(Heads)= P(Tails)=1/2!) giving
P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday). Since these events are independent for one trial and one of them must occur, each probability must equal 1/3. Since "Heads" implies that the day is Monday, one has P(Heads and Monday) = P(Heads) = 1/3, in conflict with P(Heads) = 1/2 used in the argument. To me this looks like a paradox telling that some implicit assumption about probabilities in relation to time is wrong.
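For concreteness, here is a minimal Monte Carlo sketch in Python (my own illustration, not part of the original post). It makes explicit which frequencies the two camps are counting; it of course cannot by itself settle what Beauty's credence should be.

```python
import random

# Monte Carlo sketch: per-toss frequency of heads vs. per-awakening frequency.
random.seed(0)
tosses = 100_000
heads = 0
awakenings = 0
awakenings_with_heads = 0
for _ in range(tosses):
    is_heads = random.random() < 0.5
    heads += is_heads
    days_awakened = 1 if is_heads else 2     # heads: Monday only; tails: Monday and Tuesday
    awakenings += days_awakened
    awakenings_with_heads += is_heads         # a heads toss contributes exactly one awakening
print(heads / tosses)                         # ~ 1/2: fraction of tosses landing heads (halfer count)
print(awakenings_with_heads / awakenings)     # ~ 1/3: fraction of awakenings at which the coin lies heads (thirder count)
```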
In my opinion the basic problem in the argument of the thirders is their assumption that events occurring at different times can form a set of independent events. Also the difference between experienced and geometric time is involved in an essential manner when one speaks about amnesia.
When one speaks about independent events and their probabilities in physics, they must be causally independent and occur at the same moment of time. This is crucial in the application of probability theory in quantum theory and also in classical theory. If time did not matter, one should be able to replace the time-line with a space-like line - say the x-axis. The counterparts of Monday, Tuesday, and Wednesday can be located on the x-axis with a mutual distance of, say, one meter. One cannot however realize the experimental situation, since the notion of space-like amnesia does not make sense! Or crystallizing it: independent events must have space-like separation. The arrow of time is also essential. For the conditional probabilities P(X|Y) used above, X occurs before Y, and this breaks the standard arrow of time.
This clearly demonstrates that philosophy and mathematics cannot be separated from physics and that the notion of time should be a fundamental issue in philosophy, mathematics, and physics!
Friday, August 14, 2015
About quantum measurement and quantum computation in TGD Universe
Over the years I have been thinking about how quantum computation could be carried out in the TGD Universe (see this). There are considerable deviations from the standard view. Zero Energy Ontology (ZEO), the weak form of NMP dictating the dynamics of state function reduction, negentropic entanglement (NE), and the hierarchy of Planck constants define the basic differences between TGD based and standard quantum measurement theory. TGD suggests also the importance of topological quantum computation (TQC) like processes, with braids represented as magnetic flux tubes/strings along them.
The natural question that popped into my mind was how NMP and Zero Energy Ontology (ZEO) could affect the existing view about TQC. The outcome was a more precise view about TQC. The basic observation is that the phase transition to the dark matter phase reduces dramatically the noise affecting quantum qubits. This, together with the robustness of braiding as a TQC program, raises excellent hopes about TQC in the TGD Universe. The restriction to negentropic space-like entanglement (NE) defined by a unitary matrix is something new but does not seem to have any fatal consequences, as the study of Shor's algorithm shows.
NMP strongly suggests that when a pair of systems - the ends of a braid - suffers state function reduction, the NE must somehow be transferred from the system. How? The model for quantum teleportation allows one to identify a possible mechanism for achieving this. This mechanism could be a fundamental mechanism of information transfer also in living matter, and phosphorylation could represent the transfer of NE according to this mechanism: the transfer of metabolic energy would at a deeper level be a transfer of negentropy. Quantum measurements could actually be seen as transfer of negentropy at a deeper level.
Thursday, August 13, 2015
Flux tube description seems to apply also to low Tc superconductivity
Discussions with Hans Geesink have inspired a sharpening of the TGD view about bio-superconductivity (bio-SC) and high Tc superconductivity (SC), and relating the picture to standard descriptions in more detail. In fact, also standard low temperature superconductivity modelled using BCS theory could be based on the same universal mechanism, involving pairs of magnetic flux tubes, possibly forming flattened square-like closed flux tubes, with the members of Cooper pairs residing at them.
A brief summary about strengths and weakness of BCS theory
First I try to summarise what I remember about BCS theory.
1. BCS theory is successful in 3-D superconductors and explains a lot: supracurrent, diamagnetism, and thermodynamics of the superconducting state, and it has correlated many experimental data in terms of a few basic parameters.
2. BCS theory has also failures.
1. The dependence on crystal structure and chemistry is not well understood: it is not possible to predict which materials are superconducting and which are not.
2. High-Tc SC is not understood. Antiferromagnetism is known to be important. A quite recent experiment demonstrates conductivity - maybe even superconductivity - in a topological insulator in the presence of a magnetic field (see
this). This is a complete paradox and suggests in the TGD framework that the flux tubes of the external magnetic field serve as the wires (see previous posting).
3. The BCS model is based on crystalline long range order and k-space (the Fermi sphere). BCS-difficult materials have short range structural order: amorphous alloys, SC metal particles down to 50 Angstroms (the thickness of the lipid layer of the cell membrane), transition metals, alloys, compounds. A real space description rather than a k-space description based on crystalline order seems to be more natural. Could it be that the description of the electrons of the Cooper pair is not correct? If so, k-space and the Fermi sphere would be an appropriate description only for the ordinary electrons needed to model the transition to superconductivity. Superconducting electrons could require a different description.
4. A local chemical bonding/real molecular description has been proposed. This is of course very natural in the standard physics framework, since the standard view about magnetic fields does not provide any ideas about Cooper pairing, and magnetic fields are only a nuisance rather than something making SC possible. In the TGD framework the situation is different.
TGD based view about SC
The TGD proposal for high Tc SC and bio-SC relies on many-sheeted space-time and the TGD based view about dark matter as a heff = n×h phase of ordinary matter emerging at quantum criticality (see this).
Pairs of dark magnetic flux tubes would be the wires carrying dark Cooper pairs, with the members of the pair at the tubes of the pair. If the members of the flux tube pair carry opposite magnetic fields, the Cooper pairs have spin 0. The magnetic interaction energy with the flux tube is what determines the critical temperature. High Tc superconductivity, in particular the presence of two critical temperatures, can be understood. The role of antiferromagnetism can be understood.
The TGD model is clearly an x-space model: dark flux tubes are an x-space concept. Momentum space and the notion of the Fermi sphere are certainly useful in understanding the transformation of ordinary lattice electrons to dark electrons at flux tubes, but the superconducting electron pairs at flux tubes would have a different description.
Now come the heretic questions.
1. Do the crystal structure and chemistry define the (only) fundamental parameters in SC? Could the notion of magnetic body - which of course can correlate with crystal structure and chemistry - be an equally important or even more important notion?
2. Could also ordinary BCS SC be based on magnetic flux tubes? Is the value of heff = n×h only considerably smaller, so that low temperatures are required since the energy scale is the cyclotron energy scale E = heff×fc, with fc = eB/me? High Tc SC would only have a larger heff, and bio-superconductivity an even larger heff! (A back-of-the-envelope estimate of this energy scale is sketched after this list.)
3. Could it be that also in low Tc SC there are dark flux tube pairs carrying dark magnetic fields in opposite directions, with Cooper pairs flowing along these pairs? The pairs could actually form closed loops: kind of flattened O's or flattened squares.
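As a rough numerical illustration of the cyclotron energy scale mentioned in point 2 (my own back-of-the-envelope estimate, not from the post; I use the standard cyclotron frequency fc = eB/(2π me)), one can ask at which temperature the energy heff×fc equals the thermal energy kB T:

```python
import scipy.constants as c

# Temperature at which the cyclotron energy heff*fc equals kB*T, for electrons
# in a field B, with heff = n*h. For n = 1 and B = 1 T this is roughly 1.3 K,
# i.e. with ordinary h the cyclotron binding survives only at low temperatures;
# a larger n scales this temperature up proportionally.
def critical_temperature(B_tesla, n):
    f_c = c.e * B_tesla / (2 * c.pi * c.m_e)   # cyclotron frequency
    return n * c.h * f_c / c.k                  # E / kB

for n in (1, 10, 1000):
    print(n, critical_temperature(1.0, n), "K")
```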
One must be able to understand the Meissner effect. Why would dark SC prevent the penetration of the ordinary magnetic field into the superconductor?
1. Could Bext actually penetrate the SC at its own space-time sheet? Could an opposite field Bind at its own space-time sheet effectively interfere it to zero? In TGD this would mean the generation of a space-time sheet with Bind = -Bext, so that a test particle experiences vanishing B. This is obviously new: fields do not superpose, only the effects caused by them superpose.
Could dark or ordinary flux tube pairs carrying Bind be created, such that the flux tube portion carrying Bind in the interior cancels the effect of Bext on the charge carriers? The return flux of the closed flux tube of Bind would run outside the SC and amplify the detected field Bext outside the SC - just as observed.
2. What happens when Bext penetrates the SC? The transition heff → h must take place for the dark flux tubes, whose cross-sectional area and perhaps also length scale down by heff while the field strength increases by heff. If also the flux tubes of Bind are dark, they would shrink in size in the transition heff → h by a factor 1/heff and would remain inside the SC! Bext would no longer be screened inside the superconductor and would be amplified outside it! The critical value of Bext would correspond to criticality for this heff → h phase transition.
3. Why and how does the phase transition destroying SC take place? Is it energetically impossible to build a too strong Bind, so that the effective field Beff = Bdark + Bind + Bext experienced by the electrons is reduced, the binding energy of the Cooper pair is therefore reduced, and the pair becomes thermally unstable? This in turn would mean that the Cooper pairs generating the dark Bdark disappear and also Bdark disappears: SC disappears.
See the chapter Quantum model for bio-superconductivity: II
unbounded operator
This page is about unbounded linear operators on Hilbert spaces. For operators on Hilbert spaces, “bounded” and “continuous” are synonymous, so the first question to be answered is: Why consider unbounded, i.e., discontinuous operators in a category that is a subcategory of Top? The reason is simple: It is forced upon us both by applications, such as quantum mechanics, and by the fact that simple and useful operators like differentiation are not bounded. Happily, in most applications the operators considered retain some sort of “limit property”, namely the property of being “closed”. Although that seems to be negligible compared to continuity, it allows the development of a rich and useful theory, and as a consequence there is a tremendous amount of literature devoted to this subject.
One way of dealing with unbounded operators is via affiliated operators, see there.
Example: differentiation is unbounded
Let $\mathcal{H}$ be the Hilbert space $L^2(\mathbb{R})$, and let $T$ be the differentiation operator defined on the dense subspace of Schwartz functions $f$ by $Tf(x) \coloneqq f'(x)$. One might hope $T$ has a continuous extension to all of $L^2$, but consider the sequence $f_k(x) := \exp(-k|x|)$ for $k \in \mathbb{N}$. Then we have $\frac{\|Tf_k\|}{\|f_k\|} = k$, so $T$ is unbounded.
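A quick numerical check (my own sketch in Python/numpy, not part of the page) illustrates the growth of this ratio:

```python
import numpy as np

# For f_k(x) = exp(-k|x|) the ratio ||T f_k|| / ||f_k|| in L^2 equals k exactly,
# since |f_k'(x)| = k |f_k(x)| almost everywhere; the discretized ratio is close to k.
x, dx = np.linspace(-40.0, 40.0, 800_001, retstep=True)
l2 = lambda g: np.sqrt(np.sum(g**2) * dx)      # discrete L^2(R) norm
for k in (1, 2, 5, 10):
    f = np.exp(-k * np.abs(x))
    print(k, l2(np.gradient(f, dx)) / l2(f))    # approximately k
```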
Note that the domain of definition of an unbounded operator will generally be given only on a dense subspace, as in this example. Indeed, the existence of unbounded operators defined everywhere (on a Hilbert space) is non-constructive, relying on the Hahn–Banach theorem and refutable in dream mathematics.
After the definition we will look at some concepts that can be transferred from the bounded context here.
We will talk about the von Neumann algebra that contains all spectral projections here.
Finally we give some counterexamples, i.e., phenomena that contradict the intuition built from bounded operators here.
An unbounded operator $T$ on a Hilbert space $\mathcal{H}$ is a linear operator defined on a subspace $D$ of $\mathcal{H}$. $D$ is necessarily a linear submanifold. Usually one assumes that $D$ is dense in $\mathcal{H}$, which we will do, too, unless we indicate otherwise.
In particular every bounded operator $A: \mathcal{H} \to \mathcal{H}$ is an unbounded operator (red herring principle).
Domains: a first look
Unbounded operators are not defined on the whole Hilbert space, so it is essential that, when talking about a specific unbounded operator, we are actually talking about the pair $(T, D_T)$ of an operator $T$ together with its domain $D_T$. In particular two unbounded operators $T, S$ are equal iff their domains are equal, $D_T = D_S$, and for all $x \in D_S = D_T$ we have $Tx = Sx$.
If the domain is not specified, the default definition of the domain of a given operator $T$ is simply $D_T := \{x \in \mathcal{H} \mid Tx \in \mathcal{H}\}$.
Warning: if one composes two unbounded operators $T$ and $S$, it may happen that $D_T \cap D_S = \{0\}$. If we insist that all our unbounded operators are densely defined, we need as an additional assumption that $D_T \cap D_S$ is dense to make sense of the composite $T S$.
The Hellinger-Toeplitz theorem
• Theorem: Let $A$ be an everywhere defined linear operator on a Hilbert space $\mathcal{H}$ that is symmetric. Then $A$ is bounded.
For the definition of symmetric see below.
The Hellinger-Toeplitz theorem is a no-go theorem for quantum mechanics. Since it is known that operators essential for quantum mechanics are both symmetric and unbounded, we are led to conclude that they cannot be everywhere defined. This means that the problems that accompany only densely defined operators cannot be avoided.
This is a corollary to the closed graph theorem III.2 in the book
Closedness, selfadjointness, resolvent
Recall that the graph of an operator $T$ (or any function, in general) is the subset $\mathcal{G}_T := \{(x, y) \in \mathcal{H} \times \mathcal{H} \mid Tx = y \}$. The graph of a given operator need not be closed (in the product topology of $\mathcal{H} \times \mathcal{H}$). The notion that will be a surrogate for continuity is “closable”, defined as follows:
• Definition: Given an operator $T$ with domain $D_T$, any operator $T'$ with larger domain that is equal to $T$ on $D_T$ is called an extension of $T$; we write $T \subset T'$.
• Definition: An operator is closed if its graph is closed.
• Definition: An operator is closable if it has a closed extension. The smallest such extension is called the closure of $T$ and is denoted by $\overline{T}$.
• Proposition (closure of graph is graph of closure): If an operator $T$ is closable, then the closure of its graph $\overline{\mathcal{G}}$ is the graph of an operator, and this operator is its closure.
The last part deserves some elaboration: Given an operator $T$, we can always form the closure of its graph $\mathcal{G}$. How can the closure not be the graph of an operator? Given a sequence $(x_n)$ in $D_T$ such that both limits $x \coloneqq \lim_{n \to \infty} x_n$ and $y \coloneqq \lim_{n \to \infty} T x_n$ exist, we have that $(x, y)$ is in the closure of $\mathcal{G}$. Now it may happen that there is another point $(x, y')$ in the closure with $y \neq y'$, which implies that the closure cannot be the graph of a single-valued function.
We may assume without loss of generality that $\lim_{n \to \infty} x_n = 0$, so that we get as a characterisation of closability: if $y \coloneqq \lim_{n \to \infty} T x_n$ exists, then $y = 0$. If $T$ were continuous, we would not have to assume that $(Tx_n)$ is convergent, so this additional assumption tells us in what respect closability is weaker than continuity.
• Definition: For a closed operator $T$ a subset $D$ of $D_T$ is called a core of $T$ if $\overline{T \vert_{D}} = T$. In other words: if we restrict $T$ to $D$ and take the closure, we obtain again $T$.
Example of an operator that is not closable
We let $T^*$ be the adjoint of an operator $T$. Note that for an only densely defined $T$, the domain of the adjoint may be strictly larger.
• Definition (selfadjoint et al.): An operator is symmetric (or Hermitian) if $T = T^* \vert_{D_T}$ (the adjoint is restricted to the domain of $T$). It is selfadjoint if it is symmetric and $D_T = D_{T^*}$. A symmetric operator is essentially selfadjoint if its closure is selfadjoint.
The difference between being symmetric and being selfadjoint is crucial, although there is a famous anecdote that seems to indicate otherwise:
• Anecdote of selfadjointness: Once upon a time John von Neumann thanked Werner Heisenberg for the invention of quantum mechanics, because this had led to the development of so much beautiful mathematics, adding that mathematics paid back a part of the debt by clarifying, for example, the difference between a selfadjoint operator and one that is only symmetric. Heisenberg replied: “What is the difference?”
Nevertheless theorems that assume an operator to be selfadjoint will not be applicable to an operator that is only symmetric. One example is the spectral theorem.
Example of a symmetric, but not selfadjoint, operator
The definition of the resolvent does not pose any problems compared to the bounded case:
• Definition: let $T$ be a closed operator on a Hilbert space $\mathcal{H}$. A complex number $\lambda$ is in the resolvent set $\rho(T)$ if $\lambda \mathbb{1} - T$ is a bijection of $D_T$ and $\mathcal{H}$ with bounded inverse. The inverse operator is called the resolvent $R_{\lambda}(T)$ of $T$ at $\lambda$.
• Theorem: The resolvent set is an open subset of $\mathbb{C}$ on which the resolvent is an analytic operator-valued function. Resolvents at different points commute and we have
$$ R_{\lambda}(T) - R_{\mu}(T) = (\mu - \lambda) R_{\mu}(T) R_{\lambda}(T) $$
The proof can be done as in the bounded case.
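A finite-dimensional sanity check of this resolvent identity (my own sketch; the theorem of course concerns closed, possibly unbounded operators, but the algebra is identical for a Hermitian matrix):

```python
import numpy as np

# Verify R_lambda - R_mu = (mu - lambda) R_mu R_lambda for a random Hermitian matrix.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = A + A.T                                      # selfadjoint, so non-real points lie in the resolvent set
I = np.eye(n)
R = lambda z: np.linalg.inv(z * I - A)           # resolvent at z
lam, mu = 1.0 + 2.0j, -0.5 + 1.0j
lhs = R(lam) - R(mu)
rhs = (mu - lam) * R(mu) @ R(lam)
print(np.max(np.abs(lhs - rhs)))                 # ~ 1e-15: the identity holds up to rounding
```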
Commuting operators
The concept of commuting operators, which poses no problem in the bounded case, presents a conceptual difficulty in the unbounded one: for given operators $A, B$ we would like to be able to say whether they commute, although their composite may not have dense domain. For selfadjoint operators there is a solution to this problem: We know that in the bounded case, two selfadjoint operators commute iff their spectral projections commute. This suggests the
• Definition: two (possibly unbounded) selfadjoint operators commute iff all their spectral projections commute.
The spectral theorem shows that all bounded Borel functions of two commuting operators will also commute.
The following theorem states the reverse for two of the most important functions and shows that the definition of commutation above is reasonable:
• Theorem: let $A$ and $B$ be selfadjoint operators on a Hilbert space $\mathcal{H}$. Then the following three statements are equivalent:
(a) $A$ and $B$ commute
(b) if $\operatorname{Im} \lambda$ and $\operatorname{Im} \mu$ are $\neq 0$, then $R_{\lambda}(A)$ and $R_{\mu}(B)$ commute, as defined for bounded operators: $R_{\lambda}(A) R_{\mu}(B) = R_{\mu}(B) R_{\lambda}(A)$
(c) for all $t, s \in \mathbb{R}$ we have $\exp(itA) \exp(isB) = \exp(isB) \exp(itA)$
Strongly continuous one-parameter semigroups
A strongly continuous one-parameter semigroup is a unitary representation of $\mathbb{R}$ on $\mathcal{H}$, where $\mathbb{R}$ is seen as a topological group with respect to addition, see topological group. An explicit definition recalling these concepts is this:
• Definition: a function $U : \mathbb{R} \to \mathcal{B}(\mathcal{H})$ is a one-parameter semigroup if the semigroup condition $U(t+s) = U(t)U(s)$ holds. If for every $x \in \mathcal{H}$ and $t \to t_0$ we have $U(t)x \to U(t_0)x$, then it is strongly continuous. If every $U(t)$ is a unitary operator, it is a unitary semigroup.
In the following a semigroup will be understood to be a one-parameter unitary strongly continuous semigroup.
In physics, one-parameter semigroups of this kind often represent the time evolution of a physical system described by an evolution equation.
• Theorem: a selfadjoint operator $A$ generates a semigroup via $U(t) := \exp(itA)$.
• Theorem (Stone’s theorem): let $U$ be a semigroup, then there is a selfadjoint operator $A$ such that $U(t) = \exp(i t A)$. This operator is often called the infinitesimal generator of $U$.
These two theorems are essential for the Schrödinger picture of quantum mechanics, which describes a system by the Schrödinger equation: we now have a one-to-one correspondence between selfadjoint operators, which can be seen as Hamilton operators (only special operators will be seen as describing actual physical systems, of course), and semigroups, which describe the time evolution generated by the Hamilton operator.
As a trivial observation we add that Stone’s theorem is a (huge) generalization of the Taylor series: let $f: \mathbb{R} \to \mathbb{R}$ be (real) analytic in a neighborhood of $0$; then we get for $h$ small enough:
$$ f(h) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} h^n = \left( \exp\Bigl( i h \bigl( -i \tfrac{d}{dx} \bigr) \Bigr) f \right)(0) $$
This shows that the operator $-i \frac{d}{dx}$ generates the semigroup of translations on the real line. Now we could, for example, use Stone’s theorem to prove that $-i \frac{d}{dx}$ is selfadjoint by proving that the translation group is strongly continuous.
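A small numerical illustration of this (my own sketch, assuming periodic boundary conditions on a large box so that the FFT can stand in for the Fourier transform on the line): applying $\exp(i h A)$ with $A = -i \frac{d}{dx}$ to a Gaussian reproduces the translated Gaussian.

```python
import numpy as np

# Apply U(h) = exp(i h A), A = -i d/dx, spectrally: in Fourier space A acts as
# multiplication by the wavenumber k, so U(h) multiplies the transform by exp(i k h).
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # wavenumbers
f = np.exp(-x**2)                                # a Gaussian test function
h = 3.0
Uf = np.fft.ifft(np.exp(1j * h * k) * np.fft.fft(f)).real
print(np.max(np.abs(Uf - np.exp(-(x + h)**2))))  # tiny: (U(h) f)(x) = f(x + h) up to rounding
```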
Subtleties resulting from domain issues
(This rather generic title will have to be revised.)
Nelson’s example of noncommuting exponentials
Nelson’s example shows that the rather involved definition of commutativity of two unbounded operators is well motivated, because a more naive one would have unwanted consequences. It is a counterexample to the following conjecture:
• Conjecture (false!): Let $A$ and $B$ be selfadjoint operators on a Hilbert space $\mathcal{H}$ and $D$ a dense subset such that both $A$ and $B$ restricted to $D$ are essentially selfadjoint. Suppose that for all $x \in D$ we have $A B x - B A x = 0$. Then $A$ and $B$ commute.
We have already seen that on $\mathbb{R}^2$ we can define two essentially selfadjoint operators $-i \partial_x$ and $-i \partial_y$ that generate translations along the x-axis and the y-axis respectively. Both the generated translations and the operators commute (the latter if applied to differentiable functions, of course).
The central idea of Nelson’s counterexample is to replace $\mathbb{R}^2$ by a Riemann surface with two sheets, such that walking east, then walking north takes you to one sheet, while walking north, then walking east takes you to the other sheet.
We use the Riemann surface $M$ of $f(z) = \sqrt{z}$. We give a brief exposition of its construction: Take two copies of $\mathbb{C}$, two “sheets”, call them I and II. Cut both along $(0, \infty)$, label the edge of the first quadrant along the cut as $+$ and the edge of the fourth quadrant as $-$. Then attach the $+$ edge of I to the $-$ edge of II and vice versa.
Let $\mathcal{H} = L^2(M)$ with respect to Lebesgue measure. As indicated in the idea section we define $A = -i \partial_x$ and $B = -i \partial_y$; this is with respect to the canonical chart on each sheet that projects it onto $\mathbb{C}$ and then identifies $\mathbb{C}$ with $\mathbb{R}^2$.
Let $D$ be the set of all smooth functions with compact support not containing $0$. Then:
(a) $A$ and $B$ are essentially selfadjoint on $D$
(b) $A$ and $B$ map $D$ onto $D$
(c) for all $x \in D$ we have $A B x = B A x$
(d) $\exp(i t A)$ and $\exp(i s B)$ do not commute.
Only part (a) needs further explanation…
• Konrad Schmuedgen, Unbounded self-adjoint operators on Hilbert space, Springer GTM 265, 2012
Chapter VIII of the following classic volume is devoted to unbounded operators:
Nelson’s example is taken from the above reference, the original reference is this:
• Edward Nelson, Analytic Vectors, Ann. Math. 70, pp. 572–615, 1959
category: analysis
9cc62de205327c13 | Take the 2-minute tour ×
My question derives from reading a recent preprint (arXiv:1209.0827v1, in particular Section 4.1), but it can be phrased quite independently from that paper. The setup is as follows.
Let $A$ be the tridiagonal $n\times n$ matrix with $a_{ii} = -1$ on the main diagonal and $a_{i,i+1} = a_{i+1,i} = 2$ on the other two diagonals. We take the $n\times 1$ vector 1 (consisting of only 1's) as a right-hand side and look for the solution to the linear system of equations. It is not difficult to show that the determinant of the matrix is always an odd integer, so the system has a unique solution for each $n$.
Question (asked by the authors). Are there infinitely many $n$ such that all entries of the solution vector are positive?
This would be interesting because each such $n$ gives rise to a solution with a particular property of a toy model for the energy transfer in a nonlinear Schrödinger equation.
Out of curiosity, I did a numerical search for solutions up to $n \sim 1000$ and got the following list of valid $n$ for which the solution is indeed nonnegative.
2, 3, 4, 8, 13, 18, 23, 42, 61, 80, 142, 204, 347, 490, 633, 776, 919, ...
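For reference, a minimal numpy sketch (my own, not from the question) that reproduces this search; for much larger $n$ a banded solver would be preferable:

```python
import numpy as np

# Build the tridiagonal matrix, solve A x = 1, and record the n for which
# every entry of the solution is nonnegative.
valid = []
for n in range(2, 1001):
    A = -np.eye(n) + 2 * np.eye(n, k=1) + 2 * np.eye(n, k=-1)
    x = np.linalg.solve(A, np.ones(n))
    if np.all(x >= 0):
        valid.append(n)
print(valid)   # should reproduce 2, 3, 4, 8, 13, 18, 23, 42, 61, 80, 142, ... as listed above
```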
I noticed the following for the sequence of consecutive differences, which starts with
1, 1, 4, 5, 5, 5, 19, 19, 19, 62, 62, 143, 143, 143, 143, 143...
Observation. This sequence seems to have the property that each entry is either the previous entry or it is the previous entry multiplied by the number of times the previous entry has appeared in the sequence in total (4 has appeared once, 5 has appeared three times) plus the biggest element that is strictly smaller. We do see that indeed $19 = 3\cdot5+4$, $62 = 3\cdot19+5$ and $143 = 2\cdot62+19$. Furthermore, each element bigger than 1 in the sequence of differences seems to be one larger than an element in the sequence of valid $n$.
One can now use the rule to create much larger matrices and check whether they have the property and this seems to work just fine, though I am not sure up to which size Mathematica as used by a numerical layman is trustworthy.
My question. Does the sequence of differences really observe this rule? Assuming it does, does it create all $n$ with the desired property? What decides whether the next term in the sequence of differences is identical to its predecessor or the sum of previous terms?
I am fairly confident that all of this is well-known (there seems to be a sort of construction algorithm behind it) and would be thankful for any references.
OEIS (oeis.org) knows neither of the two sequences... – Dirk Sep 10 '12 at 20:16
1 Answer
Yes there are infinitely many such values of $n$ and the sequence satisfies the rule you observed. The proof is straightforward but technical.
Let $x_1,\dots,x_n$ be the solution. Add $x_0=0$ and $x_{n+1}=0$; then $2x_{k-1}-x_k+2x_{k+1}=1$ for all $k=1,\dots,n$. Introduce $y_k=3x_k-1$; then $y_k$ satisfies the linear recurrence relation $$ 2y_{k-1}-y_k+2y_{k+1}=0 $$ and the boundary conditions $y_0=y_{n+1}=-1$. And we are looking for solutions satisfying $y_k\ge -1$.
Solving the recurrence by the standard method yields that $$ y_k = A\sin(k\alpha)+B\cos(k\alpha) $$ for some constants $A$ and $B$, where $\alpha=\arccos\frac14$. (This number comes from the fact that the roots of the equation $2x^2-x+2=0$ are $\cos\alpha\pm i\sin\alpha$).
Since $y_0=-1$, we have $B=-1$. Then $y_k=-1$ if and only if $$ A = A_k := \frac{\cos(k\alpha)-1}{\sin(k\alpha)} = -\tan(k\alpha/2) $$ So for the solution with $y_{n+1}=-1$ we have $A=A_{n+1}$, and the relation $y_k\ge-1$ takes the form $$ \begin{cases} A_k \ge A_{n+1}, & A_{n+1}<0 \\ A_k \le A_{n+1}, & A_{n+1}>0 \end{cases} $$ (note that $\tan(k\alpha/2)$ and $\sin(k\alpha)$ are of the same sign).
So $n$ is included in the sequence iff $A_{n+1}$ is either the minimum of the positive $A_k$'s, or the maximum of the negative $A_k$'s ($k=1,\dots,n+1$). Observe that the order of $-A_k$'s in $\mathbb R$ is the same as of the numbers $k\alpha\bmod 2\pi$ in $(-\pi,\pi)$. So everything boils down to the study of the sequence $\alpha_k:=k\alpha\bmod 2\pi\in(-\pi,\pi)$. A number $n$ is included in the sequence iff $\alpha_{n+1}$ is the best approximation to 0 among entries of the same sign seen so far. Let's add 1 to all these $n$, so we can consider $\alpha_n$ rather than $\alpha_{n+1}$. The indices of the best approximations are $$ 1(+), 3(-),4(-),5(+),9(-),14(-),19(-),24(+),43(+),\dots $$ where the signs indicate whether the approximation is positive or negative. The rule for the best approximations is well-known (and easy to prove): the next index is the sum of the latest "positive" one and the latest "negative" one. For example, $19=5+14$, $24=5+19$, $43=24+19$. In other words, the last "positive" entry keeps adding itself to "negative" entries until a new "positive" one appears. Since $\alpha/\pi$ is irrational, zeroes do not happen. The sequence is infinite because the set $\{\alpha_k\}$ is dense in $(-\pi,\pi)$.
Which entries are "positive" depends on number-theoretical properties of the number $\alpha/2\pi$. More precisely, how many "positive"/"negative" ones follow in a row is determined by the expansion of this number into a continued fraction. In our case, this expansion should be aperiodic because the number is probably not a quadratic irrational. (I believe it is transcendental but I am not sure).
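A quick numerical check of this characterization (my own sketch, not part of the answer): computing $\alpha_k = k\alpha \bmod 2\pi$ mapped to $(-\pi,\pi)$ and recording the record-best approximations to 0 of each sign reproduces the list of valid $n$ from the question.

```python
import numpy as np

# n is valid iff alpha_{n+1} is a new best approximation to 0 among entries of its sign.
alpha = np.arccos(0.25)
best_pos, best_neg = np.inf, -np.inf
valid = []
for k in range(1, 1002):
    a = (k * alpha + np.pi) % (2 * np.pi) - np.pi   # alpha_k mapped to (-pi, pi)
    record = False
    if 0 < a < best_pos:
        best_pos, record = a, True
    elif best_neg < a < 0:
        best_neg, record = a, True
    if record and k > 1:                            # k = 1 would give n = 0
        valid.append(k - 1)
print(valid)   # 2, 3, 4, 8, 13, 18, 23, 42, 61, 80, ...
```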
Very nice! I was writing out an argument by showing explicit formulae for the entries of the vector $A^{-1}1$, but got stuck at writing sufficient conditions of positivity because of too many Chebyshev polynomials :-) – Suvrit Sep 11 '12 at 8:31
Very, very nice. Thanks a lot! – Stefan Steinerberger Sep 11 '12 at 9:33
e68d92982a8f1eac | Problems with Bohmian mechanics
(also called the de Broglie-Bohm interpretation of quantum mechanics)
This is the most popular interpretation of quantum physics in terms of hidden variables, namely the "positions of particles". These hidden variables behave in a "deterministic" way that, in order to fit the EPR paradox, must admit non-local (instantaneous, faster-than-light) interdependence between these positions throughout the universe, with respect to a supposed absolute time, i.e. a simultaneity relation (a partition of the universe into space-like 3D slices), which is hidden too.
There are several big problems with this interpretation.
Problem 1 : breaking relativistic invariance
Compatibility troubles with General Relativity
Bohmian mechanics breaks relativistic invariance by requiring the choice of an absolute time, relative to which its laws are processed. This gap is well known, but I did not read much about the full measure of how big this gap is. Indeed, just assuming an initial choice of an absolute frame at the beginning of time, as would fit a description in the framework of Special Relativity, does not suffice, because physical space-time does not (approximately) fit Special Relativity but General Relativity.
And in General Relativity there is no natural way, even once a space-like slice of space-time has been chosen (with no reason for it to possibly be a flat one) as a definition of the "initial time", to determine how space-time will have to be sliced into further "simultaneous" classes of events in the future. If we try the "simplest solution", that is, taking the parameter of time defined as the age (the time elapsed since the Big Bang), so as to define simultaneity as equality of age, this cannot work because it runs into lots of singularities (especially at the centers of planets and stars).
On the other hand, quantum physics itself does not have any fundamental incompatibility with the symmetry principles of General Relativity, as explained by Carlo Rovelli, one of the founders of Loop Quantum Gravity, which is based on the care to fully keep both the founding principles of quantum physics and general relativity.
(They may reply that non-locality and the violation of Lorentz invariance are not just a problem with their interpretation, but a general problem with quantum physics, and more precisely with any interpretation that accepts actual randomness and the selection of an actual outcome of measurements, i.e. any interpretation that is not many-worlds).
It only interprets the Schrödinger equation (i.e. non-relativistic quantum mechanics), not Quantum Field Theory
The Stanford Encyclopedia of Philosophy article about Bohmian mechanics tells at length what a wonderful interpretation it is of the (non-relativistic) Schrödinger wave equation, as if quantum mechanics and the Schrödinger equation were the same thing. I'm afraid they are not taking the full measure of the fact that the core of the argumentation is beside the point. As they only mention later, in a last section, the real big issue is where this interpretation fails:
The theory of quantum physics that is currently well-established as the proper description of physical reality, and that begs for an interpretation, is NOT the Schrödinger wave equation, but the full quantum theory, that is, Quantum Field Theory. Indeed, without this extended framework it is not even possible to account for the possibility for an atom to absorb or emit a photon (while switching between energy levels), an event which actually occurs quite often! Seriously, what a terrible subset of known physics is this little Schrödinger equation they are so proud of better explaining or visualizing in their way!
And adapting Bohmian mechanics to interpret quantum field theory is far from obvious. As replied to a question: De Broglie- Bohm Quantum Theory.
This article (Jul 2004), admits in the conclusion that "We leave open, however, three considerable gaps: the question of the process associated with the Klein–Gordon operator, the problem of removing cut-offs, and the issue of Lorentz invariance."
A detailed review of the situation can be found in Wallace's article The Quantum Measurement Problem: State of Play, section 7 (Relativistic quantum physics); there is a more recent article supporting Bohmian quantum field theory (2011).
And even if a candidate Bohmian Quantum Field Theory is offered, it remains to check that it will avoid the trouble with the treatment of randomness that is explained below.
Problem 2 : the nonsense of deterministic randomness (like in classical deterministic chaos)
Of course the unfalsifiable character of this theory (the ineffectiveness of assuming deterministic causes from hidden variables that cannot be checked) is clearly on the table, in the sense that it is "only an interpretation" without any different prediction (it is the same prediction: pure randomness). However, I will comment here in detail how deep I see this problem.
In short : Bohmian mechanics claims to be a deterministic theory, but I see no way in which this supposed quality of "determinism" can make any meaningful sense : even if true it would still just be an empty box that does not provide any effective answer about whether the world is deterministic, probabilistic, or even... divinely guided.
The same remark goes for the classical "deterministic chaos", which many philosophers assume to be a form of determinism (strangely still often referring to it when discussing determinism, though classical mechanics is outdated as a fundamental description of the universe).
Indeed, classical mechanics is essentially of the same kind as Bohmian mechanics, in the sense that their quality of "determinism" is illusory for the same reason, that is explained below.
Why the "determinism" of classical mechanics (as a purely theoretical concept disconnected from this world) is an empty concept
Even a hypothetical universe totally described by some "deterministic" classical physics, can be said to produce absolutely random phenomena too, though leaving an explanatory gap in the nature of this randomness.
Giving a real number between 0 and 1, is essentially the same as giving an infinity of binary digits, that constitute the binary expansion of that number.
The computation of a continuous function of a number, can be made in successive approximations as computations from the unlimited data of its binary digits (taken in finite amount, as many as needed for the required accuracy of the result).
This way, a reasoning about a continuous variable, can be equivalently expressed as if it were about a potential infinity of discrete variables.
The reality assumed by classical mechanics, as well as the hidden variables assumed by Bohmian mechanics, are continuous variables. Thus, they are equivalently expressible as an infinite series of discrete variables (to be progressively explored).
Consider a physical system whose initial state is described by some quantity x=5.7843.....
After some chaotic processes, it arrives at a final state described by another quantity y that we can measure up to 5 decimals. The problem is that the determination of these 5 first decimals of y out of the exact value of x must take account of, say, the first 1,000,010 decimals of x, as even a change in the millionth decimal of x may modify the value of y by several units.
In this situation, how can you meaningfully claim that "the first 5 measured decimals of y are not fundamentally random" ?
Indeed, even if a continuous variable is conceived as "well-determined", in practice it only means that the first few digits are well-determined. For all practical purposes, beyond a certain scale of details, all the following digits, down to infinity, behave pretty much like purely random data.
So, as an excellent physical approximation, we can "really" consider the first decimals of the final y as "absolutely random", in the sense that, if the first thousand decimals of x are [whatever], it will remain an excellent physical approximation to qualify its next thousands of decimals as "absolutely random".
Finally, a classical mechanistic chaotic system is quickly full of "absolute" randomness because the parameters of the initial state contain an infinity of digits of precision that, after the first few ones, necessarily turn out to be "absolutely random" for all practical purposes; and these random digits intervene in macroscopic behavior very quickly.
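To make the digit argument concrete, here is a minimal sketch (my own illustration, not from the original text, using the doubling map x → 2x mod 1 as the simplest chaotic map, with exact rational arithmetic so no floating point noise is involved): two initial conditions that agree on roughly the first thirty decimals end up macroscopically different.

```python
from fractions import Fraction

# Doubling map x -> 2x mod 1: after N iterations the leading digits of the
# output are controlled by the N-th binary digit of the initial condition.
def doubling(x, steps):
    for _ in range(steps):
        x = (2 * x) % 1
    return x

x1 = Fraction(5784321, 10**7)          # a "measured" initial condition
x2 = x1 + Fraction(1, 2**100)          # differs from x1 only at the 100th binary digit (~30th decimal)
a, b = doubling(x1, 99), doubling(x2, 99)
print(float(a), float(b))              # outputs differ by 0.5 (mod 1): macroscopically different
```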
One might react by assuming that there is a fundamental difference between the "practical" and the "essential" versions of randomness. Indeed there are cases when this expression "for all practical purposes" can be used to refer to subjective human criteria that are relative, so that these "practical properties" can be dismissed as irrelevant when discussing what is essential. But there are other cases such as the one discussed here, where this distinction between the "essential" and the "emergent" or "practical" properties is effectively blurred - not just hidden but really broken. Such phenomena where essentialism is wrong, as the difference between what is "essential" and what is "practical", actually vanishes, can be found in other concepts, such as
A risk of trouble with Bohmian Quantum Field theories: the divergence of behavior in finite time
This problem with classical "deterministic" chaos might even be worsened by the possibility of a fractal process where the first decimal of a quantity depends on the 2nd decimal of its value 1 second before, which depends on the 3rd decimal 0.5 second before itself, which depends on the 4th decimal of its value 0.3 second before that, and so on, so that finally the number of decimals that it depends on, reaches infinity before a finite time interval.
And if I am not mistaken, this is how things actually happen in fluid mechanics in the macroscopic formalization by partial differential equations (ignoring the behavior of individual molecules).
Now for Bohmian mechanics: even if such a divergence does not occur for the Schrödinger equation, how can we seriously expect to stay safe from such a divergence in any candidate Bohmian interpretation of quantum field theory, including compatibility with renormalization (that is also a sort of fractal process) ?
And even if such a divergence does not come from internal fractal processes, it can also come from the influence of the rest of the Universe, since according to Bohmian mechanics, the evolution of the "position" of a particle cannot be dissociated with what happens in the whole Universe - a form of "determinism" which does not start to make sense before you completely specify a cosmological model (is the Universe infinite, spherical, or something else ?).
How quantum mechanics resolves the nonsense of classical deterministic randomness - a lesson of meaningfulness that Bohmian mechanics tries to reject, but why ?
One "scientific skeptic" having no clue about physics pretended to incarnate modern scientific rationality by proclaiming this very stupid argument: "For all practical purposes, quantum effects can be neglected in macroscopic systems, so the world is classical, thus deterministic".
Yes, but classical (macroscopic) chaotic systems have the butterfly effect (sensitivity to very small differences of initial conditions). You can describe the butterfly effect as consequence of a classical physics which "is deterministic"; however, giving good approximate descriptions of general chaotic behaviors by mathematical models is one thing, but the issue of the "reality" of an underlying determinism in our universe, is another.
The physical fact in this universe, as shown by classical physics itself, is that the visible random results of such phenomena depend on smaller and smaller details of previous states when you trace back the causes to earlier and earlier states of the system. Finally, these details come from microscopic fluctuations, which are subject to quantum indeterminacy. Classical determinism is only an intermediate medium of communication across scales of processes, that displays macroscopic random effects coming from microscopic fluctuations of some earlier time, that come in fact from quantum randomness. No matter the possibility to theoretically deduce the properties of statistical mechanics from deterministic assumptions, the actual physical reality of our universe is that the macroscopic randomness displayed by classical deterministic chaos, as well as the particular thermic randomness with probabilities described by statistical mechanics (with its specific probability law of Boltzmann distribution), has a quantum origin, so that its randomness (and its obedience to the Boltzmann distribution in the thermic case), is exactly as pure and physically "absolute" as quantum randomness is. Indeed a careful examination of the macroscopic consequences of quantum mechanics (with its emergent process of entropy creation that goes on quite quickly) shows that the actually exact (not just the practically knowable or computable) probability law deduced from quantum theory on any possible measurement, quickly converges to the Boltzmann distribution at the given temperature.
Indeed : how can you even dream things to go otherwise in a continuous world ? How can you ever dream to conceive a mathematically well-defined causality law that describes an effect as depending on an infinity of details (an infinite amount of information) that the system contained at an earlier time ? How can you dream such a mathematical law to be an effective causality law at all, either in reality or as a meaningful mathematical prediction tool ?
The answer of quantum mechanics, to avoid the nonsense of a concept of a causality from an infinity of causes (such as the infinity of digits of continuous variables), is this one : there is an absolute limit to the amount of information that is physically present in a given local system, that can causally (physically) influence its future behavior. If you want to measure the details of the system beyond this amount of information that it physically contains, all you will get is newly created, purely random data that did not exist in the initial state of the system.
Still, while the amount of information in a quantum system is limited (finite), it is not a discrete pack of information but a continuous one (that fits with the continuous symmetries of geometry). This paradoxical combination is accomplished by the fact that this continuity is only a continuity in the values of probabilities of results when the system is measured, not a continuity of any "physically present quantity" that can be measured with unlimited accuracy. Thus a rejection of physical realism, as the continuity of the transition between the possibilities for 2 states to be identical or distinct, means that there is no physical reality of which state a system exactly is in. (As I once commented, a third option would be to break the continuous symmetries and opt for a digital universe, but this would leave us with no sense to make of the effective fundamental role of continuous symmetries in the formulation of theoretical physics)
Then, Bohmian mechanics reinterprets quantum randomness, as an unverifiable come-back of the previous concept of randomness, that is, as a classical deterministic chaos. Now, we come to a "deterministic" randomness that is both pure (with exact probabilities and with no means of prediction from any prior measurement) and with purely speculative and unobservable origin.
D. Wallace commented on the trouble with this in section 6.6 of his article; I will comment on it in my way.
The oddity of an intractable deterministic randomness : did you ever see it anywhere else ?
Do you know any other "random" process that once seemed so perfectly random that a specific probability law was formulated for it, which seemed well confirmed by observations but without explanation, until deterministic causes were found that made the exact behavior effectively deterministic, i.e. predictable (or at least gave much better prediction tools than the former probability law)? I cannot think of any.
Of course there are some obvious not-really-examples :
Anything more striking than this ?
Illustrating the problem by a tale
One day I saw an advertising panel for a new product, announced in this way :
The Perfect Random Generator
that Works in a
Completely Deterministic Manner
produced by BomBox, Inc.
Intrigued by such an amazing claim as the claim that perfectly random data can be produced deterministically, I went to a BomBox shop and asked the seller to explain about this new product.
He proudly explained to me why it was so valuable to have made this technological breakthrough, to guarantee that the numbers given by this random generator were indeed produced in a deterministic manner: the reason was, if the process generating these numbers was not deterministic, then we could not be exactly sure where this generated data came from, so that the guarantee of its perfectly unbiased random character would remain doubtful. The deterministic character of the process ensured that no uncontrolled bias could happen, and thus that the result indeed had the exact desired randomness property.
But I wanted to understand more: how could such a seemingly paradoxical combination of qualities indeed be implemented in a device? He refused to answer, as this was an industrial secret that could not be disclosed. In the face of my skepticism, he offered me the following guarantee: I would pay a regular monthly price to use it; if ever I discovered any flaw in the working of this box, showing that it didn't have the advertised qualities, he would soon come and fix it for free, or pay me back all that I had paid from the start if he couldn't.
I accepted the deal and took the box with me. At home I started to test it, using a program designed to test randomness and look for patterns in series of digits. I discovered that the output indeed behaved randomly when I was there, but started displaying patterns after I had been absent for some time. I noticed that the box had a little camera that took data from the room around it, to be mixed with its internal computations so as to renew the randomness of its behavior. Indeed, once this camera was completely hidden, the defects of its randomness came back.
I went back to the shop and reported that flaw. The seller offered to change the box, and as a compensation, he also offered me a solar panel to put on my roof to provide the needed electricity for the working of this box and even more uses if I need.
I accepted the deal, and the new box indeed appeared to offer a clearer and more systematic randomness than the previous one.
However I was still puzzled and undertook to examine the new system in more detail. It turned out that the solar panel was hiding a little microphone recording the noise from the street and sending it to the BomBox as a means to reshuffle the internal data of its randomness computations.
Again I came back to the shop and complained that such a method was still an imperfect method of randomness generation, because the output from such a process could not be completely random as this box did not really produce its own randomness but only indirectly transmitted (after some reshuffling) the imperfect randomness of outside events that ultimately determined it, so that the detailed features of these events might still be a source of bias for the resulting so-called randomness.
Again he understood, and he offered to replace again the last BomBox with a new one that would also have the function of internet access.
It worked quite well while I had disabled the flow of data from the solar panel, but I was still wondering how these random numbers could really be generated, so I decided to check the details of the Internet connections it was making. I discovered that, while I was not supposed to be using the Internet connection, the BomBox was still making connections to the servers of the BomBox company, which answered in some visibly random manner. I understood that, once again, the BomBox was not really producing these random numbers on its own, but only transmitting and reshuffling those provided by the Web servers of the BomBox company.
I went again to ask for explanations about this. The seller explained that this procedure did not jeopardize the quality of this randomness, because the Web servers of the BomBox company were themselves perfectly reliable in producing random numbers in a deterministic manner, thanks to the full implementation of the real secret technology of the BomBox company, which the BomBoxes themselves did not contain, for fear that the full secret technology might be leaked to competitors.
I replied that I still needed to check how the BomBox servers were working, to verify both the deterministic and unbiased nature of their random behavior. I was invited to their server room and could inspect the hardware of their servers. They indeed all used standard hardware known to behave deterministically. Still, it wasn't clear if these servers were producing this random output all on their own or were only reshuffling random data from somewhere else; and in the latter case I needed to check that other source too. They promised I could check all this the next week.
The next week, I went back and noticed that these servers themselves received random data from another machine in another room, which I inspected too, and that itself was receiving random data from still another machine, which was also receiving data from somewhere else. I still wasn't satisfied, as I said I wanted to continue my inspection further, without any limit. But I had to wait one more week before continuing the inspections.
Meanwhile, they undertook to install a continuous assembly chain of electronic devices, each one connected by a wire to the next one being produced. The last machine I had inspected was connected to the first produced device coming from a conveyor belt, which was connected to the next one behind it, itself connected to a wire further along the conveyor belt whose end I could not see. Later, as they approached, the last wire appeared connected to a next device, and, as they all approached, this kept being repeated over and over again with more and more such devices, each connected to the next one coming from the conveyor belt.
They told me : here you are ! You wanted the right to keep inspecting the devices producing the random data without limit. Now you see where the data comes from: it comes from a device, itself receiving its data from the next device, and so on. We are ready to let you inspect all these devices one after the other without any limit, as they arrive here in this chain from our production factory.
That sounded great, but I was still wondering : how can the last produced device in this chain provide its own random numbers, before receiving any further random data from the next device that will be added later to this chain ? I wanted to inspect the inside of the factory, to check how they were doing. But they refused to do so.
Instead, they gave up. They decided to cancel all transactions and give me back my previous payments.
But not only that, they were visibly fearful that other people might also inquire too much and eventually discover a loophole in their technology, and would come to complain in the same way. Indeed, as a last resort to prevent any such questions on the reliability of their technology from popping up again in the near future, they issued this bold job opening
Superman Wanted
for Super Mission
Contact BomBox, Inc
Then Superman came for this job, and was assigned the following task. He had to study the whole production chain of this series of devices each connected to the next one so as to generate a flow of random numbers towards the first one. Then his task would be to operate this production chain at a divergently increasing speed: he would produce the first copy of the device in one hour, the next one in half an hour, the third one in 15 minutes and so on, so as to finally get an infinity of them to be produced until the end of the second hour. Of course there was not enough place to collect that infinity of produced devices in this factory, but Superman would be able to complete his mission in outer space.
He accepted this mission, and started this production inside this factory until the last second of the second hour. At the last second, he completed his mission while skyrocketing up to the end of the Universe, leaving that seemingly infinite chain of devices hanging from the sky behind him.
Millions of people were amazed at that spectacular operation.
Then, as an effect of that demonstration of how an endless generation of perfectly random digits could indeed purely come out of that (infinite) chain of devices all behaving in a completely deterministic manner (without help from any other input), the profits of the BomBox Company started skyrocketing too. But I decided to forget about this random generation service from then on, letting other people believe in this method if they liked.
Ten years later, I happened to see in the news that the BomBox Company had gone bankrupt. Intrigued by this news, I searched for further information, to understand how such a pitiful end could happen to this powerful company after the spectacular success it previously had. Here is what happened.
After Superman completed his mission, rumors started circulating on where he might have finally ended up at the very last moment, when reaching the end of the Universe. Some people simply assumed that the Universe was actually infinite, so that he could really complete the production of this chain of devices to an infinite length, while some other people speculated that, instead of this, he finally reached a border of the Universe very far away and met God there.
Based on the latter idea, a new cult emerged, claiming that, by performing some sorts of ceremonies, God might respond by carefully designing the data He would send to the very last link of this chain of devices, the one which was under His control, and this way finally influencing the supposedly random data of some BomBoxes around the world so as to serve some specified purposes.
Some time later, this cult claimed to have reached some success in its activities, pointing out, for example, that some of its members had won huge prizes in lottery games. After detailed examination it was not exactly clear whether all these winners had already belonged to the cult before their win, or whether some had only been enrolled afterwards in order to gain some fame (and also to benefit from the cult leader's offer to let them meet many of the cult's female members) by pretending to have been members all along. Still, the rumor could not be stopped: a wind of panic started blowing among BomBox users, who feared that, after all, the data generated by their BomBox might not be as reliably deterministic and unbiased as they had been told, and decided to give it back and not use it any longer.
Problem 3: Is it really worth saving Physical Reality at the expense of real physics?
In short: the Bohmian picture does not fit with the proper, simple understanding of QM (especially of spin) and its link to classical physics.
References:
Guest post on Bohmian Mechanics, by Reinhard F. Werner, with a long discussion.
Are Bohmian trajectories real? On the dynamical mismatch between de Broglie-Bohm and classical dynamics in semiclassical systems, Matzkin, A. and Nurock, V. (2007)
Bohmian mechanics, a ludicrous caricature of Nature by Luboš Motl
Related arguments:
Problem 4: Under-determinations of the theory
In short: there is not one Bohmian mechanics, but several possible versions, and no way to choose between them, except as a matter of taste.
See in D. Wallace's report "The Quantum Measurement Problem: State of Play", section 6.5 (p. 60)
Both of the last two problems are also addressed in this preprint: The Bohmian interpretation of quantum mechanics: a pitfall for realism, Matzkin, A. and Nurock, V. (2004)
Problem 5: a many-worlds interpretation in disguise - a terrible definition of "existence"
In the Artificial Intelligence theory of consciousness, the condition for an intelligent being to "really exist" as a conscious being is that of "being effectively computed by physical processes". In Bohmian Mechanics, the introduction of a pointer to select which world is supposed to "exist", while other worlds remain "non-existing", does not change the fact that the continuing evolution of the wavefunction without collapse still constitutes a real physical computation of the alternative worlds, with all the brains they may contain, and therefore a way of still giving "real existence" to the minds they contain.
We can further expand this argument as follows:
How could a melody exist, not just as a succession of sounds but indeed as a melody, without somebody to hear it?
In the same way goes the famous "hard problem of consciousness": how can a thought exist, not just as a computation but as something actually felt, in the absence of a non-physical soul inside the brain to feel what the brain is computing?
By itself, the physical presence of a brain performing some computation is nothing more than a mathematical pointer "physically given" to that specific computation, which gives it the quality of being "physically computed" as opposed to other possible computations. But the other possible computations, which do not receive this physical pointer, still mathematically exist, don't they? How does the event of "physically" putting a pointer on a specific mathematical computation give that computation a quality of "existing" any more than any other possible computation? You can arbitrarily decide to take this pointer as a definition of "conscious existence" for the whole computation (for this computation by a physically existing brain to constitute a mind), assuming it makes sense to speak of the "physical existence of a global computation", that is, of a long series of elementary computations happening "together", while every elementary step of this computation is repeated many times here or there (but "not together") in the physical world.
But if this pointer is the definition of what "existence" means, then how can the name "existence" still also mean what it was supposed to mean?
In the same way, Bohmian mechanics introduces the assumption of an arbitrary mathematical pointer, namely the hidden variables, that points to a specific world inside the many-worlds landscape, and then claims: this pointer is the definition of "physical existence" for that specific world. But if all we have is an arbitrary mathematical pointer to a specific mathematical structure (a world) inside a mathematical landscape of possible such structures (the many-worlds), then how can this pointer constitute the definition of "existence" for that specific world? All the more so because the physical law governing this specific world (the evolution equation of the hidden variables) is expressed as depending on the wavefunction, which is the many-worlds landscape itself, and thus requires this many-worlds landscape to already exist.
Reference: Solving the measurement problem: de Broglie-Bohm loses out to Everett
Problem 6 (synthesis of all the above): the Universe looks like a conspiracy
By which metaphysical accident did the laws of Nature happen to take the exact shape of this sort of incredible conspiracy, endorsing so exactly some extraordinary effective properties in sharp discrepancy with the properties of their underlying ontological causes?
Listing these extraordinary properties in the same order they were listed above as "problems":
1. The world everywhere perfectly obeys relativistic invariance in all its many effective processes: no possible experiment can ever detect any fault in this invariance (by measuring "which is the right frame"); this invariance turns out to be one of the main pillars of the understanding of all physics, from gravitation to particle physics, while no such invariance exists in the deep causes of all this.
2. Despite the fundamental necessity for the Universe to be deterministic, there precisely appears one clearly best predictive theory (ordinary Quantum Mechanics) expressed in the form of a very elegant and convenient mathematical structure of probability law (better than that of underlying causes), that cannot be turned around by any means (the source of randomness is absolutely hidden from any possible investigation).
3. The effectively relevant concepts to conveniently understand practical phenomena (the diverse observables of quantum physics and how concepts of classical physics come as their approximations) look very different from the shape of their real causes ("trajectories"....).
4. Specifications of the exact structure of the deep causality laws cannot be investigated by experiments.
5. The only thing that looks real for all practical purposes (the wavefunction, that has to be part of the equations to govern the evolution of "particle positions" but is not affected by them in return), is in fact a "totally unreal" thing (all the other worlds in the many-worlds picture it contains are "totally unreal"), while only the particle positions (which behave as mere hidden variables, totally unreal for all practical purposes), have the metaphysical quality of "real existence".
Is the Universe schizophrenic, or what?
Links to other arguments and articles about Bohm's interpretation
Sites supporting Bohmian mechanics:
A blog article with discussion : Quick Impressions of Bohmian Mechanics
In Physics Stackexchange:
Why do people still talk about Bohmian mechanics/hidden variables
What is wrong with the De Broglie–Bohm theory a.k.a “Causal Interpretation” of quantum theory?
Quora: Why don't more physicists subscribe to pilot wave theory?
A former proponent's view: "Note that I spent myself a lot of time with the Bohmian interpretation before I rejected it as superficial, essentially for the reasons given by Werner. It didn't add any understanding but wasted a lot of my time"
On some early objections to Bohm’s theory
Bohmian mechanics listed among "lost causes in theoretical physics" by R. F. Streater
How is quantum wave collapse a more reasonable concept than pilot waves?
An article in Wired followed by a discussion
Bohm's Ontological Interpretation and Its Relations to Three Formulations of Quantum Mechanics
A discussion focusing on Bohmian mechanics (and secondarily, on many-worlds)
Paul Dirac's forgotten quantum wisdom. In this article, Luboš Motl speaks about "deluded pseudoscientists – from David Bohm to dozens of nameless crackpots".
It's been a tough week for hidden variable theories
Other hidden variables interpretations
Stochastic interpretation
Solipsistic hidden variables interpretation
Related pages
Introduction to quantum physics, that provides a quick and clear initiation, useful for those not familiar with it yet
Main page of arguments on quantum physics interpretations
The Many-Worlds interpretation
Mind Makes Collapse interpretation of quantum physics |
e1e127d00205d042 | The Molecular Universe
We have emphasized the unique model building power of contemporary computers. How can we exploit this power to learn about the behavior of atoms and molecules? Two broad strategies are available to contemporary scientists. In the first, we attempt to solve the Schrödinger equation - to learn in detail about the distribution and energies of the electrons in the molecule; and by calculating the energy of the molecule for a variety of geometries, we can try to predict the lowest energy structure. Remember that the Schrödinger equation can be solved exactly only for the hydrogen (and other one electron) atoms. And there exist a whole range of approximate methods for solving the equation for more complex systems. The best of these can now achieve highly accurate descriptions for molecules (and solids) of increasing size and complexity.
Here we see a molecule produced in nature, caffeine; the image below shows the calculated electron density for the important mineral MgSiO3. In both cases the computer provides a detailed understanding of the distribution of electrons, which governs the molecule's or material's properties.
The electron density of a caffeine molecule.
Electron density in MgSiO3.
Indeed, calculations on molecules such as caffeine and crystals such as MgSiO3 are playing an increasingly important role in understanding the behavior of nature's molecules and of the materials from which the Earth is constructed.
These methods yield an enormous wealth of information. They allow us to understand how electron density is redistributed when atoms form molecules - the process which, as we have seen, is at the very heart of chemical bonding. In addition, from our knowledge of the density of electrons we can calculate the electrostatic potentials around molecules - a crucial quantity which controls the way in which the molecule interacts with other surrounding molecules. The image on the left shows the electrostatic potential on a plane through a water molecule (left) and methanol molecules (right).
The electrostatic potential about H2O and CH3OH.
With the continuing growth of computer power, calculations of the distribution of electrons in molecules and hence of molecular structures and properties, will extend to increasingly large and complex molecules and solids. However, even with the largest, most sophisticated computers available today (and in the foreseeable future), such methods will be limited as the amount of computer time and memory increases rapidly with the size of the system studied. But for many of the problems which are posed by highly complex molecules or materials, we can use alternative simpler procedures, based on an old concept in chemistry and physics known as the interatomic potential. We can understand this simple idea by reference to the diagram on the left showing schematically how the energies of a pair of atoms vary with their distance apart (more accurately, their internuclear spacing). When the two atoms are close to each other, their nuclei will repel strongly, as will the 'core' electrons. The energy will therefore begin to increase very rapidly as seen in the diagram. Conversely, when they are a long way apart they will attract each other, due to chemical bonding or the weak 'non-bonding' interactions. The variation of energy with distance will therefore be expected to have a minimum as shown in the diagram.
Interatomic potential schematic. The separation of two atoms is shown on the x axis, the potential energy of the atom pair is plotted on the y axis.
Over the last fifty or so years, scientists have built up detailed information on such interatomic potentials for a wide variety of atom pairs. Moreover, the concept can be extended to larger numbers of atoms. We can learn how the energy of not just two, but three or four or larger numbers of atoms depend on their relative positions (although it may be more difficult to represent these diagramatically). The source of this information is first experiment: the properties of molecules and larger aggregates of atoms depend on the ways in which the atoms interact; and the experimental data may therefore be inverted to yield interatomic potentials. But of course direct calculations of the type described above are, as we have argued, to a growing extent able to give accurate information on the ways in which atoms interact. They are therefore an increasingly reliable source of high quality interatomic potentials.
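To make the idea concrete, here is a minimal sketch (ours, not part of the original text) of one commonly used empirical pair potential, the Lennard-Jones form, evaluated in arbitrary reduced units; the parameter values below are purely illustrative.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential: steep repulsion at short range,
    weak attraction at long range, with a minimum of depth epsilon."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The minimum lies at r = 2**(1/6) * sigma, where the energy equals -epsilon.
r = np.linspace(0.9, 3.0, 500)
v = lennard_jones(r)
print(f"minimum near r = {r[v.argmin()]:.3f}, energy = {v.min():.3f}")
```

The shape of this curve is exactly the schematic described above: a rapid rise at small separations and a shallow attractive well at larger ones.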
A data base of good accurate interatomic potentials is a rich repository of information on matter at the atomic level. And the computer is well suited to the processing of this information content into detailed models of the behavior of molecules and materials. A variety of computational approaches may be used. The simplest and most economical (but very powerful) procedure is to exploit a fundamental principle of nature, that is that systems tend to run down hill. We will discuss in more detail the factors which control the direction in which systems evolve. But if we want to predict the structure of a molecule or a solid, it is normally a valid procedure to search for the lowest energy arrangement of atoms.
This simple approach - often referred to as energy minimization (or molecular mechanics when used specifically for molecules) - is illustrated diagrammatically on the left. We must, of course, start by guessing a structure for our molecule - the starting point, S - and we will consider later how these points are defined. But once the initial configuration of atoms has been defined, the procedure is simple. The atoms are systematically moved around in such a way that the molecule is driven 'down hill' in energy. In practice, this 'minimization' proceeds by a succession of moves or 'iterations', each of which involves a small displacement of some or all of the atoms; the energy of the system is calculated from the data base of potentials, as are normally the forces on each atom, which guide the direction and magnitudes of the atomic displacements. The calculation ends when a minimum is located; that is, a point is reached (like M on the left) which represents the lowest energy, and any displacement from this point will raise the energy.
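As a toy illustration of the procedure just described (a sketch of ours, not the text's own software), a steepest-descent minimization of the separation of two Lennard-Jones atoms might look like this:

```python
def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force between two atoms, i.e. minus the derivative of the
    Lennard-Jones potential with respect to their separation r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r

# Steepest descent: starting from a guessed separation S, keep nudging the
# atoms "down hill" along the force until the force (the gradient) vanishes.
r = 2.5                     # starting point S, far from the true minimum
step = 5e-3                 # size of each small displacement (iteration)
for iteration in range(100_000):
    f = lj_force(r)
    if abs(f) < 1e-9:       # minimum M located: the force is effectively zero
        break
    r += step * f

print(f"minimized separation r = {r:.4f}  (analytic value 2^(1/6) = {2 ** (1 / 6):.4f})")
```

Real minimizers use cleverer update rules than this fixed small step, but the logic of "move along the forces until they vanish" is the same.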
This simple but powerful method can work remarkably well. On the left we show the minimization trajectories of, in the first case, a simple peptide and, in the other, a crystal structure, both of which have been successfully energy minimized. In both cases, the minimized structure is an accurate model of reality; and in each the 'starting point' is remote from the final minimum.
Energy minimization and its effect on the conformation of a peptide molecule.
Adjusting the calculated structure of a benzene crystal to match a powder diffraction pattern.
Energy minimization is now a standard tool of the computational chemist. The technique is, however, limited. There are fundamental limitations arising from the whole basis of the concept: the energy minimum is often a good guide to the structure of a molecule or crystal; but the science of thermodynamics tells us that other factors must be considered in a detailed theory of the stability of matter at the atomic level. A second (and related) limitation arises from the use of purely 'static' models. No account is taken of the fact that atoms are in constant motion; and the dynamical behavior of molecules influences their structures and stabilities. Then there may be serious practical limitations, of which possibly the most important is illustrated above in the energy surface schematic. Here we illustrate the course of a minimization which starts from 'S' and ends at the point 'ML'. This is indeed a minimum, but it is not the lowest one; for if we run 'up hill' a little and pass over barrier B, we will move down into the lower minimum, 'M'. The minimum 'ML' is known as a 'local minimum'. And calculations starting from any point in the valley around this minimum will end up at this point. In complicated molecules (like those in biological systems) or solids, there may be many hundreds of local minima, and the task of finding the lowest energy or 'global' minimum becomes increasingly difficult. Many different starting points must be sampled, and even then there is no guarantee that the global minimum will have been located. Minimization, therefore, although widely used by the computational scientist, is often only the first stage of a computational study of matter at the atomic level.
The next level of sophistication remedies one of the major deficiencies of minimization calculations. It represents the dynamic nature of matter at the atomic level.
The conceptual basis of the molecular dynamics technique is again simple. In the system to be simulated - for example, the molecules, cluster of molecules or crystal - we assign all atoms not just positions but velocities; and the latter are chosen with a target temperature in mind. We then allow the system to evolve in time subject to the forces acting on all the atoms (which can be calculated using the interatomic potentials). Normally, 'classical' mechanics is used; that is, the equations of motion first formulated by Newton are applied to the dynamics of atoms. This classical approach works surprisingly well for all but the lightest of atoms; and more sophisticated 'quantum' methods are available when, for example, the dynamics of hydrogen atoms are modeled.
Let us look, however, in a little more detail at the way in which simulations work. In many ways they work like a movie: they consist of a succession of snapshots closely spaced (in time) which when run together create a representation of continuous motion, as shown schematically on the left, where we consider four atoms: each has a position and a velocity at the 'first frame'.
Schematic illustration showing molecular dynamics time steps. The last illustration on this page shows snapshots from a molecular dynamics simulation of liquid water.
We now allow time to move forward a small amount (Δt) to the second frame. Since we know how fast the atoms are moving, it is straightforward to work out their new positions in the new frame; they will simply have moved by their velocity times Δt. But we also know the forces acting on the atoms, which we can work out from our knowledge of the interatomic potentials. And if we know the force, Newton's celebrated Second Law of motion tells us we can work out the accelerations - that is, the extent to which the velocities are changing with time - so we can calculate the new velocities in the new frame. We then move on to the next frame and the next; and in a real simulation tens, if not hundreds, of thousands of frames will be used in creating a dynamical record of the system.
The choice of the time step (Δt) is, of course, crucial. If it is too long, it will give a simulation which is like an old-style 'jumpy' movie. And indeed the criterion for the choice of time step (Δt) is very similar to that used in choosing the time lapse between frames in movies. It must be short compared with the time of any important process in the system: in movies this may be the time to kick a ball or to pull and fire a gun. In the molecular world, it is the time taken by a molecule to vibrate or rotate. The time taken to kick a ball is roughly a second or so; and the time between frames in movies is a hundredth to a thousandth of a second. Molecules have different time scales: they vibrate and rotate in periods of around a million-millionth (10⁻¹²) of a second. The time steps must be a hundred or a thousand times smaller than this (i.e. 10⁻¹⁵ seconds). So if we collect a million time frames (roughly the limit of current calculations), we can watch how our simulated system evolves over a thousand-millionth (10⁻⁹) of a second. But a lot can happen in molecules and materials during this period!
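The frame-by-frame bookkeeping just described can be captured in a few lines. The following sketch (an illustrative toy of ours, in arbitrary reduced units, not the text's production code) advances two Lennard-Jones atoms exactly as outlined: positions by velocity × Δt, velocities by acceleration × Δt.

```python
import numpy as np

def pair_force(r, epsilon=1.0, sigma=1.0):
    """Force along the line joining two Lennard-Jones atoms a distance r apart."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r

x = np.array([0.0, 1.5])    # positions at the first frame (reduced units)
v = np.array([0.0, 0.0])    # velocities at the first frame
dt = 0.005                  # the time step Δt

for frame in range(2000):
    f = pair_force(x[1] - x[0])
    a = np.array([-f, f])   # accelerations from Newton's second law (mass = 1)
    x = x + v * dt          # new positions: old position + velocity × Δt
    v = v + a * dt          # new velocities: old velocity + acceleration × Δt

print(f"separation after {frame + 1} frames: {x[1] - x[0]:.3f}")
```

Production codes use slightly more careful integrators (such as velocity Verlet) for better long-time stability, but the work done per frame is exactly as described above.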
We show three typical dynamics trajectories below: the first relating to the biological molecule oxytocin. The second shows the trajectories of ions moving in a solid electrolyte at high temperatures. The migrating ions carry electric charge and hence this material is a good conductor of electricity - a fact which has stimulated interest in its use in advanced batteries. Finally we show an animation of successive frames from a simulation of water which shows that the position of water molecules rapidly change in liquid water.
A molecular dynamics trajectory calculated for the hormone oxytocin.
A molecular dynamics trajectory showing the tracks of lithium ions in a conducting polymer.
Snapshots from a simulation of liquid water. One of the molecules is shown in yellow to highlight its trajectory through the course of the simulation.
Molecular dynamics has proved to be an enormously productive technique for the computational chemist, physicist and biochemist. The method has been applied to an extraordinary variety of systems - proteins, pharmaceuticals, industrial polymers, complex crystals, solids, glassy materials and a huge range of liquids. Simulations using this method yield a rich range of information on both structures and dynamics of the system simulated. Indeed, molecular dynamics has often been referred to as a 'molecular microscope' which yields details on the behavior of matter at the atomic level which are inaccessible from experiment. Again, however, the method is limited. We have already discussed the time scale of the simulations - at most 10⁻⁹ seconds even with the biggest, most powerful modern computers. So the processes we are interested in must take place within this time. And although this is long on the molecular time scale, important events may often not be sampled in this period. Two typical examples relate firstly to the 'relaxation' and reorientation processes in polymers, which exert a crucial influence on the dynamics of these systems and which may have time scales in the range 10⁻¹² to 10¹ seconds; and secondly to diffusion in solids, which (although in some cases relatively fast, as shown on the left) is normally slow, taking place by infrequent 'hops' of atoms between different sites in the solid. For the size of system simulated (which is discussed in greater detail below) few, if any, such jumps will take place over the time scale of a molecular dynamics simulation. So phenomena like corrosion, which depend on slow diffusion processes in solids, cannot be usefully investigated by this technique; although it turns out that simpler methods, akin to the energy minimization approach, but in which we chart the change in energy of a migrating atom as it moves between two sites, are effective for probing atomic dynamics in these systems.
A related but distinct point concerns the size of the simulated system. The simplest of statistical considerations suggests that the larger the number of atoms in the simulation, the greater will be the probability of observing events (such as the hops of atoms between sites discussed above). Modern simulations are performed typically on thousands of atoms (although calculations on millions of atoms are becoming increasingly common). In simulating solids, which extend indefinitely in three dimensions, an ingenious strategy is used. The group of atoms being simulated (often referred to as the simulation box or cell) is surrounded by images of itself that extend to infinity, as illustrated schematically (in two dimensions) on the left. A finite group of particles can therefore be made to represent an infinite system. In the case of crystals, the periodicity that this procedure imposes - the contents of all the boxes are identical - may correspond to reality since, as we will see, the defining feature of crystals at the atomic level is that the arrangement of atoms is periodic. For liquids and glasses this periodicity is artificial; but the procedure is nevertheless useful.
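The bookkeeping behind those periodic images is usually handled with "wrapping" and the "minimum image" rule; a small sketch of ours (with an arbitrary box size, purely for illustration):

```python
import numpy as np

box = 10.0  # edge length of the cubic simulation box (arbitrary units)

def wrap(position):
    """Map coordinates back into the primary box: an atom leaving through
    one face re-enters through the opposite face."""
    return position % box

def minimum_image(r_ij):
    """Shortest separation vector between two atoms, allowing for the
    periodic images of the box."""
    return r_ij - box * np.round(r_ij / box)

a = np.array([9.5, 0.2, 5.0])
b = np.array([0.3, 9.9, 5.0])
print(wrap(a + np.array([1.0, 0.0, 0.0])))   # re-enters at x = 0.5
print(minimum_image(a - b))                   # the two atoms are close via the images
```

Forces and energies are then computed from these minimum-image separations, so the finite box behaves as if it were embedded in an infinite, periodic medium.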
Other computational methods will be discussed at appropriate points later on. We hope, however, that the discussion above has shown how the computer is a uniquely powerful tool in constructing models for matter at the atomic level. The horizons of the field of computational studies of matter have expanded rapidly in the last ten years. Developments in both hardware and software have played their role in this spectacular growth of the subject.
We have concentrated on the model building power of computers at the microscopic level. Their unique capabilities are also being exploited to model engineering, global and cosmic systems. We can extend our understanding of matter at the atomic level by exploring the properties and behavior of gigantic numbers of atoms and molecules. |
e32bf451da17d890 | The Classical and Quantum Mechanics of a Thin Ring Spinning About Two Axes
San José State University
Thayer Watkins
Silicon Valley
& Tornado Alley
The Classical and Quantum Mechanics
of a Thin Ring Spinning About Two Axes
The Classical Analysis
Consider a thin ring of mass M and radius R. (Here thin means that it is a line.) The linear mass density of the ring is ρ=M/(2πR). The moment of inertia I1 for spinning about an axis perpendicular to the plane of the ring which passes through the ring center is MR². There is another spin of a ring which is like the flipping of a coin. This is a rotation about an axis which is a diameter of the ring. The moment of inertia for this type of spinning is given by
I2 = ∫(R·cos(θ))²ρRdθ = ρR³∫cos²(θ)dθ
where the integration is from 0 to 2π. The integral of cos²(θ) over a full period is π, and hence
I2 = πρR³ = (2πRρ)R²/2 = ½MR².
This flipping rotation is of particular interest for the structure of nuclei. Experimental measurements indicate that nuclei have at least approximately spherical shapes. If a circular band of nucleons rotates in this flipping fashion the dynamic appearance of the nucleus would be that of a sphere.
A spin can also take place about the diameter perpendicular to the one considered above. The moment of inertia about that axis is the same as I2. Including it would unnecessarily complicate the analysis, so it is not included now.
The angular momentum of the ring spinning at an angular rate of ω about the axis perpendicular to its plane is L=MR(Rω)=MR²ω. This means that
ω = L/(MR²)
and hence
½I1ω² = ½MR²(L/(MR²))² = L²/(2MR²)
For the spin about a diameter the angular momentum Λ is found as follows.
Λ = ∫(R·cos(θ))(R·cos(θ)Ω)ρRdθ = ρR³Ω∫cos²(θ)dθ
with the integration being from 0 to 2π,
which reduces to
Λ = πρR³Ω = π(M/(2πR))R³Ω = ½MR²Ω
and hence
Ω = 2Λ/(MR²)
½I2Ω² = ½(½MR²)(2Λ/(MR²))² = Λ²/(MR²)
The kinetic energy of the spinning ring is then
K = L²/(2MR²) + Λ²/(MR²)
and there is no potential energy. This means that the ring will continue spinning at the rates ω and Ω indefinitely. These rates can have any real values. This is the classical solution.
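As a quick numerical sanity check of these moments of inertia (an illustrative snippet of ours, not part of the original page), one can discretize the ring into point masses:

```python
import numpy as np

M, R = 1.0, 1.0
N = 100_000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dm = M / N                                      # equal point masses around the ring

I1 = np.sum(dm * R**2 * np.ones_like(theta))    # every element lies a distance R from the perpendicular axis
I2 = np.sum(dm * (R * np.cos(theta))**2)        # distance from a diameter is R·cos(θ)

print(I1)   # ≈ 1.0 = M R²
print(I2)   # ≈ 0.5 = M R² / 2
```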
The Quantum Mechanical Analysis
To get the quantum mechanical solution the energy is expressed as
E = ½I1(dθ/dt)² + ½I2(dφ/dt)²
The momentum associated with θ is ∂E/∂(dθ/dt), which is I1(dθ/dt). Let this be denoted pθ. Thus (dθ/dt)=pθ/I1. Likewise the momentum associated with φ is pφ=I2(dφ/dt), so (dφ/dt)=pφ/I2. Therefore the Hamiltonian function for the spinning ring is
H = pθ²/(2I1) + pφ²/(2I2)
and the Hamiltonian operator is
H^ = −(ℏ²/(2I1))(∂²/∂θ²) − (ℏ²/(2I2))(∂²/∂φ²)
and the time-independent
Schrödinger equation is
−(ℏ²/(2I1))(∂²ψ/∂θ²) − (ℏ²/(2I2))(∂²ψ/∂φ²) = Eψ
If it is assumed that the wave function ψ is of the form Θ(θ)Φ(φ) then the above equation becomes
−(ℏ²/(2I1))Θ"(θ)Φ(φ) − (ℏ²/(2I2))Θ(θ)Φ"(φ) = EΘ(θ)Φ(φ)
which upon division
by Θ(θ)Φ(φ) gives
−(ℏ²/(2I1))Θ"/Θ − (ℏ²/(2I2))Φ"/Φ = E
This equation can be expressed as
−Θ"/Θ = (I1/I2)(Φ"/Φ) + 2I1E/ℏ²
The LHS is independent of φ and the RHS is independent of θ. Therefore their common value must be a constant, say k². This means that
Θ"(θ) + k²Θ(θ) = 0
The solution is
Θ(θ) = A·cos(k(θ+θ0))
where A and θ0 are constants. By proper choice of the coordinate system θ0 can be made equal to zero. Then k(2π) must be an integral multiple of 2π. Therefore k must be an integer.
The other equation is
(I1/I2)(Φ"/Φ) + 2I1E/ℏ² = k²
or, equivalently
Φ" + (2I2E/ℏ² − (I2/I1)k²)Φ = 0
From the previous case it is found that the coefficient of Φ must be a squared integer, say q²; i.e.,
2I2E/ℏ² − (I2/I1)k² = q²
or, equivalently
E = ℏ²k²/(2I1) + ℏ²q²/(2I2)
Thus the energy of the spinning ring is quantized: it is a sum of terms proportional to squared integers, one for each mode of spin, each weighted by the inverse of the corresponding moment of inertia. The wave function then has the form
ψ(θ, φ) = cos(kθ)cos(qφ)
for 0≤θ≤2π
and 0≤φ≤2π
This means that the probability density function P(θ, φ) is given by
P(θ, φ) = cos²(kθ)cos²(qφ)
When rotation about the other diameter is also included, the quantized energy involves three such squared-integer terms, one for each axis.
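A small illustrative calculation of the resulting level spectrum (our own sketch, using the corrected moments of inertia above and an arbitrary choice of ring mass and radius):

```python
from itertools import product

hbar = 1.054_571_817e-34   # J·s
M, R = 1.0e-26, 1.0e-10    # an arbitrary illustrative ring: ~10 u of mass, 1 Å radius
I1 = M * R**2              # spin about the axis perpendicular to the ring plane
I2 = 0.5 * M * R**2        # spin about a diameter

def energy(k, q):
    """Quantized energy for integer quantum numbers k and q."""
    return hbar**2 * (k**2 / (2 * I1) + q**2 / (2 * I2))

levels = sorted({energy(k, q) for k, q in product(range(4), repeat=2)})
for e in levels[:6]:
    print(f"{e:.3e} J")
```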
|
c4b3f01201b69b33 | Psychology Wiki
Quantum chemistry
This article is a historical introduction to the theoretical concepts of quantum chemistry. For information on computational methods in chemistry and more recent and/or technical aspects of quantum chemistry, see computational chemistry. For theoretical concepts related to chemistry see theoretical chemistry.
Quantum chemistry is a branch of theoretical chemistry, which applies quantum mechanics and quantum field theory to address issues and problems in chemistry. The description of the electronic behavior of atoms and molecules as pertaining to their reactivity is one of the applications of quantum chemistry. Quantum chemistry lies on the border between chemistry and physics, and significant contributions have been made by scientists from both fields. It has a strong and active overlap with the field of atomic physics and molecular physics, as well as physical chemistry.
Main article: History of quantum mechanics
ε = hν
Electronic structure
Main article: Computational chemistry#Electronic structure
The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian. This is called determining the electronic structure of the molecule. It can be said that the electronic structure of a molecule or crystal implies essentially its chemical properties.
Wave model
The foundation of quantum mechanics and quantum chemistry is the wave model, in which the atom is a small, dense, positively charged nucleus surrounded by electrons. Unlike the earlier Bohr model of the atom, however, the wave model describes electrons as "clouds" moving in orbitals, and their positions are represented by probability distributions rather than discrete points. The strength of this model lies in its predictive power. Specifically, it predicts the pattern of chemically similar elements found in the periodic table. The wave model is so named because electrons exhibit properties (such as interference) traditionally associated with waves. See wave-particle duality.
Valence bond
Main article: Valence bond theory
Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927. Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the Valence-Bond (VB) [or Heitler-London-Slater-Pauling (HLSP)] method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds.
Molecular orbital
Main article: Molecular orbital theory
An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund-Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptional basis of the Hartree-Fock method and further post Hartree-Fock methods.
Density functional theory
Main article: Density functional theory
The Thomas-Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory. Though this method is less developed than post Hartree-Fock methods, its lower computational requirements allow it to tackle larger polyatomic molecules and even macromolecules, which has made it the most used method in computational chemistry at present.
Chemical dynamics
A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum molecular dynamics, within the semiclassical approximation semiclassical molecular dynamics, and within the classical mechanics framework molecular dynamics (MD). Statistical approaches, using for example Monte Carlo methods, are also possible.
Adiabatic chemical dynamics
Main article: Adiabatic formalism or Born-Oppenheimer approximation
In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born-Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.
Non-adiabatic chemical dynamics
Main article: Vibronic coupling
Non-adiabatic dynamics consists of taking into account the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau-Zener transition. Their formula allows the transition probability between two diabatic potential curves in the neighborhood of an avoided crossing to be calculated.
Quantum chemistry and quantum field theory
The application of quantum field theory (QFT) to chemical systems and theories has become increasingly common in the modern physical sciences. One of the first and most fundamentally explicit appearances of this is seen in the theory of the photomagneton. In this system, plasmas, which are ubiquitous in both physics and chemistry, are studied in order to determine the basic quantization of the underlying bosonic field. However, quantum field theory is of interest in many fields of chemistry, including: nuclear chemistry, astrochemistry, sonochemistry, and quantum hydrodynamics. Field theoretic methods have also been critical in developing the ab initio Effective Hamiltonian theory of semi-empirical pi-electron methods.
See also
Further reading
• Pauling, L. (1954). General Chemistry, Dover Publications. ISBN 0-486-65622-5.
• Landau, L.D. and Lifshitz, E.M. Quantum Mechanics: Non-relativistic Theory (Course of Theoretical Physics, vol. 3), Pergamon Press.
External links
Nobel lectures by quantum chemists
|
aee6c1c8300448ca | Dyson formula
The Dyson formula is an expression for the solution of the Schrödinger equation in time dependent quantum mechanics.
It expresses the parallel transport of the Hamiltonian operator regarded as a Hermitian-operator valued 1-form on the time axis.
In time-dependent quantum mechanics dynamics is encoded in a Lie-algebra valued 1-form
A = i H \, d t \in \Omega^1(\mathbb{R}, \mathfrak{u}(V))
on the real line (time) with values in the Lie algebra of the unitary group on a Hilbert space V.
For t = Id : \mathbb{R} \to \mathbb{R} the canonical coordinate function and d t \in \Omega^1(\mathbb{R}) accordingly the corresponding canonical basis 1-form, the Lie-algebra valued coefficient
i H : \mathbb{R} \to \mathfrak{u}(V)
of A is called the Hamiltonian operator. If H is a constant function, one speaks of time-independent quantum mechanics.
A state of the system is a function
\psi : \mathbb{R} \to V \,.
A physical state is a solution to the Schrödinger equation
d \psi + A \cdot \psi = 0
or equivalently
\partial_t \psi = - i H \psi \,.
This differential equation is the one that defines the parallel transport of A. Its unique solution for given ψ(0) is written
\psi(t) = P \exp\left( - \int_{[0,t]} i H(t') \, d t' \right) \cdot \psi_0 \,.
This is called the Dyson formula. In the special case of time-independent quantum mechanics this becomes an ordinary exponential of an ordinary integral.
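To see the formula in action, here is a small numerical sketch (ours, not part of the entry): the path-ordered exponential is approximated by a time-ordered product of short-time propagators, for an arbitrarily chosen time-dependent 2×2 Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary time-dependent Hermitian Hamiltonian on a 2-dimensional Hilbert space V.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def H(t):
    return sz + 0.5 * np.sin(t) * sx

T, steps = 2.0, 2000
dt = T / steps
psi = np.array([1.0, 0.0], dtype=complex)     # psi_0

# Path ordering: factors at later times act after (to the left of) earlier ones.
for n in range(steps):
    t_mid = (n + 0.5) * dt
    psi = expm(-1j * H(t_mid) * dt) @ psi

print(psi)                      # psi(T)
print(np.linalg.norm(psi))      # ≈ 1: the evolution is unitary
```

For a constant H the loop collapses to the single ordinary exponential expm(-1j * H * T), as noted above.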
Revised on September 2, 2010 19:17:53 by Urs Schreiber |
c389bdf24fc136e7 | What Is Quantum Mechanics Good for?
Physicist James Kakalios, author of The Amazing Story of Quantum Mechanics, wants people to know what quantum physics has done for them lately--and why it shouldn't take the rap for New Age self-realization hokum such as The Secret
What could be weirder than quantum mechanics? This physics framework is responsible for any number of bizarre phenomena—theoretical cats that are simultaneously dead and alive, particles kilometers apart that can nonetheless communicate instantaneously, and indecisive photons that somehow go two directions at once.
But it is also responsible for the technological advances that make modern life possible. Without quantum mechanics there would be no transistor, and hence no personal computer; no laser, and hence no Blu-ray players. James Kakalios, a physics professor at the University of Minnesota, wants people to understand how much quantum mechanics influences our everyday lives—but to do so people must first understand quantum mechanics.
Kakalios sets out to tackle both tasks in The Amazing Story of Quantum Mechanics (Gotham Books, 2010), an accessible, mostly math-free treatment of one of the most complex topics in science. To keep things lively, the author intersperses illustrations and analogies from Buck Rogers stories and other classic science fiction tales. We spoke to Kakalios about his new book, what quantum mechanics has made possible, and how early sci-fi visions of the future compare with the present as we know it.
[An edited transcript of the interview follows.]
Is the purpose of this book to expose this world of quantum mechanics that people find so mysterious and point out that it's everywhere?
That's right. In fact, the introduction is called, "Quantum physics? You're soaking in it!"
There are many excellent books about the history and the philosophical underpinnings of quantum mechanics. But there didn't seem to be many that talked about how useful quantum mechanics is. Yes, the science has weird ideas and it can be confusing. But one of the most amazing things about quantum mechanics is that you can use it correctly and productively even if you're confused by it.
I present in the introduction what I call a "workingman's view" of quantum mechanics and show how if you accept on faith three weird ideas—that light is a photon; that matter has a wavelength nature associated with its motion; and that everything, light and matter, has an intrinsic angular momentum or spin that can only have discrete values—it turns out that you can then see how lasers work. You can see how a transistor works or your computer hard drive or magnetic resonance imaging—a host of technologies that we take for granted that pretty much define our life.
There were computers before the transistor; they used vacuum tubes as logic elements. To make a more powerful computer meant that you had to have more vacuum tubes. They were big, they generated a lot of heat, they were fragile. You had to make the room and the computer very large. And so if you used vacuum tubes, only the government and a few large corporations would have the most powerful computers. You wouldn't have millions of them across the country. There would be no reason to hook them all together into an Internet, and there would be no World Wide Web.
The beautiful aspect to this is the scientists who developed this were not trying to make a cell phone; they were not trying to invent a CD player. If you went to Schrödinger in 1926 and said, "Nice equation, Erwin. What's it good for?" He's not going to say, "Well, if you want to store music in a compact digital format..."
But without the curiosity-driven understanding of how atoms behave, how they interact with each other, and how they interact with light, the world we live in would be profoundly different.
So, to take one example, how does quantum mechanics make the laser possible?
One of the most basic consequences of quantum mechanics is that there is a wave associated with the motion of all matter, including electrons in an atom. Schrödinger came up with an equation that said: "You tell me the forces acting on the electron, and I can tell you what its wave is doing at any point in space and time." And Max Born said that by manipulating this wave function that Schrödinger developed, you could tell the probability of finding the electron at any point in space and time. From that, it turns out that the electron can only have certain discrete energies inside an atom. This had been discovered experimentally; this is the source of the famous line spectrum that atoms exhibit and that accounts for why neon lights are red whereas sodium streetlights have a yellow tinge. It has to do with the line spectra of their respective elements.
But to have an actual understanding of where these discrete energies come from—that electrons and atoms can only have certain energies and no other—is one of the most amazing things about quantum mechanics. It's as though you are driving a car on a racetrack and you are only allowed to go in multiples of 10 miles per hour. When you take that and you bring many atoms together, all of those energies broaden out into a band of possible energies.
The analogy that I use is you have an auditorium with an orchestra below and a balcony above. That means to go from the orchestra to the balcony you have to absorb some energy to be promoted from the orchestra to the balcony. Now if every seat in the orchestra is filled, and you want to move from one seat to another, you can't go anywhere unless you absorb some energy and are promoted up into the balcony, where there are empty seats and you can move around. What happens in a laser is you have a little mezzanine right below the balcony. You get promoted up to the balcony but then you fall and you sit in the mezzanine. And eventually, as the mezzanine gets filled up, there's a bunch of empty seats in the orchestra, where you came from.
One person gets pushed out of the mezzanine, and because of the way they talk to each other, they all go at the same time. They release energy as they fall back from the mezzanine into the orchestra, and that energy is in the form of light. Because they are all coming from the same row of seats in the mezzanine, all the light has exactly the same color. Since they all went at the same time, they are all coherently in phase. And if you have a lot of them up in the mezzanine, you can have a very high intensity beam of single-color light. That's a laser.
And just as Schrödinger couldn't have had any idea about what his equation would be used for, the same could be said of the laser, which now allows us to have CDs and DVDs and a lot of other things.
The same goes for the transistor. It was first developed to amplify radio signals, and you had transistor radios that replaced the vacuum tubes that were being used. Now they are also used as logic elements, 1s and 0s. If you apply a voltage to a transistor you can basically open or close a gate and allow electrons to flow through or make it very difficult for electrons to flow. And so you have two different current states, high and low, that you can call a 1 or a 0. You can combine them in clever ways to do logic operations with the 1s and 0s. You can encode information. You can develop a language of the 1s and 0s and manipulate them that way.
And again, I don't think that was the first thought of the people that developed the transistor. Look at all the things that it has brought out. There are probably more transistors in a standard hospital than there are stars in the Milky Way Galaxy, when you think about all the computers and all the electronic devices that we use just for medical applications. So it really has transformed life in a very profound way.
The real superheroes of science are a small handful of people who knew they were changing physics, but I don't think they recognized that they were also changing the future.
One of the ways you keep this book lively and accessible is to use anecdotes from early science fiction. How well have those predictions held up?
The main problem is that they believed that there was going to be a revolution in energy, which would lead to jet packs, death rays and flying cars. But what we got was a revolution in information. This information age, of course, came about because of semiconductors and solid-state physics, which were enabled by quantum mechanics.
A lot of these things go back to transistors and semiconductors. Is that in your view the biggest fundamental leap that quantum mechanics allowed us to make?
More than that, even. By discerning what were the fundamental rules that govern how atoms interact with each other and how they interact with light, you also have now a fundamental understanding of chemistry. There is a reason why the atoms are arranged the way they are in the periodic table of the elements, and it comes out naturally from the Schrödinger equation when you add in the Pauli exclusion principle. There is a really deep appreciation for why the world is the way it is.
Can you imagine living in a world before quantum mechanics?
We take all these things for granted. It's like the Louis C. K. YouTube clip—everything is amazing and nobody is happy.
"Quantum" is thrown around a lot as a label for things we don't understand, and we often lump a number of phenomena into the vague category of "quantum weirdness". Is that something that you'd like to see dissipate?
I would. It's used too much as a catchall. Proposing weird and counterintuitive ideas to explain observations, developing the consequences of these ideas and testing them further, and then, if they conform with reality, accepting them is not unique to quantum mechanics. It's what we call physics.
Also, because it has a reputation for weirdness, quantum mechanics is used too much as a justification for things that have nothing to do with quantum mechanics. There is an expression, "quantum woo," where people take a personal philosophy, such as the power of positive thinking or let a smile be your umbrella, and somehow affix quantum mechanics to it to try to make it sound scientific.
And make a lot of money doing so.
Yeah. It kind of seems to me to be at the same level as using mathematical knot theory or topology to justify crossing your fingers when you're making a wish. It has about as much relevance and justification.
|
bd3d1fb9d071d1e2 | general physics
Predicting a supernova precursor (on SN2010mc)
Dust Dendrites
After the dust settled down (literally…), we found something quite bizarre. The nylon walls developed very beautiful dust dendrites, akin to the more familiar frost dendrites (like these frost dendrites I have seen while living in Toronto).
Characteristics of an LC circuit using Equipartition
An LC circuit is one which has a capacitor and an inductor connected to each other. It exhibits oscillations just like a mass on a spring (a harmonic oscillator). In fact, the analogy is quite accurate with the capacitor playing the role of the spring and the inductor playing the role of the mass inertia.
Just like any harmonic oscillator, we can use equipartition to estimate the energy and frequency of the oscillations.
The average energy in the capacitor is:
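The post is truncated at this point; as a hedged sketch of the standard equipartition estimate it is presumably heading toward (our reconstruction, not the author's text), each quadratic degree of freedom carries ½kBT on average:

\[
\left\langle \frac{Q^2}{2C} \right\rangle = \frac{1}{2} k_B T \;\Rightarrow\; Q_{\rm rms} = \sqrt{k_B T\, C},
\qquad
\left\langle \frac{L I^2}{2} \right\rangle = \frac{1}{2} k_B T \;\Rightarrow\; I_{\rm rms} = \sqrt{k_B T / L},
\]

with the oscillation frequency ω = 1/√(LC) fixed by the circuit parameters.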
Estimating the Size of the Hydrogen Atom (i.e., Bohr's Radius) using Equipartition
The standard way to obtain the size of the hydrogen atom, also known as Bohr's radius, is to solve Schrödinger equation for the hydrogen atom. This is a somewhat detailed calculation requiring the usage of generalized Laguerre polynomials and spherical harmonics. We can however bypass it, if we are only interested in an estimate of the hydrogen atom.
One method which we don't follow here, is to estimate the size of the atom using dimensional analysis. Instead, we shall do so using the principle of equipartition.
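The teaser ends before the estimate itself; one plausible version of the argument (our sketch, not necessarily the one the post develops) balances the kinetic energy, with momentum set by the uncertainty relation p ≈ ℏ/r, against half the Coulomb energy:

\[
\frac{\hbar^2}{2 m_e r^2} \approx \frac{e^2}{8\pi\varepsilon_0 r}
\;\Rightarrow\;
r \approx \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} \approx 0.53\ \text{Å},
\]

which is the Bohr radius a₀.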
From Masada to the Messinian Salinity Crisis
Masada, the Dead Sea, the Messinian Salinity Crisis and Augustus Caesar, all in one post.
A visit to Stromboli
Last May I had a conference in the island of Vulcano. During the conference I had a half day excursion to the Island of Stromboli, where I climbed the mountain and got to see one of the most impressive geological phenomena one can see... Here are my impressions, photos and even a movie of it.
Bush in a quantum entangled state
On my personal views of President Bush's visit to my humble town of Jerusalem. A few thoughts about quantum mechanics and the speed of sound.
Corpuscular Rays in St. Peter's Basilica (the Vatican)
A few days ago, I stayed in the Vatican (more about this one-day symposium in another post). During the stay, I naturally visited St. Peter's Basilica. Near Bernini's Altar, I saw corpuscular rays. It may seem like some godly thing (quite appropriate for the location), but from a physicist's point of view, it is simply scattering by dust particles. Here is what one can say about this holy dust with the help of a little envelope.
A Nice Black Hole Merger Simulation
I recently stumbled upon a nice black hole merger simulation.
Since it is not my habit to just regurgitate stuff I see on the internet, here is my added value. How can one estimate the quadrupole gravitational radiation of a binary? How close does the binary have to be for it to coalesce within the age of the universe?
Parhelic Circles, Ice Haloes and Sun dogs over Jerusalem
A few weeks ago, a few students saw a nice phenomenon in the sky. Knowing I liked this kind of stuff (and that I may be able to explain it), they called me out of the office to look at the sky. Above us was a nice and almost complete parhelic circle. Unlike the usual 22° halo, often seen around the moon and occasionally around the sun, the parhelic circle keeps a fixed angle from the horizon, not from the bright object.
Standing on ice - When is it possible?
Ice covering Grenadier Lake, High Park Toronto. The ice was 8 cm thick, enough to stand on. The small patch in the ice is a dead fish, frozen into the ice. How cold should it be and for how long to have ice thick enough to stand on?
Estimating Stellar Parameters from Energy Equipartition
Many physical systems have a tendency to equilibrate the energy between different subcomponents. Sometimes it is exact, and sometimes not. For example, in an acoustic wave, the wave's energy is on average half kinetic (motion of the gas) and half internal (pressure). In the interstellar medium, there is roughly the same energy in the different components, such as internal energy, turbulent energy, the magnetic field and the energy of the cosmic rays. Stars are no different. In the sun, there is roughly the same binding energy (which is negative) as there is thermal energy. This can also be shown using the virial theorem. In white dwarfs, the thermal energy is unimportant; instead, the degeneracy energy of the electrons is comparable to the binding energy. We can use this tendency for equipartition to estimate different stellar parameters.
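As an illustration of the kind of estimate meant here (our own back-of-the-envelope sketch, not numbers from the post), equating the Sun's thermal energy content to the magnitude of its gravitational binding energy gives a characteristic interior temperature:

\[
N k_B T \sim \frac{G M_\odot^2}{R_\odot}
\;\Rightarrow\;
T \sim \frac{G M_\odot m_p}{k_B R_\odot} \approx 2\times 10^{7}\ \text{K},
\]

within a factor of a few of the Sun's actual central temperature of about 1.5×10⁷ K.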
Have you heard of the silent Earthquake?
Did you know that there are huge earthquakes which aren't felt? Did you know that similar waves appear everyday when two objects with dry friction between them start to move?
Exhale Condensation Calculator
If the temperature is low enough or the humidity high, you can observe condensation (i.e., "fog") forming in your exhaled breath. This calculator estimates whether your exhaled breath will condense, and if so, the range of mixing ratios for which the "fog" will form and the maximum condensed water content (the higher it is, the "thicker" the condensation).
If you're interested, there is a much more detailed explanation of the condensation process.
Exhaled Condensation Calculator
Using the above equations, we can calculate whether the exhaled air will condense. Enter the conditions of the outside air (and modify the exhaled air parameters if you wish), to see whether your breath will condense, or not.
|
1fdeaa838d706ada | Chandrasekhar limit
From New World Encyclopedia
The Chandrasekhar limit limits the mass of bodies made from electron-degenerate matter, a dense form of matter which consists of atomic nuclei immersed in a gas of electrons. The limit is the maximum nonrotating mass of an object that can be supported against gravitational collapse by electron degeneracy pressure. It is named after the astrophysicist Subrahmanyan Chandrasekhar, and is commonly given as being about 1.4 solar masses.
As white dwarfs are composed of electron-degenerate matter, no nonrotating white dwarf can be heavier than the Chandrasekhar limit.
As noted above, the Chandrasekhar limit is commonly given as being about 1.4 solar masses.[1][2]
Stars produce energy through nuclear fusion, producing heavier elements from lighter ones. The heat generated from these reactions prevents gravitational collapse of the star. Over time, the star builds up a central core which consists of elements that the temperature at the center of the star is not sufficient to fuse. For main-sequence stars with a mass below approximately 8 solar masses, the mass of this core will remain below the Chandrasekhar limit, and they will eventually lose mass (as planetary nebulae) until only the core, which becomes a white dwarf, remains. Stars with higher mass will develop a degenerate core whose mass will grow until it exceeds the limit. At this point the star will explode in a core-collapse supernova, leaving behind either a neutron star or a black hole.[3][4][5]
Computed values for the limit will vary depending on the approximations used, the nuclear composition of the mass, and the temperature.[6] Chandrasekhar[7], eq. (36),[8], eq. (58),[9], eq. (43) gives a value of
\frac{\omega_3^0 \sqrt{3\pi}}{2}\left ( \frac{\hbar c}{G}\right )^{3/2}\frac{1}{(\mu_e m_H)^2}.
Here, μe is the average molecular weight per electron, mH is the mass of the hydrogen atom, and ω₃⁰ ≈ 2.018236 is a constant connected with the solution to the Lane-Emden equation. Numerically, this value is approximately (2/μe)² • 2.85 • 10³⁰ kg, or 1.43 (2/μe)² M☉, where M☉ = 1.989 • 10³⁰ kg is the standard solar mass.[10] As \sqrt{\hbar c/G} is the Planck mass, MPl ≈ 2.176 • 10⁻⁸ kg, the limit is of the order of MPl³/mH².
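As a quick numerical check of the quoted value (an illustrative snippet of ours, not from the article), plugging standard constants into the formula above for μe = 2:

```python
import math

hbar = 1.054_571_817e-34   # J·s
c    = 2.997_924_58e8      # m/s
G    = 6.674_30e-11        # m^3 kg^-1 s^-2
m_H  = 1.673_53e-27        # kg, mass of the hydrogen atom
M_sun = 1.989e30           # kg, standard solar mass
omega3_0 = 2.018236        # Lane-Emden constant quoted in the text
mu_e = 2.0                 # average molecular weight per electron

M_ch = (omega3_0 * math.sqrt(3 * math.pi) / 2
        * (hbar * c / G) ** 1.5
        / (mu_e * m_H) ** 2)

print(f"{M_ch:.3e} kg  =  {M_ch / M_sun:.2f} solar masses")   # ≈ 2.85e30 kg ≈ 1.4 solar masses
```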
Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons will increase upon compression, so pressure must be exerted on the electron gas to compress it. This is the origin of electron degeneracy pressure.
Radius-mass relations for a model white dwarf. The green curve uses the general pressure law for an ideal Fermi gas, while the blue curve is for a non-relativistic ideal Fermi gas. The black line marks the ultra-relativistic limit.
In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form \(P = K_1\rho^{5/3}\). Solving the hydrostatic equation leads to a model white dwarf which is a polytrope of index 3/2, and which therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.[11]
As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest-mass energies. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form \(P = K_2\rho^{4/3}\). This yields a polytrope of index 3, which has a total mass, \(M_{\mathrm{limit}}\) say, depending only on \(K_2\).[12]
For a fully relativistic treatment, the equation of state used will interpolate between \(P = K_1\rho^{5/3}\) for small \(\rho\) and \(P = K_2\rho^{4/3}\) for large \(\rho\). When this is done, the model radius still decreases with mass, but becomes zero at \(M_{\mathrm{limit}}\). This is the Chandrasekhar limit.[8] The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. \(\mu_e\) has been set equal to 2. Radius is measured in standard solar radii[10] or kilometers, and mass in standard solar masses.
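The constant \(\omega_3^0\) quoted above comes from the Lane-Emden equation of index \(n = 3\), \(\frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) = -\theta^3\); numerically it equals \(\xi_1^2\,\lvert\theta'(\xi_1)\rvert\), where \(\xi_1\) is the first zero of \(\theta\). The following Python sketch (a simple fixed-step integration written for this note, not taken from the article) reproduces \(\xi_1 \approx 6.897\) and \(\omega_3^0 \approx 2.018\):

```python
import numpy as np

def lane_emden(n=3.0, h=1e-4):
    """Integrate theta'' + (2/xi) theta' + theta^n = 0 with theta(0)=1, theta'(0)=0
    (fixed-step RK4) until theta first crosses zero; return (xi_1, theta'(xi_1))."""
    xi = 1e-6
    y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])   # series start: [theta, theta']

    def deriv(x, y):
        theta, dtheta = y
        return np.array([dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / x])

    while y[0] > 0.0:
        k1 = deriv(xi, y)
        k2 = deriv(xi + h/2, y + h/2 * k1)
        k3 = deriv(xi + h/2, y + h/2 * k2)
        k4 = deriv(xi + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        xi += h
    return xi, y[1]

xi1, dtheta1 = lane_emden()
print(f"xi_1 ~ {xi1:.3f},  omega_3^0 = xi_1^2 |theta'(xi_1)| ~ {xi1**2 * abs(dtheta1):.4f}")
# -> xi_1 ~ 6.897, omega_3^0 ~ 2.018
```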
A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature.[6] Lieb and Yau[13] have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei which obeyed Fermi-Dirac statistics.[14] This Fermi gas model was then used by the British physicist E. C. Stoner in 1929 to calculate the relationship between the mass, radius, and density of white dwarfs, assuming them to be homogeneous spheres.[15] Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately \(1.37 \times 10^{30}\) kg.[16] In 1930, Stoner derived the internal energy-density equation of state for a Fermi gas, and was then able to treat the mass-radius relationship in a fully relativistic manner, giving a limiting mass of approximately \(2.19 \times 10^{30}\) kg (for \(\mu_e = 2.5\)).[17] Stoner went on to derive the pressure-density equation of state, which he published in 1932.[18] These equations of state were also previously published by the Russian physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter.[19] Frenkel's work, however, was ignored by the astronomical and astrophysical community.[20]
A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, during which the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state,[11] and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above.[12][7][21][8] Chandrasekhar reviews this work in his Nobel Prize lecture.[9] This value was also computed in 1932 by the Soviet physicist Lev Davidovich Landau,[22] who, however, did not apply it to white dwarfs.
Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Stanley Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied:
The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km. radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. … I think there should be a law of Nature to prevent a star from behaving in this absurd way![23]
Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law \(P = K_1\rho^{5/3}\) universally applicable, even for large \(\rho\).[24] Although Bohr, Fowler, Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar.[25] Through the rest of his life, Eddington held to his position in his writings,[26][27][28][29][30] including his work on his fundamental theory.[31] The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar.[25] In Miller's view:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten. ([25], p. 150)
The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various points in a star's life, the nuclei required for this process will be exhausted, and the core will collapse, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.[32]
If a main-sequence star is not too massive (less than approximately 8 solar masses), it will eventually shed enough mass to form a white dwarf having mass below the Chandrasekhar limit, which will consist of the former core of the star. For more massive stars, electron degeneracy pressure will not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities will destroy the star completely.)[3][4][5][33] During the collapse, neutrons are formed by the capture of electrons by protons, leading to the emission of neutrinos ([32], pp. 1046–1047). The decrease in gravitational potential energy of the collapsing core releases a large amount of energy, on the order of \(10^{46}\) joules (100 foe). Most of this energy is carried away by the emitted neutrinos.[34] This process is believed to be responsible for supernovae of types Ib, Ic, and II.[32]
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon-oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. It is believed that, as the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This results in an increasing rate of fusion reactions, eventually igniting a thermonuclear flame which causes the supernova.[35], §5.1.2
Strong indications of the reliability of Chandrasekhar's formula are:
1. Only one white dwarf with a mass greater than Chandrasekhar's limit has ever been observed. (See below.)
2. The absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, \(M_V\) is approximately −19.3, with a standard deviation of no more than 0.3 ([35], eq. (1)). A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy.
A type Ia supernova apparently from a supra-limit white dwarf
In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf which grew to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova" by David R. Branch, may have been spinning so fast that centrifugal force allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles.[36][37][38]
Notes
1. Bethe, Hans A., and Gerald Brown. "How A Supernova Explodes" pages 51–62, in Bethe, Hans Albrecht, Gerald Edward Brown, and Chang-Hwan Lee. 2003. Formation And Evolution of Black Holes in the Galaxy: Selected Papers with Commentary (River Edge, NJ: World Scientific. ISBN 981238250X), 55.
2. Mazzali, P.A., F.K. Röpke, S. Benetti, and W. Hillebrandt. 2007. A Common Explosion Mechanism for Type Ia Supernovae. Science. 315(5813): 825–828.
3. 3.0 3.1 Koester, D., and D. Reimers. 1996. White dwarfs in open clusters. VIII. NGC 2516: a test for the mass-radius and initial-final mass relations. Astronomy and Astrophysics. 313:810–814. Retrieved February 9, 2009.
4. 4.0 4.1 Williams, Kurtis A., M. Bolte, and Detlev Koester. 2004. An Empirical Initial-Final Mass Relation from Hot, Massive White Dwarfs in NGC 2168 (M35). Astrophysical Journal. 615(1):L49–L52. Retrieved February 9, 2009.
5. 5.0 5.1 Heger, A., C.L. Fryer, S.E. Woosley, N. Langer, and D.H. Hartmann. 2003. How Massive Single Stars End Their Life. Astrophysical Journal. 591(1):288–300. Retrieved February 9, 2009.
6. 6.0 6.1 Timmes, F.X., S.E. Woosley, and Thomas A. Weaver. 1996. The Neutron Star and Black Hole Initial Mass Function. Astrophysical Journal. 457:834–843. Retrieved February 9, 2009.
7. 7.0 7.1 Chandrasekhar, S. 1931. The Highly Collapsed Configurations of a Stellar Mass. Monthly Notices of the Royal Astronomical Society. 91:456–466. Retrieved February 9, 2009.
8. 8.0 8.1 8.2 Chandrasekhar, S. 1935. The Highly Collapsed Configurations of a Stellar Mass (second paper). Monthly Notices of the Royal Astronomical Society. 95:207–225. Retrieved February 9, 2009.
9. 9.0 9.1 Chandrasekhar, Subrahmanyan. 1983. On Stars, Their Evolution and Their Stability. Nobel Prize lecture. Retrieved February 9, 2009.
10. 10.0 10.1 Standards for Astronomical Catalogues, Version 2.0, section 3.2.2. Retrieved February 9, 2009.
11. 11.0 11.1 Chandrasekhar, S. 1931. The Density of White Dwarf Stars. Philosophical Magazine, 7th series. 11:592–596.
12. 12.0 12.1 Chandrasekhar, S. 1931. The Maximum Mass of Ideal White Dwarfs. Astrophysical Journal. 74:81–82. Retrieved February 9, 2009.
13. Lieb, Elliott H. and Horng-Tzer Yau. 1987. A rigorous examination of the Chandrasekhar theory of stellar collapse. Astrophysical Journal. 323:140–144. Retrieved February 9, 2009.
14. Fowler, R.H. 1926. On Dense Matter. Monthly Notices of the Royal Astronomical Society. 87:114–122. Retrieved February 9, 2009.
15. Stoner, Edmund C. 1929. The Limiting Density of White Dwarf Stars. Philosophical Magazine, 7th series. 7:63–70.
16. Anderson, Wilhelm. 1929. Über die Grenzdichte der Materie und der Energie. Zeitschrift für Physik. 56(11–12):851–856. Retrieved February 9, 2009.
17. Stoner, Edmund C. 1930. The Equilibrium of Dense Stars. Philosophical Magazine, 7th series. 9:944–963.
18. Stoner, Edmund C. 1932. The minimum pressure of a degenerate electron gas. Monthly Notices of the Royal Astronomical Society. 92:651–661. Retrieved February 9, 2009.
19. Frenkel, J. 1928. Anwendung der Pauli-Fermischen Elektronengastheorie auf das Problem der Kohäsionskräfte. Zeitschrift für Physik. 50(3–4):234–248. Retrieved February 9, 2009.
20. Yakovlev, D.G. 1994. The article by Ya I Frenkel' on `binding forces' and the theory of white dwarfs. Physics Uspekhi. 37(6):609–612. Retrieved February 9, 2009.
21. Chandrasekhar, S. 1934. Stellar Configurations with degenerate Cores. The Observatory. 57:373–377. Retrieved February 9, 2009.
22. Landau, L.D. 1932. "On the Theory of Stars," in D. ter Haar ed. 1965. Collected Papers of L.D. Landau. New York, NY: Gordon and Breach; originally published in Phys. Z. Sowjet. 1:285.
23. Meeting of the Royal Astronomical Society, Friday, 1935 January 11. The Observatory. 58:33–41. Retrieved February 9, 2009.
24. Eddington, A.S. 1935. On "Relativistic Degeneracy". Monthly Notices of the Royal Astronomical Society. 95:194–206. Retrieved February 9, 2009.
25. 25.0 25.1 25.2 Miller, Arthur I. 2005. Empire of the Stars: Obsession, Friendship, and Betrayal in the Quest for Black Holes. Boston, MA; New York, NY: Houghton Mifflin. ISBN 061834151X.
26. The International Astronomical Union meeting in Paris, 1935. The Observatory. 58:257–265. page 259. Retrieved February 9, 2009.
27. Eddington, A.S. 1935. Note on "Relativistic Degeneracy". Monthly Notices of the Royal Astronomical Society. 96:20–21. Retrieved February 9, 2009.
28. Eddington, Arthur. 1935. The Pressure of a Degenerate Electron Gas and Related Problems. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 152:253–272.
29. Eddington, Arthur. 1936. Relativity Theory of Protons and Electrons. Cambridge, UK: Cambridge University Press. chapter 13.
30. Eddington, A.S. 1940. The physics of white dwarf matter. Monthly Notices of the Royal Astronomical Society. 100:582–594. Retrieved February 9, 2009.
31. Eddington, A.S. 1946. Fundamental Theory. Cambridge, UK: Cambridge University Press. pages 167;43–45.
32. 32.0 32.1 32.2 Woosley, S.E., A. Heger, and T.A. Weaver. 2002. The evolution and explosion of massive stars. Reviews of Modern Physics. 74(4):1015–1071. Retrieved February 9, 2009.
33. Schaffner-Bielich, Jürgen. 2005. Strange quark matter in stars: a general overview. Journal of Physics G: Nuclear and Particle Physics. 31(6):S651–S657. Retrieved February 9, 2009.
34. Lattimer, J.M. and M. Prakash. 2004. The Physics of Neutron Stars. Science. 304(5670):536–542. Retrieved February 9, 2009.
35. 35.0 35.1 Hillebrandt, Wolfgang, and Jens C. Niemeyer. 2000. Type IA Supernova Explosion Models. Annual Review of Astronomy and Astrophysics. 38:191–230. Retrieved February 9, 2009.
36. The weirdest Type Ia supernova yet. LBL press release. Retrieved February 9, 2009.
37. Champagne Supernova Challenges Ideas about How Supernovae Work. Retrieved February 9, 2009.
38. Howell, D. Andrew et al. 2006. The type Ia supernova SNLS-03D3bb from a super-Chandrasekhar-mass white dwarf star. Nature. 443:308–311. Retrieved February 9, 2009.
• Bethe, Hans Albrecht, Gerald Edward Brown, and Chang-Hwan Lee. 2003. Formation And Evolution of Black Holes in the Galaxy: Selected Papers with Commentary. River Edge, NJ: World Scientific. ISBN 981238250X.
• Miller, Arthur I. 2005. Empire of the Stars: Obsession, Friendship, and Betrayal in the Quest for Black Holes. Boston, MA: Houghton Mifflin. ISBN 061834151X.
• Wali, Kameshwar C. 1992. Chandra: A Biography of S. Chandrasekhar. Chicago, IL: University of Chicago Press. ISBN 0226870553.
|
c96b2267c2136afa | lördag 21 januari 2017
The Origin of Fake Physics
Peter Woit gives on Not Even Wrong a list of fake physics, most of which can be traced back to the fake-physics character of Schrödinger's linear multi-dimensional equation, as exposed in recent posts.
Woit's list of fake physics thus includes different fantasies of multiversa, all originating from the multi-dimensional form of Schrödinger's equation, which gives each electron its own separate 3d space/universe to dwell in.
But the linear multi-d Schrödinger equation is a postulate of modern physics picked out of the blue as a ready-made, and as such it is like a religious dogma beyond human understanding and rationality.
Why modern physics has been driven into such an unscientific approach remains to be understood and exposed, and discussed...
The standard view is presented by David Gross as follows:
• Quantum mechanics emerged in 1900, when Planck first quantized the energy of radiating oscillators.
• Quantum mechanics is the most successful of all the frameworks that we have discovered to describe physical reality. It works, it makes sense, and it is hard to modify.
• Quantum mechanics does make sense, although the transition, a hundred years ago, from classical to quantum reality was not easy.
• The freedom one has to choose among different, incompatible, frameworks does not influence reality—one gets the same answers for the same questions, no matter which framework one uses.
• That is why one can simply “shut up and calculate.” Most of us do that most of the time.
• By now...we have a completely coherent and consistent formulation of quantum mechanics that corresponds to what we actually do in predicting and describing experiments and observations in the real world.
• For most of us there are no problems.
• Nonetheless, there are dissenting views.
So, the message is that quantum mechanics works if you simply shut up and calculate and don't ask if it makes sense, as physicists are being taught to do, but here are dissenting views...
Note that the standard idea ventilated by Gross is that quantum mechanics somehow emerged from Planck's desperate trick of "quantisation" of blackbody radiation in 1900, when he took on the mission of explaining the physics of radiation while avoiding the "ultra-violet catastrophe" believed to torpedo classical wave mechanics. Planck never believed that his trick had a physical meaning, and in fact the trick is not needed, because an explanation can be given within classical wave mechanics in the form of computational blackbody radiation, with the ultraviolet catastrophe not showing up.
This is what Anthony Leggett, Nobel Laureate and speaker at the 90 Years of Quantum Mechanics Conference, Jan 23-26, 2017, says (in 1987):
• If one wishes to provoke a group of normally phlegmatic physicists into a state of high animation—indeed, in some cases strong emotion—there are few tactics better guaranteed to succeed than to introduce into the conversation the topic of the foundations of quantum mechanics, and more specifically the quantum measurement problem.
• I do not myself feel that any of the so-called solutions of the quantum measurement paradox currently on offer is in any way satisfactory.
• I am personally convinced that the problem of making a consistent and philosophically acceptable 'join' between the quantum formalism which has been so spectacularly successful at the atomic and subatomic level and the 'realistic' classical concepts we employ in everyday life can have no solution within our current conceptual framework;
• We are still, after three hundred years, only at the beginning of a long journey along a path whose twists and turns promise to reveal vistas which at present are beyond our wildest imagination.
• Personally, I see this as not a pessimistic, but a highly optimistic, conclusion. In intellectual endeavour, if nowhere else, it is surely better to travel hopefully than to arrive, and I would like to think that the generation of students now embarking on a career in physics, and their children and their children's children, will grapple with questions at least as intriguing and fundamental as those which fascinate us today—questions which, in all probability, their twentieth-century predecessors did not even have the language to pose.
The need for a revision of the very foundations of quantum mechanics, now 30 years later and 90 years after its conception, is even clearer. The starting point must be the wave mechanics of Schrödinger without particles, probabilities, multiversa, measurement paradox, particle-wave duality, complementarity and quantum jumps, with the microscopic world of atoms described by the same continuum mathematics as the macroscopic world.
PS Is quantum computing fake physics or possible physics? Nobody knows since no quantum computer has yet been constructed. But the hype/hope is inflated: perhaps by the end of the year...
|
9ff5ff3ac2637ee3 | Collapse Theories
First published Thu Mar 7, 2002; substantive revision Tue Feb 16, 2016
Quantum mechanics, with its revolutionary implications, has posed innumerable problems to philosophers of science. In particular, it has suggested reconsidering basic concepts such as the existence of a world that is, at least to some extent, independent of the observer, the possibility of getting reliable and objective knowledge about it, and the possibility of taking (under appropriate circumstances) certain properties to be objectively possessed by physical systems. It has also raised many other questions which are well known to those involved in the debate on the interpretation of this pillar of modern science. One can argue that most of the problems are not only due to the intrinsic revolutionary nature of the phenomena which have led to the development of the theory. They are also related to the fact that, in its standard formulation and interpretation, quantum mechanics is a theory which is excellent (in fact it has met with a success unprecedented in the history of science) in telling us everything about what we observe, but it meets with serious difficulties in telling us what is. We are making here specific reference to the central problem of the theory, usually referred to as the measurement problem, or, with a more appropriate term, as the macro-objectification problem. It is just one of the many attempts to overcome the difficulties posed by this problem that has led to the development of Collapse Theories, i.e., to the Dynamical Reduction Program (DRP). As we shall see, this approach consists in accepting that the dynamical equation of the standard theory should be modified by the addition of stochastic and nonlinear terms. The nice fact is that the resulting theory is capable, on the basis of a single dynamics which is assumed to govern all natural processes, of accounting at the same time for all well-established facts about microscopic systems as described by the standard theory as well as for the so-called postulate of wave packet reduction (WPR). As is well known, such a postulate is assumed in the standard scheme just in order to guarantee that measurements have outcomes, but, as we shall discuss below, it meets with insurmountable difficulties if one takes the measurement itself to be a process governed by the linear laws of the theory. Finally, the collapse theories account in a completely satisfactory way for the classical behavior of macroscopic systems.
Two specifications are necessary in order to make clear from the beginning what the limitations and the merits of the program are. The only satisfactory explicit models of this type (which are essentially variations and refinements of the one proposed in Ghirardi, Rimini, and Weber (1986), and usually referred to as the GRW theory) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. Moreover, the problem of building satisfactory relativistic generalizations of these models, which seemed extremely difficult up to a few years ago, has seen some significant improvements. More important, such improvements have elucidated some crucial points and have made clear that there is no reason of principle preventing this goal from being reached.
In spite of their phenomenological character, we think that Collapse Theories have a remarkable relevance, since they have made clear that there are new ways to overcome the difficulties of the formalism, to close the circle in the precise sense defined by Abner Shimony (1989), which until a few years ago were considered impracticable, and which, on the contrary, have been shown to be perfectly viable. Moreover, they have allowed a clear identification of the formal features which should characterize any unified theory of micro and macro processes. Last but not least, Collapse theories qualify themselves as rival theories of quantum mechanics and one can easily identify some of their physical implications which, in principle, would allow crucial tests discriminating between the two. To get really stringent indications from such tests requires experiments involving technological techniques which have been developed only very recently. Actually, it is just due to remarkable improvements in dealing with mesoscopic systems and to important practical steps forward, that some specific bounds have already been obtained for the parameters characterizing the theories under investigation, and, more important, precise families of physical processes in which a violation of the linear nature of the standard formalism might emerge have been clearly identified and are the subject of systematic investigations which might lead, in the end, to relevant discoveries.
1. General Considerations
As stated already, a very natural question which all scientists who are concerned about the meaning and the value of science have to face, is whether one can develop a coherent worldview that can accommodate our knowledge concerning natural phenomena as it is embodied in our best theories. Such a program meets serious difficulties with quantum mechanics, essentially because of two formal aspects of the theory which are common to all of its versions, from the original nonrelativistic formulations of the 1920s, to the quantum field theories of recent years: the linear nature of the state space and of the evolution equation, i.e., the validity of the superposition principle and the related phenomenon of entanglement, which, in Schrödinger’s words:
is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought (Schrödinger, 1935, p. 807).
These two formal features have embarrassing consequences, since they imply
• objective chance in natural processes, i.e., the nonepistemic nature of quantum probabilities;
• objective indefiniteness of physical properties both at the micro and macro level;
• objective entanglement between spatially separated and non-interacting constituents of a composite system, entailing a sort of holism and a precise kind of nonlocality.
For the sake of generality, we shall first of all present a very concise sketch of ‘the rules of the quantum game’.
2. The Formalism: A Concise Sketch
Let us recall the axiomatic structure of quantum theory:
1. States of physical systems are associated with normalized vectors in a Hilbert space, a complex, infinite-dimensional, complete and separable linear vector space equipped with a scalar product. Linearity implies that the superposition principle holds: if \(\ket{f}\) is a state and \(\ket{g}\) is a state, then (for \(a\) and \(b\) arbitrary complex numbers) also
\[ \ket{K} = a\ket{f} + b\ket{g} \]
is a state. Moreover, the state evolution is linear, i.e., it preserves superpositions: if \(\ket{f,t}\) and \(\ket{g,t}\) are the states obtained by evolving the states \(\ket{f,0}\) and \(\ket{g,0}\), respectively, from the initial time \(t=0\) to the time \(t\), then \(a\ket{f,t} + b\ket{g,t}\) is the state obtained by the evolution of \(a\ket{f,0} + b\ket{g,0}\). Finally, the completeness assumption is made, i.e., that the knowledge of its statevector represents, in principle, the most accurate information one can have about the state of an individual physical system.
2. The observable quantities are represented by self-adjoint operators \(B\) on the Hilbert space. The associated eigenvalue equations \(B\ket{b_k} = b_k \ket{b_k}\) and the corresponding eigenmanifolds (the linear manifolds spanned by the eigenvectors associated to a given eigenvalue, also called eigenspaces) play a basic role for the predictive content of the theory. In fact:
1. The eigenvalues \(b_k\) of an operator \(B\) represent the only possible outcomes in a measurement of the corresponding observable.
2. The square of the norm (i.e., the length) of the projection of the normalized vector (i.e., of length 1) describing the state of the system onto the eigenmanifold associated to a given eigenvalue gives the probability of obtaining the corresponding eigenvalue as the outcome of the measurement. In particular, it is useful to recall that when one is interested in the probability of finding a particle at a given place, one has to resort to the so-called configuration space representation of the statevector. In such a case the statevector becomes a square-integrable function of the position variables of the particles of the system, whose modulus squared yields the probability density for the outcomes of position measurements.
We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general incompatible) measurement processes. Only if a state belongs already before the act of measurement to an eigenmanifold of the observable which is going to be measured, can one predict the outcome with certainty. In all other cases—if the completeness assumption is made—one has objective nonepistemic probabilities for different outcomes.
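As a small numerical illustration of rule 2 above (a generic NumPy sketch written for this summary, not part of the entry), one can compute the outcome probabilities for an observable by projecting the normalized statevector onto each eigenmanifold and taking squared norms:

```python
import numpy as np

def outcome_probabilities(B, psi):
    """Probabilities of the eigenvalues of the self-adjoint operator B in state psi."""
    psi = psi / np.linalg.norm(psi)            # normalized statevector
    evals, evecs = np.linalg.eigh(B)           # spectral decomposition of B
    probs = {}
    for lam, v in zip(evals, evecs.T):
        lam = round(float(lam), 9)             # group (numerically) degenerate eigenvalues
        amp = np.vdot(v, psi)                  # component of psi along this eigenvector
        probs[lam] = probs.get(lam, 0.0) + abs(amp)**2   # squared norm of the projection
    return probs

# Example: a spin-1/2 particle in the superposition (|up> + |down>)/sqrt(2),
# measured along z (eigenvalues +1 and -1 in units of hbar/2).
S_z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex)
print(outcome_probabilities(S_z, psi))         # {-1.0: 0.5, 1.0: 0.5}
```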
The orthodox position gives a very simple answer to the question: what determines the outcome when different outcomes are possible? Nothing—the theory is complete and, as a consequence, it is illegitimate to raise any question about possessed properties referring to observables for which different outcomes have non-vanishing probabilities of being obtained. Correspondingly, the referent of the theory are the results of measurement procedures. These are to be described in classical terms and involve in general mutually exclusive physical conditions.
As regards the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against requiring too many properties to be actually possessed by physical systems. However—with Einstein—one can adopt as a sufficient condition for the existence of an objective individual property that one be able (without in any way disturbing the system) to predict with certainty the outcome of a measurement. This implies that, whenever the overall statevector factorizes into the product of a state of the Hilbert space of the physical system \(S\) and of the rest of the world, \(S\) does possess some properties (actually a complete set of properties, i.e., those associated to appropriate maximal sets of commuting observables).
Before concluding this section we must add some comments about the measurement process. Quantum theory was created to deal with microscopic phenomena. In order to obtain information about them one must be able to establish strict correlations between the states of the microscopic systems and the states of objects we can perceive. Within the formalism, this is described by considering appropriate micro-macro interactions. The fact that when the measurement is completed one can make statements about the outcome is accounted for by the already mentioned WPR postulate (Dirac 1948): a measurement always causes a system to jump into an eigenstate of the observed quantity. Correspondingly, the statevector of the apparatus also ‘jumps’ into the manifold associated to the recorded outcome.
3. The Macro-Objectification Problem
In this section we shall clarify why the formalism we have just presented gives rise to the measurement or macro-objectification problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme.
Let us begin by recalling the basic points of the standard argument:
Suppose that a microsystem \(S\), just before the measurement of an observable \(B\), is in the eigenstate \(\ket{b_j}\) of the corresponding operator. The apparatus (a macrosystem) used to gain information about \(B\) is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property—e.g., its pointer points at 0 on a scale. Since the apparatus \(A\) is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector \(\ket{A_0}\). One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state \(\ket{b_j}\) it ends up in a final configuration \(\ket{A_j}\), which is macroscopically distinguishable from the initial one and from the other configurations \(\ket{A_k}\) in which it would end up if triggered by a different eigenstate \(\ket{b_k}\). Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as:
\[\begin{align} \tag{1} \textit{(initial state)}{:}\ & \ket{b_k} \ket{A_0} \\ \textit{(final state)}{:}\ & \ket{b_k} \ket{A_k} \end{align}\]
Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:
\[\begin{align} \tag{2} \textit{(initial state)}{:}\ & (a\ket{b_k} + b\ket{b_j})\ket{A_0 } \\ \textit{(final state)}{:}\ & (a\ket{b_k} \ket{A_k} + b\ket{b_j} \ket{A_j}). \end{align}\]
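Before turning to the remarks below, here is a minimal numerical illustration of how linearity forces the transition from (1) to (2) (a toy NumPy construction written for this summary, not part of the entry): a two-level system, a three-state 'apparatus' with one ready state and two pointer states, and a unitary that implements (1) on product states.

```python
import numpy as np

dS, dA = 2, 3                       # system |b0>,|b1>; apparatus |A0> (ready), |A1>, |A2>
def ket(i, dim):
    v = np.zeros(dim, dtype=complex); v[i] = 1.0; return v
def prod(s, a):                     # |b_s>|A_a> as a vector in the 6-dimensional product space
    return np.kron(ket(s, dS), ket(a, dA))
idx = lambda s, a: s * dA + a       # basis index of |b_s>|A_a>

# Unitary implementing (1): |b0>|A0> -> |b0>|A1>,  |b1>|A0> -> |b1>|A2>.
# Built as a permutation of basis vectors, hence automatically unitary.
U = np.eye(dS * dA, dtype=complex)
for s, a_final in [(0, 1), (1, 2)]:
    i, j = idx(s, 0), idx(s, a_final)
    U[[i, j], :] = U[[j, i], :]

a, b = 0.6, 0.8                     # amplitudes of the superposed initial system state
initial = np.kron(a * ket(0, dS) + b * ket(1, dS), ket(0, dA))
final = U @ initial                 # linearity carries the superposition through
print(np.allclose(final, a * prod(0, 1) + b * prod(1, 2)))   # True: the entangled state of (2)
```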
Some remarks about this scheme are in order:
• The scheme is highly idealized, both because it takes for granted that one can prepare the apparatus in a precise state, which is impossible since we cannot have control over all its degrees of freedom, and because it assumes that the apparatus registers the outcome without altering the state of the measured system. However, as we shall discuss below, these assumptions are by no means essential to derive the embarrassing conclusion we have to face, i.e., that the final state is a linear superposition of two states corresponding to two macroscopically different states of the apparatus. Since we know that the + representing linear superpositions cannot be replaced by the logical alternative either … or, the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously?
• As already mentioned, the standard solution to this problem is given by the WPR postulate: in a measurement process reduction occurs: the final state is not the one appearing in the second line of equation (2) but, since macro-objectification takes place, it is
\[ \begin{align} \tag{3} \text{either } &\ket{b_k} \ket{A_k} \text{ with probability } \lvert a\rvert^2 \\ \text{or } &\ket{b_j} \ket{A_j} \text{ with probability } \lvert b\rvert^2. \end{align}\]
Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons:
1. It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory).
2. Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use J.S. Bell’s words, there is nothing in the theory fixing such a borderline and the split between the two above types of processes is fundamentally shifty. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or easy solvers (Bell 1990) of the measurement problem have rejected the criticism of the heretics (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein’s criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?), and so on. Also other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities.
It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. On the contrary, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann’s model. This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d’Espagnat (1971), Shimony (1974), Brown (1986) and Busch and Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. The assumptions of the theorem are:
1. that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states;
2. that one has a sufficiently reliable way of ‘measuring’ such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process;
3. that all natural processes obey the linear laws of the theory.
From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage.
4. The Birth of Collapse Theories
The debate on the macro-objectification problem continued for many years after the early days of quantum mechanics. In the early 1950s an important step was taken by D. Bohm who presented (Bohm 1952) a mathematically precise deterministic completion of quantum mechanics (see the entry on Bohmian Mechanics). In the area of Collapse Theories, one should mention the contribution by Bohm and Bub (1966), which was based on the interaction of the statevector with Wiener-Siegel hidden variables. But let us come to Collapse Theories in the sense currently attached to this expression.
Various investigations during the 1970s can be considered as preliminary steps for the subsequent developments. In those years we were seriously concerned with quantum decay processes and in particular with the possibility of deriving, within a quantum context, the exponential decay law. For an exhaustive review of our approach see (Fonda, Ghirardi, and Rimini 1978). Some features of this approach are extremely relevant for the DRP. Let us list them:
• One deals with individual physical systems;
• The statevector is supposed to undergo random processes at random times, inducing sudden changes driving it either within the linear manifold of the unstable state or within the one of the decay products;
• To make the treatment quite general (the apparatus does not know which kind of unstable system it is testing) one is led to identify the random processes with localization processes of the relative coordinates of the decay fragments. Such an assumption, combined with the peculiar resonant dynamics characterizing an unstable system, yields, completely in general, the desired result. The ‘relative position basis’ is the preferred basis of this theory;
• Analogous ideas have been applied to measurement processes;
• The final equation for the evolution at the ensemble level is of the quantum dynamical semigroup type and has a structure extremely similar to the final one of the GRW theory.
Obviously, in these papers the reduction processes which are involved were not assumed to be ‘spontaneous and fundamental’ natural processes, but due to system-environment interactions. Accordingly, these attempts did not represent original proposals for solving the macro-objectification problem but they have paved the way for the elaboration of the GRW theory.
Almost in the same years, P. Pearle (1976, 1979), and subsequently N. Gisin (1984) and others, had entertained the idea of accounting for the reduction process in terms of a stochastic differential equation. These authors were really looking for a new dynamical equation and for a solution to the macro-objectification problem. Unfortunately, they were unable to give any precise suggestion about how to identify the states to which the dynamical equation should lead. Indeed, these states were assumed to depend on the particular measurement process one was considering. Without a clear indication on this point there was no way to identify a mechanism whose effect could be negligible for microsystems but extremely relevant for all the macroscopic ones. N. Gisin gave subsequently an interesting (though not uncontroversial) argument (Gisin 1989) that nonlinear modifications of the standard equation without stochasticity are unacceptable since they imply the possibility of sending superluminal signals. Soon afterwards, G. C. Ghirardi and R. Grassi proved that stochastic modifications without nonlinearity can at most induce ensemble and not individual reductions, i.e., they do not guarantee that the state vector of each individual physical system is driven in a manifold corresponding to definite properties.
5. The Original Collapse Model
As already mentioned, the Collapse Theory we are going to describe amounts to accepting a modification of the standard evolution law of the theory such that microprocesses and macroprocesses are governed by a single dynamics. Such a dynamics must imply that the micro-macro interaction in a measurement process leads to WPR. Bearing this in mind, recall that the characteristic feature distinguishing quantum evolution from WPR is that, while Schrödinger’s equation is linear and deterministic (at the wave function level), WPR is nonlinear and stochastic. It is then natural to consider, as was suggested for the first time in the above quoted papers by P. Pearle, the possibility of nonlinear and stochastic modifications of the standard Schrödinger dynamics. However, the initial attempts to implement this idea were unsatisfactory for various reasons. The first, which we have already discussed, concerns the choice of the preferred basis: if one wants to have a universal mechanism leading to reductions, to which linear manifolds should the reduction mechanism drive the statevector? Or, equivalently, which of the (generally) incompatible ‘potentialities’ of the standard theory should we choose to make actual? The second, referred to as the trigger problem by Pearle (1989), is the problem of how the reduction mechanism can become more and more effective in going from the micro to the macro domain. The solution to this problem constitutes the central feature of the Collapse Theories of the GRW type. To discuss these points, let us briefly review the first consistent Collapse model to appear in the literature.
Within such a model, originally referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by noting that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein has stressed, this is a crucial point which has to be faced by anybody aiming to take a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’ (Born, 1971, p. 223). Accordingly, QMSL considers the possibility of spontaneous processes, which are assumed to occur instantaneously and at the microscopic level, which tend to suppress the linear superpositions of differently localized states. The required trigger mechanism must then follow consistently.
The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about the above assumptions; in particular one has to make explicit HOW the process works, i.e., which modifications of the wave function are induced by the localizations, WHERE it occurs, i.e., what determines the occurrence of a localization at a certain position rather than at another one, and finally WHEN, i.e., at what times, it occurs. The answers to these questions are as follows.
Let us consider a system of \(N\) distinguishable particles and let us denote by \(F(\boldsymbol{q}_1, \boldsymbol{q}_2 , \ldots ,\boldsymbol{q}_N )\) the coordinate representation (wave function) of the state vector (we disregard spin variables since hittings are assumed not to act on them).
1. The answer to the question HOW is then: if a hitting occurs for the \(i\)-th particle at point \(\boldsymbol{x}\), the wave function is instantaneously multiplied by a Gaussian function (appropriately normalized) \[ G(\boldsymbol{q}_i, \boldsymbol{x}) = K \exp[-\{1/(2d^2)\}(\boldsymbol{q}_i -\boldsymbol{x})^2], \]
where \(d\) represents the localization accuracy. Let us denote as
\[ L_i (\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N ; \boldsymbol{x}) = F(\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N) G(\boldsymbol{q}_i, \boldsymbol{x}) \]
the wave function immediately after the localization, as yet unnormalized.
2. As concerns the specification of WHERE the localization occurs, it is assumed that the probability density \(P(\boldsymbol{x})\) of its taking place at the point \(\boldsymbol{x}\) is given by the square of the norm of the state \(L_i\) (the length, or to be more precise, the integral of the modulus squared of the function \(L_i\) over the \(3N\)-dimensional space). This implies that hittings occur with higher probability at those places where, in the standard quantum description, there is a higher probability of finding the particle. Note that the above prescription introduces nonlinear and stochastic elements in the dynamics. The constant \(K\) appearing in the expression of \(G(\boldsymbol{q}_i, \boldsymbol{x})\) is chosen in such a way that the integral of \(P(\boldsymbol{x})\) over the whole space equals 1.
3. Finally, the question WHEN is answered by assuming that the hittings occur at randomly distributed times, according to a Poisson distribution, with mean frequency \(f\).
It is straightforward to convince oneself that the hitting process leads, when it occurs, to the suppression of the linear superpositions of states in which the same particle is well localized at different positions separated by a distance greater than \(d\). As a simple example we can consider a single particle whose wavefunction is different from zero only in two small and far-apart regions \(h\) and \(t\). Suppose that a localization occurs around \(h\); the state after the hitting is then appreciably different from zero only in a region around \(h\) itself. A completely analogous argument holds for the case in which the hitting takes place around \(t\). As concerns points which are far from both \(h\) and \(t\), one easily sees that the probability density for such hittings, according to the multiplication rule determining \(L_i\), turns out to be practically zero, and moreover, that if such a hitting were to occur, after the wave function is normalized, the wave function of the system would remain almost unchanged.
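The single-particle example just described can be simulated directly. The following one-dimensional Python sketch (toy units and a toy wavefunction chosen for illustration, not a quantitative GRW calculation) draws a hitting center \(x\) from the density \(P(x)\), multiplies the wavefunction by the Gaussian, renormalizes, and shows that essentially all the weight ends up in one of the two regions:

```python
import numpy as np

rng = np.random.default_rng(0)
q = np.linspace(-10, 10, 2001)
dq = q[1] - q[0]
d = 1.0                                             # localization accuracy (toy units)

# Equal superposition of two narrow packets around q = -5 ("t") and q = +5 ("h").
psi = np.exp(-(q + 5)**2 / 0.5) + np.exp(-(q - 5)**2 / 0.5)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)

# P(x) proportional to the squared norm of L(q; x) = psi(q) * exp(-(q - x)^2 / (2 d^2)).
G2 = np.exp(-(q[None, :] - q[:, None])**2 / d**2)   # |Gaussian|^2, rows indexed by the center x
prob_q = np.abs(psi)**2
P = (prob_q[None, :] * G2).sum(axis=1) * dq
P /= P.sum()

x_hit = rng.choice(q, p=P)                          # WHERE: sample the hitting center
psi_after = psi * np.exp(-(q - x_hit)**2 / (2 * d**2))      # HOW: multiply by the Gaussian
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dq)     # renormalize after the hitting

weight_left = np.sum(np.abs(psi_after[q < 0])**2) * dq
print(f"hitting at x = {x_hit:.2f}; weight left of the origin afterwards: {weight_left:.3f}")
# The hit lands near one of the two packets, and essentially all the weight ends up there.
```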
We can now discuss the most important feature of the theory, i.e., the Trigger Mechanism. To understand the way in which the spontaneous localization mechanism is enhanced by increasing the number of particles which are in far apart spatial regions (as compared to \(d)\), one can consider, for simplicity, the superposition \(\ket{S}\), with equal weights, of two macroscopic pointer states \(\ket{H}\) and \(\ket{T}\), corresponding to two different pointer positions \(H\) and \(T\), respectively. Taking into account that the pointer is ‘almost rigid’ and contains a macroscopic number \(N\) of microscopic constituents, the state can be written, in obvious notation, as:
\[\tag{4} \ket{S} = [\ket{1 \near h_1} \ldots \ket{N \near h_N} + \ket{1 \near t_1} \ldots \ket{N \near t_N}], \]
where \(h_i\) is near \(H\), and \(t_i\) is near \(T\). The states appearing in first term on the right-hand side of equation (4) have coordinate representations which are different from zero only when their arguments \((1,\ldots ,N)\) are all near \(H\), while those of the second term are different from zero only when they are all near \(T\). It is now evident that if any of the particles (say, the \(i\)-th particle) undergoes a hitting process, e.g., near the point \(h_i\), the multiplication prescription leads practically to the suppression of the second term in (4). Thus any spontaneous localization of any of the constituents amounts to a localization of the pointer. The hitting frequency is therefore effectively amplified proportionally to the number of constituents. Notice that, for simplicity, the argument makes reference to an almost rigid body, i.e., to one for which all particles are around \(H\) in one of the states of the superposition and around \(T\) in the other. It should however be obvious that what really matters in amplifying the reductions is the number of particles which are in different positions in the two states appearing in the superposition itself.
Under these premises we can now proceed to choose the parameters \(d\) and \(f\) of the theory, i.e., the localization accuracy and the mean localization frequency. The argument just given allows one to understand how one can choose the parameters in such a way that the quantum predictions for microscopic systems remain fully valid while the embarrassing macroscopic superpositions in measurement-like situations are suppressed in very short times. Accordingly, as a consequence of the unified dynamics governing all physical processes, individual macroscopic objects acquire definite macroscopic properties. The choice suggested in the GRW-model is:
\[\begin{align} \tag{5} f &= 10^{-16} \text{ s}^{-1} \\ d &= 10^{-5} \text{ cm} \end{align}\]
It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every \(10^{-7}\) seconds. With reference to the challenging version of the macro-objectification problem presented by Schrödinger with the famous example of his cat, J.S. Bell comments (1987, p.44): [within QMSL] the cat is not both dead and alive for more than a split second. Besides the extremely low frequency of the hittings for microscopic systems, also the fact that the localization width is large compared to the dimensions of atoms (so that even when a localization occurs it does very little violence to the internal economy of an atom) plays an important role in guaranteeing that no violation of well-tested quantum mechanical predictions is implied by the modified dynamics.
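A quick check of these timescales (a two-line Python calculation; the figure of \(10^{23}\) constituents for a macroscopic pointer is an assumed, Avogadro-scale number, not a value fixed by the model):

```python
f = 1e-16                       # mean hitting frequency per constituent [s^-1]
N_macro = 1e23                  # assumed number of constituents in a macroscopic pointer

t_micro = 1 / f                 # mean time between hittings for a single particle
t_macro = 1 / (N_macro * f)     # effective hitting time for the pointer (rate amplified by N)
print(f"single particle: {t_micro:.1e} s (~{t_micro / 3.15e7:.0e} yr)")
print(f"macroscopic pointer: {t_macro:.1e} s")   # ~1e-7 s, as quoted above
```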
Some remarks are appropriate. QMSL, being precisely formulated, allows one to locate precisely the ‘split’ between micro and macro, reversible and irreversible, quantum and classical. The transition between the two types of ‘regimes’ is governed by the number of particles which are well localized at positions further apart than \(10^{-5}\) cm in the two states whose coherence is going to be dynamically suppressed. In principle, the model is testable against quantum mechanics. However, for the above choice of the values of the parameters, its predictions do not contradict any already established fact about microsystems and macrosystems.
Concerning the choice of the parameters of the model, it has to be stressed that, as is obvious, the just-mentioned quantum-to-classical transition region depends crucially on their values. The situation concerning the two parameters is rather different; in fact \(d\) cannot be made smaller than \(10^{-5}\) cm without inducing unacceptable effects on the internal dynamics, e.g., of solids, and it cannot be made much larger if one wants macrosystems to end up being rather accurately localized. On the contrary, an appreciable variation of \(f\) turns out to be possible. With reference to this point we would like to mention that Adler (2003) has suggested changing its value by a factor of the order of \(10^9\). The reasons for this derive from requiring that the latent image formation in photography occurs immediately after a grain of the emulsion has been excited, and that when a human eye is hit by a few photons (the perceptual threshold being very low) reduction takes place in the rods of the eye. As we will discuss in what follows, if one takes the original GRW value for \(f\), reduction cannot occur in the rods (because a relatively small number of molecules, less than \(10^5\), are affected), but only during the transmission of the nervous signal within the brain, a process which involves the displacement of a number of ions of the order of \(10^{12}\).
It is interesting to remark that the drastic change suggested by Adler (2003) has physical implications which have already been experimentally falsified; see Curceanu et al. 2015, Bassi et al. 2010, Vinante et al. 2015 (Other Internet Resources), and Toros & Bassi 2016 (Other Internet Resources).
6. The Continuous Spontaneous Localization Model (CSL)
The model just presented (QMSL) has a serious drawback: it does not allow one to deal with systems containing identical constituents, because it does not respect the symmetry or antisymmetry requirements for such particles. A quite natural idea to overcome this difficulty would be that of relating the hitting process not to the individual particles but to the particle number density averaged over an appropriate volume. This can be done by introducing a new phenomenological parameter in the theory, which, however, can be eliminated by an appropriate limiting procedure (see below).
Another way to overcome this problem derives from injecting the physically appropriate principles of the GRW model within the original approach of P. Pearle. This line of thought has led to a quite elegant formulation of a dynamical reduction model, usually referred to as CSL (Pearle 1989; Ghirardi, Pearle, and Rimini 1990) in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector).
We will not enter into the rather technical details of this interesting development of the original GRW proposal, since the basic ideas and physical implications are precisely the same as those of the original formulation. Actually, one could argue that the above idea of tackling the problem of identical particles by considering the average particle number within an appropriate volume is correct. In fact it has been proved (Ghirardi, Pearle, and Rimini 1990) that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version of it.
7. A Simplified Version of CSL
With the aim of understanding the physical implications of the CSL model, such as the rate of suppression of coherence, we now make some simplifying assumptions. First, we assume that we are dealing with only one kind of particle (e.g., the nucleons); secondly, we disregard the standard Schrödinger term in the evolution; and, finally, we divide the whole space into cells of volume \(d^3\). We denote by \(\ket{n_1, n_2 ,\ldots}\) a Fock state in which there are \(n_i\) particles in cell \(i\), and we consider a superposition of two states \(\ket{n_1, n_2 , \ldots}\) and \(\ket{m_1, m_2 , \ldots}\) which differ in the occupation numbers of the various cells of the universe. With these assumptions it is quite easy to prove that the rate of suppression of the coherence between the two states (so that the final state is one of the two and not their superposition) is governed by the quantity:
\[\tag{6} \exp\{-f [(n_1 - m_1)^2 + (n_2 - m_2)^2 +\ldots]t\}, \]
all cells of the universe appearing in the sum within the square brackets in the exponent. Apart from differences relating to the identity of the constituents, the overall physics is quite similar to that implied by QMSL.
Equation 6 offers the opportunity of discussing the possibility of relating the suppression of coherence to gravitational effects. In fact, with reference to this equation we notice that the worst case scenario (from the point of view of the time necessary to suppress coherence) is the one corresponding to the superposition of two states for which the occupation numbers of the individual cells differ only by one unit. Indeed, in this case the amplifying effect of taking the square of the differences disappears. Let us then raise the question: how many nucleons (at worst) should occupy different cells, in order for the given superposition to be dynamically suppressed within the time which characterizes human perceptual processes? Since such a time is of the order of \(10^{-2}\) sec and \(f = 10^{-16}\) sec\(^{-1}\), the number of displaced nucleons must be of the order of \(10^{18}\), which corresponds, to a remarkable accuracy, to a Planck mass. This figure seems to point in the same direction as Penrose’s attempts to relate reduction mechanisms to quantum gravitational effects (Penrose 1989).
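A quick numerical check of this estimate may be helpful. The sketch below uses only the figures quoted above (the perception time \(10^{-2}\) sec and \(f = 10^{-16}\) sec\(^{-1}\)) and assumes the worst case of Equation 6, in which each displaced nucleon changes the occupation numbers by one unit:

```python
import math

f = 1e-16            # localization frequency per nucleon [1/s]
t_perception = 1e-2  # time scale of human perception [s]

def suppression(n_displaced, t):
    """Worst case of Eq. (6): each displaced nucleon contributes a term of order one to the sum."""
    return math.exp(-f * n_displaced * t)

# Number of displaced nucleons needed so that f * N * t ~ 1 within the perception time:
N_needed = 1.0 / (f * t_perception)
print(f"displaced nucleons needed: {N_needed:.0e}")                                    # ~1e+18
print(f"residual coherence at 10 ms for that N: {suppression(N_needed, t_perception):.2f}")  # e^-1 ~ 0.37
```

With \(10^{18}\) displaced nucleons the exponent is of order one at 10 ms, so the superposition is suppressed on the perceptual time scale, as stated in the text.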
Obviously, the model theory we are discussing implies various further physical effects which deserve to be discussed, since they might allow a test of the theory against standard quantum mechanics. For a review, see Bassi and Ghirardi 2003; Adler 2007; Bassi et al. 2013. We briefly list the most promising types of experiments which in the future might allow such a crucial test.
1. Effects in superconducting devices. A detailed analysis has been presented in (Ghirardi and Rimini 1990). As shown there, and as follows from estimates about possible effects for superconducting devices (Rae 1990; Gallis and Fleming 1990; Rimini 1995) and for the excitation of atoms (Squires 1991), it turns out not to be possible, with present technology, to perform clear-cut experiments allowing one to discriminate the model from standard quantum mechanics.
2. Loss of coherence in diffraction experiments with macromolecules. The group of Arndt and Zeilinger in Vienna has performed several diffraction experiments involving macromolecules. The best known include C\(_{60}\) (720 nucleons) (Arndt et al. 1999), C\(_{70}\) (840 nucleons) (Hackermueller et al. 2004) and C\(_{30}\)H\(_{12}\)F\(_{30}\)N\(_2\)O\(_4\) (1030 nucleons) (Gerlich et al. 2007). These experiments aim at testing the validity of the superposition principle towards the macroscopic scale. The challenge is very exciting, and near-future technology will probably allow experiments with systems containing up to \(10^6\) nucleons; accordingly, such experiments will be the ones imposing the most severe limitations on the parameters of Collapse theories.
3. Loss of coherence in opto-mechanical interferometers. Recently, an interesting proposal for testing the superposition principle by resorting to an experimental set-up involving a (mesoscopic) mirror has been advanced (Marshall et al. 2003). This stimulating proposal has led a group of scientists directly interested in Collapse Theories (Bassi et al. 2005) to check whether the proposed experiment might be a crucial one for testing dynamical reduction models against quantum mechanics. The problem is extremely subtle because the extension of the oscillations of the mirror is much smaller than the localization accuracy of GRW, so that the localization processes become almost ineffective. However, quite recently a detailed reconsideration of the physics of such systems has been performed, and it has allowed the relevant conclusion to be drawn that the proposal by Adler (2007) of changing the frequency of the GRW theory by a factor of the size he considered is untenable.
4. Spontaneous X-ray emission from Germanium. Collapse models not only forbid macroscopic superpositions from being stable; they also predict several other effects which are forbidden by the standard theory. One of these is the spontaneous emission of radiation from otherwise stable systems, like atoms. While the standard theory predicts that such systems—if not excited—do not emit radiation, collapse models allow for radiation to be produced. The emission rate has been computed both for free charged particles (Fu 1997) and for hydrogenic atoms (Adler et al. 2007). The theoretical predictions were compatible with the available experimental data (Fu 1997). At any rate, the importance of such experiments lies in the fact that—so far—they provide the strongest upper bounds on the collapse parameters (Adler et al. 2007). But this is not the whole story: very recently Curceanu et al. 2015, following this line of research, have been able to prove experimentally that the proposal by Adler (2007) of a drastic change of the localization frequency with respect to the one of the original GRW paper is definitely incompatible with the experimental data.
5. In recent years, another line of research has been proposed, one which makes direct reference to the way, discussed in Section 10, in which collapse models account for the psycho-physical correspondence. The suggested approach might lead to completely new and fundamentally different practical tests of Collapse theories. The basic facts concerning the proposal deserve to be mentioned. In almost all the physical situations we have analyzed, the appreciable dynamical changes of the system (typically, the spreading of the center-of-mass position of a macroscopic object) take a time (years) which is enormously longer than the time between two localizations \((10^{-7}\) sec). On the contrary, as we will discuss below, in the case of conscious perceptions the collapse time of two brain states in a superposition and the time necessary for the emergence of a definite perception are quite similar, and this has some (small but significant) implications concerning the probabilities of the outcomes. This point has been analyzed in detail and explicitly evaluated by resorting to a simple model of a quantum system subjected to reduction processes (Ghirardi et al. 2014). The idea is to consider a spin 1/2 particle whose spin rotates around the \(x\)-axis with a frequency of about one hundredth of that of the random measurements ascertaining whether its spin is UP or DOWN with respect to the \(z\)-axis. It turns out that, for a superposition with amplitudes \(a\) and \(b\) of the two eigenstates of S\(_z\), the probabilities of the two supervening perceptions associated with the two outcomes will differ by about 1% from those predicted by quantum mechanics, i.e., \(\lvert a\rvert^2\) and \(\lvert b\rvert^2\), respectively.
The test would also be quite interesting for the general meaning of collapse theories, because it would give practical evidence that, when a superposition of two different microscopic states, each able to trigger a precise (and different) perception, is presented to the observer, the brain actually collapses the wavefunction, yielding only one perception, a clear-cut indication that the standard theory cannot run the whole process.
Summarizing, we stress that, thanks to recent technological improvements, experiments in which one might test the deviations from Standard Quantum Theory implied by Collapse Models seem to have become more feasible. A great deal of work has been done, and is still going on, in this direction. The subject is developing rapidly: important papers have appeared, and interesting experimental work has been, and is being, performed. For a detailed technical analysis and for a precise specification of the limits which have been derived for the parameters \(d\) and \(f\), we refer the reader to the papers by Bassi et al. (2013), Donadi et al. (2013a,b), Bahrami et al. (2014), Großardt et al. (2015, Other Internet Resources), and Vinante et al. (2015).
8. Some remarks about Collapse Theories
A. Pais famously recalls in his biography of Einstein:
We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it (Pais 1982, p. 5).
In the context of Einstein’s remarks in Albert Einstein, Philosopher-Scientist (Schilpp 1949), we can regard this reference to the moon as an extreme example of ‘a fact that belongs entirely within the sphere of macroscopic concepts’, as is also a mark on a strip of paper that is used to register the outcome of a decay experiment, so that
as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously […] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program (p. 671).
while, at the same time, the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone (p. 674).
One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground between giving up a ‘classical description in space and time’ altogether (the moon is not there when nobody looks) and requiring that it be applicable also at the microscopic level (as within some kind of ‘hidden variables’ theory). It would seem that the pursuit of ‘realism’ was for Einstein more a remarkably successful program than an a priori commitment, and that in principle he would have accepted attempts requiring a radical change in our classical conceptions concerning microsystems, provided they nevertheless allowed one to take a macrorealist position matching our definite perceptions at this scale.
In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’, it has no definite spin in any direction, and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place (see, however, the remarks in Section 11). In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, in spite of the fact that the DRP program is not adding ‘hidden variables’ to the theory, it implies that the moon is definitely there even if no sentient being has ever looked at it. In the words of J. S. Bell, the DRP
allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms (Bell 1986, p. 364).
Such a program, as we have seen, is implemented by assuming only the existence of wave functions, and by proposing a unified dynamics that governs both microscopic processes and ‘measurements’. As regards the latter, no vague definitions are needed. The new dynamical equations govern the unfolding of any physical process, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, of no practical importance and no source of embarrassment.
We have not yet analyzed the implications about locality, but since in the DRP program no hidden variables are introduced, the situation can be no worse than in ordinary quantum mechanics: ‘by adding mathematical precision to the jumps in the wave function’, the GRW theory ‘simply makes precise the action at a distance of ordinary quantum mechanics’ (Bell 1987, p. 46). Indeed, a detailed investigation of the locality properties of the theory becomes possible, as shown by Bell himself (Bell 1987, p. 47). Moreover, as will become clear when we discuss the interpretation of the theory in terms of mass density, the QMSL and CSL theories account in a natural way for a behaviour of macroscopic objects corresponding to our definite perceptions about them, the main objective of Einstein’s requirements.
The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp:
The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level (Stapp 1989, p. 157).
9. Relativistic Dynamical Reduction Models
As soon as the GRW proposal appeared, it attracted the attention of J.S. Bell and stimulated him to look at it from the point of view of relativity theory. As he stated subsequently (Bell 1989a):
When I saw this theory first, I thought that I could blow it out of the water, by showing that it was grossly in violation of Lorentz invariance. That’s connected with the problem of ‘quantum entanglement’, the EPR paradox.
Actually, he had already investigated this point by studying the effect on the theory of a transformation mimicking a nonrelativistic approximation of a Lorentz transformation and he arrived (Bell 1987) at a surprising conclusion:
… the model is as Lorentz invariant as it could be in its nonrelativistic version. It takes away the ground of my fear that any exact formulation of quantum mechanics must conflict with fundamental Lorentz invariance.
What Bell had actually proved by resorting to a two-times formulation of the Schrödinger equation is that the model violates locality by violating outcome independence and not, as deterministic hidden variable theories do, parameter independence.
Indeed, with reference to this point we recall that, as is well known (Suppes and Zanotti 1976; van Fraassen 1982; Jarrett 1984; Shimony 1983; see also the entry on Bell’s Theorem), Bell’s locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony’s terminology, parameter independence and outcome independence. In view of the experimental violation of Bell’s inequality, one has to give up either or both of these assumptions. The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. Actually, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which specify completely the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in Ghirardi and Grassi (1996) it has been proved that it is impossible to build a genuinely relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence. Here we use the term genuinely invariant to denote a theory for which there is no (hidden) preferred reference frame. On the other hand, if locality is violated only by the occurrence of outcome dependence, then faster-than-light signaling cannot be achieved (Eberhard 1978; Ghirardi, Rimini, and Weber 1980). A few years after the just mentioned proof by Bell, it was shown in complete generality (Ghirardi, Grassi, Butterfield, and Fleming 1993) that the GRW and CSL theories, just like standard quantum mechanics, exhibit only outcome dependence. This is to some extent encouraging and shows that there are no reasons of principle making the project of building a relativistically invariant DRM unviable.
Let us be more specific about this crucial problem. P. Pearle was the first to propose (Pearle 1990) a relativistic generalization of CSL to a quantum field theory describing a fermion field coupled to a scalar meson field, enriched with the introduction of stochastic and nonlinear terms. A quite detailed discussion of this proposal was presented in (Ghirardi et al. 1990a), where it was shown that the theory enjoys all the properties which are necessary in order to meet the relativistic constraints. Pearle’s approach requires the precise formulation of the idea of stochastic Lorentz invariance. The proposal can be summarized in the following terms:
One considers a fermion field coupled to a meson field and puts forward the idea of inducing localizations for the fermions through their coupling to the mesons and a stochastic dynamical reduction mechanism acting on the meson variables. In practice, one considers Heisenberg evolution equations for the coupled fields and a Tomonaga-Schwinger CSL-type evolution equation, with a skew-hermitian coupling to a c-number stochastic potential, for the state vector. This approach has been systematically investigated in Ghirardi, Grassi, and Pearle (1990), to which we refer the reader for a detailed discussion. Here we limit ourselves to stressing that, under certain approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white-noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume of the energy of the meson field is infinite, because infinitely many mesons are created. This point has also been lucidly discussed by Bell (1989b) in the talk he delivered at Trieste on the occasion of the 25th anniversary of the International Centre for Theoretical Physics. This talk appeared under the title The Trieste Lecture of John Stewart Bell. For these reasons one cannot consider this a satisfactory example of a relativistic reduction model.
In the years following the just mentioned attempts there has been a flourishing of research aimed at obtaining the desired result. Let us briefly comment on it. As already mentioned, the source of the divergences is the assumption of point interactions between the quantum field operators in the dynamical equation for the statevector, or, equivalently, the white-noise character of the stochastic potential. With this aspect in mind, P. Pearle (1989), L. Diosi (1990) and A. Bassi and G.C. Ghirardi (2002) reconsidered the problem from the beginning by investigating nonrelativistic theories with nonwhite Gaussian noises. The problem turns out to be very difficult from the mathematical point of view, but steps forward have been made. In recent years, a precise formulation of the nonwhite generalization (Bassi and Ferialdi 2009) of the so-called QMUPL model, which represents a simplified version of GRW and CSL, has been proposed. Moreover, a perturbative approach for the CSL model has been worked out (Adler and Bassi 2007, 2008). Further work is necessary. This line of thought is very interesting at the nonrelativistic level; however, it is not yet clear whether it will lead to a real step forward in the development of relativistic theories of spontaneous collapse.
In the same spirit, Nicrosini and Rimini (Nicrosini 2003) tried to smear out the point interactions, without success, because in their approach a preferred reference frame had to be chosen in order to circumvent the nonintegrability of the Tomonaga-Schwinger equation.
Other interesting and different approaches have also been suggested. Among them we mention the one by Dove and Squires (Dove 1996), based on discrete rather than continuous stochastic processes, and those by Dowker and Herbauts (Dowker 2004a) and Dowker and Henson (Dowker 2004b), formulated on a discrete space-time.
Before going on, we consider it important to call attention to the fact that in precisely the same years similar attempts to obtain a relativistic generalization of the other existing ‘exact’ theory, i.e., Bohmian Mechanics, were under way, and that they too have encountered difficulties. Relevant steps are represented by a paper (Dürr 1999) resorting to a preferred spacetime slicing, by the investigations of Goldstein and Tumulka (Goldstein 2003), and by other scientists (Berndl et al. 1996). However, we must recognize that none of these attempts has led to a theory without observers, in the spirit of Bohmian mechanics, which is also fully satisfactory from the relativistic point of view, precisely because they are not genuinely Lorentz invariant in the sense we have made precise before. Mention should also be made of the attempt by Dewdney and Horton (Dewdney 2001) to build a relativistically invariant model based on particle trajectories.
Let us come back to the relativistic DRP. Some important changes have occurred quite recently. Tumulka (2006a) succeeded in proposing a relativistic version of the GRW theory for \(N\) non-interacting distinguishable particles, based on the consideration of a multi-time wavefunction whose evolution is governed by Dirac-like equations; it adopts as its Primitive Ontology (see the next section) the one which attaches a primary role to the space-time points at which spontaneous localizations occur, as originally suggested by Bell (1987). To my knowledge this represents the first proposal of a relativistic dynamical reduction mechanism which satisfies all relativistic requirements. In particular it is divergence free and foliation independent. However, it can deal only with systems containing a fixed number of noninteracting fermions.
At this point explicit mention should be made of the most recent steps which concern our problem. D. Bedingham (2011) following strictly the original proposal by Pearle (1990) of a quantum field theory inducing reductions based on a Tomonaga-Schwinger equation, has worked out an analogous model which, however, overcomes the difficulties of the original model. In fact, Bedingham has circumvented the crucial problems deriving from point interactions by (paying the price of) introducing, besides the fields characterizing the Quantum Field Theories he is interested in, an auxiliary relativistic field that amounts to a smearing of the interactions whilst preserving Lorentz invariance and frame independence. Adopting this point of view and taking advantage also of the proposal by Ghirardi (2000) concerning the appropriate way to define objective properties at any space-time point \(x\), he has been able to work out a fully satisfactory and consistent relativistic scheme for quantum field theories in which reduction processes may occur.
It has also to be mentioned that, taking once more advantage of the ideas of the paper by Ghirardi (2000), several of the just quoted authors (see Bedingham et al. 2013) have been able to prove that it is possible to work out a relativistic generalization of Collapse models when their primitive ontology is taken to be the mass density one which we will present in what follows for the nonrelativistic case.
In view of these results, and taking into account the interesting investigations concerning relativistic Bohmian-like theories, the conclusions that Tumulka has drawn concerning the status of attempts to account for the macro-objectification process from a relativistic perspective are well-founded:
A somewhat surprising feature of the present situation is that we seem to arrive at the following alternative: Bohmian mechanics shows that one can explain quantum mechanics, exactly and completely, if one is willing to pay with using a preferred slicing of spacetime; our model suggests that one should be able to avoid a preferred slicing of spacetime if one is willing to pay with a certain deviation from quantum mechanics,
a conclusion that he has rephrased and reinforced in (Tumulka 2006c):
Thus, with the presently available models we have the alternative: either the conventional understanding of relativity is not right, or quantum mechanics is not exact.
Very recently, a thorough and illuminating discussion of the important approach by Tumulka has been presented by Tim Maudlin (2011) in the third revised edition of his book Quantum Non-Locality and Relativity. Tumulka’s position is perfectly consistent with the present ideas concerning the attempts to transform relativistic standard quantum mechanics into an ‘exact’ theory in the sense which has been made precise by J. Bell. Since the only unified, mathematically precise and formally consistent formulations of the quantum description of natural processes are Bohmian mechanics and GRW-like theories, if one chooses the first alternative one has to accept the existence of a preferred reference frame, while in the second case one is not led to such a drastic change of position with respect to relativistic concepts but must accept that the ensuing theory disagrees with the predictions of quantum mechanics and acquires the status of a rival theory with respect to it.
In spite of the fact that the situation is, to some extent, still open and requires further investigations, it has to be recognized that the efforts which have been spent on such a program have made possible a better understanding of some crucial points and have thrown light on some important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance. Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. However, in the case of macroscopic systems, the impossibility of attributing to them local properties (or, equivalently, the ambiguity associated to such properties) lasts only for time intervals of the order of those necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the considered macroscopic event. Finally, recent investigations (Ghirardi and Grassi 1996; Ghirardi 2000) have shown that the very formal structure of the theory is such that it does not allow, even conceptually, to establish cause-effect relations between space-like events.
The conclusion of this section is that the question of whether a relativistic dynamical reduction program can find a satisfactory formulation seems to admit a positive answer.
A last comment. Recently, a paper by Conway and Kochen (Conway 2006, 2006b), which has raised a lot of interest, has been published. A few words about it are in order, to clarify possible misunderstandings. The first and most important aim of the paper is the derivation of what the authors have called The Free Will Theorem, putting forward the provocative idea that if human beings are free to make their choices about the measurements they will perform on one of a pair of far-away entangled particles, then one must admit that also the elementary particles involved in the experiment have free will. One might make several comments on this statement. For what concerns us here the relevant fact is that the authors claim that their theorem implies, as a byproduct, the impossibility of elaborating a relativistically invariant dynamical reduction model. A lively debate has arisen. At the end, Goldstein et al (Goldstein 2010) have made clear why the argument of Conway and Kochen is not pertinent. We may conclude that nothing in principle forbids a perfectly satisfactory relativistic generalization of the GRW theory, and, actually, as repeatedly stressed, there are many elements which indicate that this is actually feasible.
10. Collapse Theories and Definite Perceptions
Some authors (Albert and Vaidman 1989; Albert 1990, 1992) have raised an interesting objection concerning the emergence of definite perceptions within Collapse Theories. The objection is based on the fact that one can easily imagine situations leading to definite perceptions which nevertheless do not involve the displacement of a large number of particles up to the stage of the perception itself. These cases would then constitute actual measurement situations which cannot be described by the GRW theory, contrary to what happens for the idealized (according to the authors) situations considered in many presentations of it, i.e., those involving the displacement of some sort of pointer. To be more specific, the above papers consider a ‘measurement-like’ process whose output is the emission of a burst of a few photons, triggered by the position at which a particle hits a screen. This can easily be devised by considering, e.g., a Stern-Gerlach set-up in which a spin 1/2 microsystem, according to the value of its spin component, hits a fluorescent screen in different places and excites a small number of atoms which subsequently decay, emitting a small number of photons. The argument goes as follows: if one triggers the apparatus with a superposition of two spin states, then, since only a few atoms are excited, since the excitations involve displacements smaller than the characteristic localization distance of GRW, since GRW does not induce reductions on photon states and, finally, since the photon states immediately overlap, there is no way for the spontaneous localization mechanism to become effective in suppressing the ensuing superposition of the states ‘photons emerging from point \(A\) of the screen’ and ‘photons emerging from point \(B\) of the screen’. On the other hand, since the visual perception threshold is quite low (about 6-7 photons), there is no doubt that the naked eye of a human observer is sufficient to detect whether the luminous spot on the screen is at \(A\) or at \(B\). The conclusion follows: in the case under consideration no dynamical reduction can take place, and as a consequence no measurement is over and no outcome is definite, up to the moment in which a conscious observer perceives the spot.
Aicardi et al. (1991) have presented a detailed answer to this criticism. The crucial points of the argument are the following: it is agreed that in the case considered the superposition persists for long times (actually the superposition must persist, since, the system under consideration being microscopic, one could perform interference experiments which everybody would expect to confirm quantum mechanics). However, to deal in the appropriate and correct way with such a criticism, one has to consider all the systems which enter into play (electron, screen, photons and brain) and the universal dynamics governing all relevant physical processes. A simple estimate of the number of ions which are involved in the transmission of the nervous signal up to the higher visual cortex makes it perfectly plausible that, in the process, a sufficient number of particles are displaced by a sufficient spatial amount to satisfy the conditions under which, according to the GRW theory, the suppression of the superposition of the two nervous signals will take place within the time scale of perception.
To avoid misunderstandings, this analysis by no means amounts to attributing a special role to the conscious observer or to perception. The observer’s brain is the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such it is the only place where the reduction can and actually must take place according to the theory. It is extremely important to stress that if in place of the eye of a human being one puts in front of the photon beams a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as one of these devices, if no other such device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that the GRW theory requires a conscious observer in order that measurements have a definite outcome.
A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that GRW allows a physicalist explanation of conscious perception. It is only pointed out that, for what we know about the purely physical aspects of the process, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are verified. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman.
11. The Interpretation of the Theory and its Primitive Ontologies
As stressed in the opening sentences of this contribution, the most serious problem of standard quantum mechanics lies in its being extremely successful in telling us about what we observe, but basically silent on what is. This specific feature is closely related to the probabilistic interpretation of the statevector, combined with the completeness assumption of the theory. Notice that what is under discussion is the probabilistic interpretation, not the probabilistic character, of the theory. Collapse theories too have a fundamentally stochastic character but, due to their most specific feature, i.e., that of driving the statevector of any individual physical system into appropriate and physically meaningful manifolds, they allow for a different interpretation. One could even say (if one wants to avoid the conclusion that they too, like the standard theory, speak only of what we find) that they require a different interpretation, one that accounts for our perceptions at the appropriate, i.e., macroscopic, level.
We must admit that this opinion is not universally shared. According to various authors, the ‘rules of the game’ embodied in the precise formulation of the GRW and CSL theories represent all there is to say about them. However, this cannot be the whole story: stricter and more precise requirements than the purely formal ones must be imposed for a theory to be taken seriously as a fundamental description of natural processes (an opinion shared by J. Bell). This demand to go beyond the purely formal aspects of a theoretical scheme has been expressed, in an extremely interesting recent paper (Allori et al. 2008), as the necessity of specifying the Primitive Ontology (PO) of the theory. The fundamental requisite of the PO is that it should make absolutely precise what the theory is fundamentally about.
This is not a new problem; as already mentioned it has been raised by J. Bell since his first presentation of the GRW theory. Let me summarize the terms of the debate. Given that the wavefunction of a many-particle system lives in a (high-dimensional) configuration space, which is not endowed with a direct physical meaning connected to our experience of the world around us, Bell wanted to identify the ‘local beables’ of the theory, the quantities on which one could base a description of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, he (Bell 1987 p. 45) suggested that the ‘GRW jumps’, which we called ‘hittings’, could play this role. In fact they occur at precise times in precise positions of the three-dimensional space. As suggested in (Allori et al. 2008) we will denote this position concerning the PO of the GRW theory as the ‘flashes ontology.’
However, later, Bell himself suggested that the most natural interpretation of the wavefunction in the context of a collapse theory would be that it describes the ‘density […] of stuff’ in the 3N-dimensional configuration space (Bell 1990, p. 30), the natural mathematical framework for describing a system of \(N\) particles. Allori et al. (2008) have appropriately pointed out that this position amounts to avoiding commitment about the PO of the theory and, consequently, to leaving vague the precise and meaningful connections it permits one to establish between the mathematical description of the unfolding of physical processes and our perception of them.
The interpretation which, in the opinion of the present writer, is most appropriate for collapse theories has been proposed in (Ghirardi, Grassi and Benatti 1995) and has been referred to in Allori et al. 2008 as ‘the mass density ontology’. Let us briefly describe it.
First of all, various investigations (Pearle and Squires 1994) had made clear that QMSL and CSL needed a modification, i.e., the characteristic localization frequency of the elementary constituents of matter had to be made proportional to the mass characterizing the particle under consideration. In particular, the original frequency for the hitting processes, \(f = 10^{-16}\) sec\(^{-1}\), is the one characterizing the nucleons, while, e.g., electrons would suffer hittings with a frequency reduced by a factor of about 2000. Unfortunately we have no space to discuss here the physical reasons which make this choice appropriate; we refer the reader to the above paper, as well as to the recent detailed analysis by Peruzzi and Rimini (2000). With this modification, what the nonlinear dynamics strives to make ‘objectively definite’ is the mass distribution in the whole universe. Second, a deep critical reconsideration (Ghirardi, Grassi, and Benatti 1995) has made evident how the concept of ‘distance’ that characterizes the Hilbert space is inappropriate for accounting for the similarity or difference between macroscopic situations. Just to give a convincing example, consider three states \(\ket{h} , \ket{h^*}\) and \(\ket{t}\) of a macrosystem (let us say a massive macroscopic bulk of matter), the first corresponding to its being located here, the second to its having the same location but one of its atoms (or molecules) being in a state orthogonal to the corresponding state in \(\ket{h}\), and the third having exactly the same internal state as the first but being differently located (there). Then, despite the fact that the first two states are indistinguishable from each other at the macrolevel, while the first and the third correspond to completely different and directly perceivable situations, the Hilbert space distance between \(\ket{h}\) and \(\ket{h^*}\) is equal to that between \(\ket{h}\) and \(\ket{t}\).
When the localization frequency is related to the mass of the constituents, then, in complete generality (i.e., even when one is dealing with a body which is not almost rigid, such as a gas or a cloud), the mechanism leading to the suppression of the superpositions of macroscopically different states is fundamentally governed by the integral of the squared differences of the mass densities associated with the two superposed states. Actually, in the original paper the mass density at a point was identified with its average over the characteristic volume of the theory, i.e., \(10^{-15}\) cm\(^3\) around that point. It is however easy to convince oneself that there is no need to do so and that the mass density at any point, directly identified by the statevector (see below), is the appropriate quantity on which to base the ontology. Accordingly, we take the following attitude: what the theory is about, what is real ‘out there’ at a given space point \(\boldsymbol{x}\), is just a field, i.e., a variable \(m(\mathbf{x},t)\) given by the expectation value of the mass density operator \(M(\boldsymbol{x})\) at \(\boldsymbol{x}\), obtained by multiplying the mass of any kind of particle times the number density operator for the considered type of particle and summing over all possible types of particles which can be present:
\[\begin{align} \tag{7} m(\boldsymbol{x},t) &= \langle F,t \mid M(\boldsymbol{x}) \mid F,t \rangle; \\ M(\boldsymbol{x}) &= {\sum}_{(k)} m_{(k)}a^*_{(k)}(\boldsymbol{x})a_{(k)}(\boldsymbol{x}). \end{align}\]
Here \(\ket{F,t}\) is the statevector characterizing the system at the given time, and \(a^*_{(k)}(\boldsymbol{x})\) and \(a_{(k)}(\boldsymbol{x})\) are the creation and annihilation operators for a particle of type \(k\) at point \(\boldsymbol{x}\). It is obvious that within standard quantum mechanics such a function cannot be endowed with any objective physical meaning, due to the occurrence of linear superpositions which give rise to values that do not correspond to what we find in a measurement process or what we perceive. In the case of the GRW or CSL theories, if one considers only the states allowed by the dynamics one can give a description of the world in terms of \(m(\boldsymbol{x},t)\), i.e., one recovers a physically meaningful account of physical reality in the usual 3-dimensional space and time. To illustrate this crucial point we consider, first of all, the embarrassing situation of a macroscopic object in a superposition of two differently located position states. We have then simply to recall that in a collapse model relating reductions to mass density differences, the dynamics suppresses in extremely short times the embarrassing superpositions of such states, so as to recover the mass distribution corresponding to our perceptions. Let us come now to a microsystem and consider the equal weight superposition of two states \(\ket{h}\) and \(\ket{t}\) describing a microscopic particle in two different locations. Such a state gives rise to a mass distribution corresponding to 1/2 of the mass of the particle in each of the two considered space regions. This seems, at first sight, to contradict what is revealed by any measurement process. But in such a case the dynamics running all natural processes within GRW ensures that whenever one tries to locate the particle one will always find it in a definite position; e.g., one and only one of the Geiger counters which might be triggered by the passage of the particle will fire, just because a superposition of ‘a counter which has fired’ and ‘one which has not fired’ is dynamically forbidden.
This analysis shows that one can consider at all levels (the micro and the macroscopic ones) the field \(m(\mathbf{x},t)\) as accounting for ‘what is out there’, as originally suggested by Schrödinger with his realistic interpretation of the square of the wave function of a particle as representing the ‘fuzzy’ character of the mass (or charge) of the particle. Obviously, within standard quantum mechanics such a position cannot be maintained because ‘wavepackets diffuse, and with the passage of time become infinitely extended … but however far the wavefunction has extended, the reaction of a detector … remains spotty’, as appropriately remarked in (Bell 1990). As we hope to have made clear, the picture is radically different when one takes into account the new dynamics which succeeds perfectly in reconciling the spread and sharp features of the wavefunction and of the detection process, respectively.
It is also extremely important to stress that, by resorting to the quantity (7) one can define an appropriate ‘distance’ between two states as the integral over the whole 3-dimensional space of the square of the difference of \(m(\boldsymbol{x},t)\) for the two given states, a quantity which turns out to be perfectly appropriate to ground the concept of macroscopically similar or distinguishable Hilbert space states. In turn, this distance can be used as a basis to define a sensible psychophysical correspondence within the theory.
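The following sketch makes the contrast between the two notions of ‘distance’ concrete in a deliberately crude one-dimensional toy (all of it is illustration, not part of the theory: a single Gaussian wavepacket stands in for the centre of mass of the macroscopic body, a two-level ‘internal’ factor stands in for the one flipped atom distinguishing \(\ket{h^*}\) from \(\ket{h}\), and \(m(x)\) is taken to be the total mass times \(\lvert\psi(x)\rvert^2\)):

```python
import numpy as np

x  = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
mass = 1.0

def gaussian(x0, sigma=0.5):
    g = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dx)   # normalized spatial amplitude

# States as (spatial amplitude, internal 2-vector):
h      = (gaussian(0.0),  np.array([1.0, 0.0]))   # 'here', internal state |0>
h_star = (gaussian(0.0),  np.array([0.0, 1.0]))   # 'here', one atom flipped
t      = (gaussian(10.0), np.array([1.0, 0.0]))   # 'there'

def hilbert_distance(s1, s2):
    # || psi1 - psi2 || for normalized product states (spatial factor times internal factor)
    overlap = np.sum(np.conj(s1[0]) * s2[0]) * dx * np.vdot(s1[1], s2[1])
    return np.sqrt(max(2.0 - 2.0 * np.real(overlap), 0.0))

def mass_density(s):
    return mass * np.abs(s[0]) ** 2                # the internal factor moves no mass around

def mass_density_distance(s1, s2):
    diff = mass_density(s1) - mass_density(s2)
    return np.sqrt(np.sum(diff ** 2) * dx)

print("Hilbert distance    h vs h*:", hilbert_distance(h, h_star))       # sqrt(2): 'maximally far'
print("Hilbert distance    h vs t :", hilbert_distance(h, t))            # sqrt(2): equally far
print("mass-density dist.  h vs h*:", mass_density_distance(h, h_star))  # 0: macroscopically identical
print("mass-density dist.  h vs t :", mass_density_distance(h, t))       # > 0: perceivably different
```

In this toy the Hilbert space distance treats \(\ket{h^*}\) and \(\ket{t}\) as equally far from \(\ket{h}\), while the mass density distance vanishes for the macroscopically identical pair and is appreciable only for the perceivably different one, which is exactly the feature exploited above for the psychophysical correspondence.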
12. The Problem of the Tails of the Wave Function
In recent years, there has been a lively debate around a problem which has its origin, according to some of the authors who have raised it, in the fact that the localization processes, even though they correspond to multiplying the wave function by a Gaussian and thus lead to wave functions strongly peaked around the position of the hitting, nevertheless allow the final wavefunction to be different from zero over the whole of space. The first criticism of this kind was raised by A. Shimony (1990) and can be summarized by his sentence,
one should not tolerate tails in wave functions which are so broad that their different parts can be discriminated by the senses, even if very low probability amplitude is assigned to them.
After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated to a wave function which is different from zero over the whole space. If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding its pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken as unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem.
Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular assuming that its modulus squared gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require abandoning the probabilistic interpretation and replacing it either with the ‘flash ontology’ or with the ‘mass density ontology’ which we have discussed above.
Before entering into a detailed discussion of this subtle point we need to bring the problem into sharper focus. We cannot avoid making two remarks. Suppose one adopts, for the moment, the conventional quantum position. We agree that, within such a framework, the fact that wave functions never have strictly compact spatial support can be considered puzzling. However this is an unavoidable problem arising directly from the mathematical features (spreading of wave functions) and from the probabilistic interpretation of the theory, and not at all a problem peculiar to the dynamical reduction models. Indeed, the fact that, e.g., the wave function of the center of mass of a pointer or of a table does not have compact support has never been taken to be a problem for standard quantum mechanics. When, e.g., the wave function of the center of mass of a table is extremely well peaked around a given point in space, it has always been accepted that it describes a table located at a certain position, and that this corresponds in some way to our perception of it. It is obviously true that, for the given wave function, the quantum rules entail that if a measurement were performed the table could be found (with an extremely small probability) to be kilometers away, but this is not the measurement or the macro-objectification problem of the standard theory. The latter concerns a completely different situation, i.e., that in which one is confronted with a superposition, with comparable weights, of two macroscopically separated wave functions, both of which possess tails (i.e., have non-compact support) but are appreciably different from zero only in far-away narrow intervals. This is the really embarrassing situation which conventional quantum mechanics is unable to make understandable. To which perception of the position of the pointer (of the table) does this wave function correspond?
The implications for this problem of the adoption of the QMSL theory should be obvious. Within GRW, the superposition of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations, is dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval). Correspondingly, the possibility arises of attributing to the system the property of being in a definite place and thus of accounting for our definite perception of it. Summarizing, we stress once more that the criticism about the tails, as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden, is exclusively motivated by an uncritical commitment to the probabilistic interpretation of the theory, even for what concerns the psycho-physical correspondence: when this position is taken, states assigning non-exactly-vanishing probabilities to different outcomes of position measurements should correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models can a wave function have compact support, taking such a position leads one to conclude that it is just the linear character of the Hilbert space description of physical systems which has to be given up.
It ought to be stressed that there is nothing in the GRW theory which forbids, or makes it problematic, to assume that the localization function has compact support; but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even one with compact support at a given time, will instantaneously spread, acquiring a tail extending over the whole of space. If one sticks to the probabilistic interpretation and accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided.
The solution to the tails problem can only derive from abandoning completely the probabilistic interpretation and adopting a more physical and realistic interpretation relating ‘what is out there’ to, e.g., the mass density distribution over the whole universe. In this connection, the following example will be instructive. Take a massive sphere of normal density and mass of about 1 kg. Classically, the mass of this body would be totally concentrated within the radius of the sphere, call it \(r\). In QMSL, after the extremely short time interval in which the collapse dynamics leads to a ‘regime’ situation, and if one considers a sphere with radius \(r + 10^{-5}\) cm, the integral of the mass density over the rest of space turns out to be an incredibly small fraction (of the order of one part in \(10^{10^{15}}\)) of the mass of a single proton. Under such conditions, it seems quite legitimate to claim that the macroscopic body is localised within the sphere.
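Where such a stupendously small number comes from is, in essence, the tail of a Gaussian evaluated very many standard deviations out. The following sketch is purely illustrative: the width \(\sigma\) assumed for the collapsed centre-of-mass wave packet is an order-of-magnitude guess and not a figure taken from the text, and the calculation is carried out in log space because the number itself underflows any floating-point format:

```python
import math

d_loc = 1e-5    # cm: distance beyond the classical radius considered in the text
sigma = 1e-13   # cm: assumed width of the collapsed centre-of-mass wave packet (a guess)

z = d_loc / (sigma * math.sqrt(2.0))
# Gaussian tail fraction erfc(z), via the large-z asymptotics erfc(z) ~ exp(-z^2)/(z*sqrt(pi)),
# evaluated as a base-10 logarithm to avoid underflow:
log10_tail = (-z ** 2 - math.log(z * math.sqrt(math.pi))) / math.log(10.0)
print(f"fraction of |psi|^2 beyond {d_loc} cm: 10^({log10_tail:.2e})")
# the exponent comes out of order -1e15, i.e. a number of the order quoted in the text
```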
However, this quite reasonable conclusion has also been questioned: it has been claimed (Lewis 1997) that the very existence of the tails implies that the enumeration principle (i.e., the fact that the claim ‘particle 1 is within this box & particle 2 is within this box & … & particle \(n\) is within this box & no other particle is within this box’ implies the claim ‘there are \(n\) particles within this box’) does not hold, if one takes the mass density interpretation of collapse theories seriously. This paper has given rise to a long debate which it would be inappropriate to reproduce here.
We conclude this brief analysis by stressing once more that, in the opinion of the present writer, all the disagreements and misunderstandings concerning this problem have their origin in the fact that the authors who find difficulties in the proposed mass density interpretation of the Collapse Theories have not fully accepted the idea that the probabilistic interpretation of the wave function must be abandoned. For a recent reconsideration of the problem we refer the reader to the paper by Lewis (2003).
13. The Status of Collapse Models and Recent Positions about them
We recall that, as stated in Section 3, the macro-objectification problem has been at the centre of the most lively and most challenging debate originated by the quantum view of natural processes. According to the majority of those who adhere to the orthodox position, such a problem does not deserve particular attention: classical concepts are a logical prerequisite for the very formulation of quantum mechanics and, consequently, the measurement process itself, the dividing line between the quantum and the classical world, cannot and must not be investigated, but simply accepted. This position has been lucidly summarized by J. Bell himself (1981):
Making a virtue of necessity and influenced by positivistic and instrumentalist philosophies, many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one—if not actually immoral then certainly unprofessional
The situation has seen many changes in the course of time, and the necessity of making a clear distinction between what is quantum and what is classical has given rise to many proposals for ‘easy solutions’ to the problem which are based on the possibility, for all practical purposes (FAPP), of locating the splitting between these two faces of reality at different levels.
Then came Bohmian mechanics, a theory which has made clear, in a lucid and perfectly consistent way, that there is no reason of principle requiring a dichotomic description of the world. A universal dynamical principle governs all physical processes, and even though the theory ‘completely agrees with standard quantum predictions’ it accounts for wave-packet reduction in micro-macro interactions and for the classical behaviour of macroscopic objects.
As we have mentioned, the other consistent proposal, at the nonrelativistic level, of a conceptually satisfactory solution of the macro-objectification problem is represented by the Collapse Theories which are the subject of these pages. Contrary to Bohmian mechanics, they are rival theories of quantum mechanics, since they make different predictions (even though ones quite difficult to put into evidence) concerning various physical processes.
Let us now analyze other recent critical positions concerning the two approaches just mentioned (in what follows I will take advantage of the nice analysis in a paper which I have been asked to referee and whose author I do not know). Various physicists have criticized Bohm's approach on the ground that, being empirically indistinguishable from quantum mechanics, it is an example of ‘bad science’ or of ‘a degenerate research program’. Needless to say, I do not consider such criticisms appropriate; the conceptual advantages and the internal consistency of the approach render it an extremely appealing theoretical scheme (incidentally, one should not forget that it was precisely the critical investigation of this theory which led Bell to derive his famous and conceptually extremely relevant inequality). On the contrary, I am fully convinced that it is not a scientifically tenable position to consider acceptable a theory, like the standard one, which is incapable of accounting for the way in which it assumes measurement apparatuses to work and which, in order to deal with them, introduces a postulate that plainly contradicts its other basic assumptions.
This being the situation, one would think that theories like the GRW model would be exempt from an analogous charge, since they actually are (in principle) empirically different from the standard theory. For instance they disagree from such a theory since they forbid the occurrence of macroscopic massive entangled states. In spite of this, they have been the object of an analogous attack by the adherents to the ‘new orthodoxy’ (Bub 1997; Joos et al. 1996; Zurek, 1993) pointing out that environmental induced decoherence shows that, FAPP, collapse theories are simply phenomenological accounts of the reduced state to which one has to resort since one has no control of the degrees of freedom of the environment. When one takes such a position, one is claiming that, essentially, GRW cannot be taken as a fundamental description of nature, mainly because it suffers from the limitation of being empirically indistinguishable from the standard theory, provided such a theory is correctly applied taking into account the actual physical situation. Also in this case, and even at the level at which such an analysis is performed, the practical indistinguishability from the standard approach should not be regarded as a sufficient reason to not take seriously collapse models. In fact, there are many very well known and compelling reasons (see, e.g., Bassi and Ghirardi 2000; Adler 2003) to prefer a logically consistent unified theory to one which makes sense only due to the alleged practical impossibility of detecting the superpositions of macroscopically distinguishable states. At any rate, in principle, such theories can be tested against the standard one and it seems that such a challenge is already under investigation. .
But this is not the whole story. Another criticism, aimed to ‘deny’ the potential interest of collapse theories makes reference to the fact that within any such theory the ensuing dynamics for the statistical operator can be considered as the reduced dynamics deriving from a unitary (and, consequently, essentially a standard quantum) dynamics for the states of an enlarged Hilbert space of a composite quantum system \(S+E\) involving, besides the physical system \(S\) of interest, an ancilla \(E\) whose degrees of freedom are completely unaccessible: due to the quantum dynamical semigroup nature of the evolution equation for the statistical operator, any GRW-like model can always be seen as a phenomenological model deriving from a standard quantum evolution on a larger Hilbert space. In this way, the unitary deterministic evolution characterizing quantum mechanics would be fully restored.
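To make the structure at issue explicit, here is a schematic sketch using the standard single-particle form of the GRW master equation, quoted only as an illustration (here \(\lambda\) is the localization rate, \(\alpha\) the inverse square of the localization width and \(\hat{\mathbf{q}}\) the position operator of the particle):

\[ \frac{d\rho(t)}{dt} = -\frac{i}{\hbar}\,[H,\rho(t)] - \lambda\left(\rho(t) - \int d^{3}x\, L_{\mathbf{x}}\,\rho(t)\,L_{\mathbf{x}}\right), \qquad L_{\mathbf{x}} = \left(\frac{\alpha}{\pi}\right)^{3/4} e^{-\frac{\alpha}{2}(\hat{\mathbf{q}}-\mathbf{x})^{2}} . \]

This equation is of the Lindblad (quantum dynamical semigroup) form, with Lindblad operators \(\sqrt{\lambda}\,L_{\mathbf{x}}\), and it is precisely this structure which guarantees, via the standard dilation theorems, that the same statistical-operator dynamics can be obtained as the reduced dynamics of a unitary evolution on the enlarged space of \(S+E\). What no such dilation can reproduce, however, is the stochastic evolution of the individual state vectors, which is the point of the reply given below.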
Apart from the obvious remark that such a critical attitude completely fails to grasp—and indeed, purposefully ignores—the most important feature of collapse theories, i.e., of dealing with individual quantum systems and not with statistical ensembles and of yielding a perfectly satisfactory description, matching our perceptions concerning individual macroscopic systems, invoking an unaccessible ancilla to account for the nonlinear and stochastic character of GRW-type theories is once more a purely verbal way of avoiding facing the real puzzling aspects of the quantum description of macroscopic systems. This is not the only negative aspect of such a position; any attempt considering legitimate to introduce unaccessible entities in the theory, when one takes into consideration that there are infinitely possible and inequivalent ways of doing this, amounts really to embarking oneself in a ‘degenerate research program’.
Other reasons for ignoring the dynamical reduction program have been put forward recently by the community of scientists involved in the interesting and exciting field of quantum information. We will not spend too much time in analyzing and discussing the new position about the foundational issues which have motivated the elaboration of collapse theories. The crucial fact is that, from this perspective, one takes the theory not to be about something real ‘occurring out there’ in a real word, but simply about information. This point is made extremely explicit in a recent paper (Zeilinger 2005):
information is the most basic notion of quantum mechanics, and it is information about possible measurement results that is represented in the quantum state. Measurement results are nothing more than states of the classical apparatus used by the experimentalist. The quantum system then is nothing other than the consistently constructed referent of the information represented in the quantum state.
It is clear that if one takes such a position almost all motivations for being worried by the measurement problem disappear, and with them the reasons to work out what Bell has called ‘an exact version of quantum mechanics’. The most appropriate reply to this type of criticism is to recall that J. Bell (1990) included ‘information’ among the words which must have no place in a formulation with any pretension to physical precision. In particular, he stressed that one cannot even mention information unless one has given a precise answer to the following two questions: Whose information? and Information about what?
A much more serious attitude, adopted by many authors, is to call attention to the fact that, since collapse theories are rival theories with respect to standard quantum mechanics, they lead to the identification of experimental situations which would allow, in principle, crucial tests discriminating between the two. As we have discussed above, such fully discriminating tests presently seem not to be completely out of reach.
14. Summary
We hope to have succeeded in giving a clear picture of the ideas, the implications, the achievements and the problems of the DRP. We conclude by stressing once more our position with respect to the Collapse Theories. Their interest derives entirely from the fact that they have given some hints about a possible way out from the difficulties characterizing standard quantum mechanics, by proving that explicit and precise models can be worked out which agree with all known predictions of the theory and which nevertheless allow one, on the basis of a universal dynamics governing all natural processes, to overcome in a mathematically clean and precise way the basic problems of the standard theory. In particular, the Collapse Models show how one can work out a theory that makes it perfectly legitimate to take a macrorealistic position about natural processes, without contradicting any of the experimentally tested predictions of standard quantum mechanics. Finally, they might give precise hints about where to look in order to put into evidence, experimentally, possible violations of the superposition principle.
• Adler, S., 2003, “Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson”, Stud.Hist.Philos.Mod.Phys., 34: 135.
• Adler, S., 2007, “Lower and Upper Bounds on CSL Parameters from Latent Image Formation and IGM Heating”, Journal of Physics, A40: 2935.
• Adler, S. and Bassi, A., 2007, “Collapse models with non-white noises”, Journal of Physics, A40: 15083.
• –––, 2008, “Collapse models with non-white noises II”, Journal of Physics, A41: 395308.
• Adler, S. and Ramazanoglu, F.M., 2007, “Photon emission rate from atomic systems in the CSL model”, Journal of Physics, A40: 13395.
• Aicardi, F., Borsellino, A., Ghirardi, G.C., and Grassi, R., 1991, “Dynamic models for state-vector reduction—Do they ensure that measurements have outcomes?”, Foundations of Physics Letters, 4: 109.
• Albert, D.Z., 1990, “On the Collapse of the Wave Function”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• –––, 1992, Quantum Mechanics and Experience, Harvard University Press, Cambridge, Mass.
• Albert, D.Z. and Vaidman, L., 1989, “On a proposed postulate of state reduction”, Physics Letters, A139: 1.
• Allori, V., Goldstein, S., Tumulka, R., and Zanghi, N., 2008, “On the Common Structure of Bohmian Mechanics and the Ghirardi-Rimini-Weber Theory”, British Journal for the Philosophy of Science, 59: 353–389.
• Arndt, M., Nairz, O., Vos-Andreae, J., van der Zouw, G. and Zeilinger, A., 1999, “Wave-particle duality of C60 molecules”, Nature, 401: 680.
• Bahrami, M., Donadi, S., Ferialdi, L., Bassi, A., Curceanu, C., Di Domenico, A., Hiesmayr, B.C., 2014, “Are collapse models testable with quantum oscillating systems? The case of neutrinos, kaons, chiral molecules”, Nature: Scientific Reports, 3: 1952.
• Bassi, A. and Ferialdi, L., 2009, “Non-Markovian quantum trajectories: An exact result”, Physical Review Letters, 103: 050403.
• –––, 2009, “Non-Markovian dynamics for a free quantum particle subject to spontaneous collapse in space: general solution and main properties”, Physical Review, A 80: 012116.
• Bassi, A., D.-A. Deckert, and Ferialdi, L., 2010, “Breaking quantum linearity: constraints from human perception and cosmological implications ”, Europhysics Letters, 92: 5006.
• Bassi, A. and Ghirardi G.C., 2000, “A general argument against the universal validity of the superposition principle”, Physics Letters, A 275: 373.
• –––, 2001, “Counting marbles: Reply to Clifton and Monton”, British Journal for the Philosophy of Science, 52: 125.
• –––, 2002, “Dynamical reduction models with general Gaussian noises”, Physical Review A, 65: 042114.
• –––, 2003, “Dynamical Reduction Models”, Physics Reports, 379: 257.
• Bassi, A., Ippoliti, E. and Adler, S., 2005, “Towards Quantum Superpositions of a Mirror: an Exact Open Systems Analysis”, Journal of Physics, A38: 2715.
• Bassi, A., Lochan, K., Satin, S., Singh, T.P., and Ulbricht, H., 2013, “Models of Wave-function Collapse, Underlying Theories, and Experimental Tests”, Reviews of Modern Physics, 85: 471.
• Bedingham, D., 2011, “Relativistic state reduction dynamics”, Foundations of Physics, 41: 686.
• Bedingham, D., Duerr, D., Ghirardi, G.C., Goldstein, S., Tumulka, R. and Zanghi, N. 2014, “Matter Density and Relativistic Models of Wave Function Collapse”, Journal of Statistical Physics, 154: 623.
• Bell, J.S., 1981, “Bertlmann’s socks and the nature of reality”, Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42: 41.
• –––, 1986, “Six possible worlds of quantum mechanics”, in Proceedings of the Nobel Symposium 65: Possible Worlds in Arts and Sciences, de Gruyter, New York.
• –––, 1987, “Are there quantum jumps?”, in Schrödinger—Centenary Celebration of a Polymath, C.W. Kilmister (ed.), Cambridge University Press, Cambridge.
• –––, 1989a, “Towards an Exact Quantum mechanics”, in Themes in Contemporary Physics II, S. Deser, R.J. Finkelstein (eds.), World Scientific, Singapore.
• –––, 1989b, “The Trieste Lecture of John Stuart Bell”, Journal of Physics, A40: 2919.
• –––, 1990, “Against ‘measurement’”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• Berndl, K., Duerr, D., Goldstein, S., Zanghi, N., 1996 , “Nonlocality, Lorentz Invariance, and Bohmian Quantum Theory”, Physical Review , A53: 2062.
• Bohm, D., 1952, “A suggested interpretation of the quantum theory in terms of hidden variables. I & II.” Physical Review, 85: 166, ibid., 85: 180.
• Bohm, D. and Bub, J., 1966, “A proposed solution of the measurement problem in quantum mechanics by a hidden variable theory”, Reviews of Modern Physics, 38: 453.
• Born, M., 1971, The Born-Einstein Letters, Walter and Co., New York.
• Brown, H.R., 1986, “The insolubility proof of the quantum measurement problem”, Foundations of Physics, 16: 857.
• Bub, J., 1997, “Interpreting the Quantum World”, Cambridge University Press, Cambridge.
• Busch, P. and Shimony, A., 1996, “Insolubility of the quantum measurement problem for unsharp observables”, Studies in History and Philosophy of Modern Physics, 27B: 397.
• Clifton, R. and Monton, B., 1999a, “Losing your marbles in wavefunction collapse theories”, British Journal for the Philosophy of Science, 50: 697.
• –––, 1999b, “Counting marbles with ‘accessible’ mass density: A reply to Bassi and Ghirardi”, British Journal for the Philosophy of Science, 51: 155.
• Conway, J. and Kochen, S., 2006a, “The Free Will Theorem”, to appear in Foundations of Physics; also quant-ph/0604079.
• –––, 2006b, “On Adler's Conway-Kochen Twin Argument”, quant-ph/0610147, to appear in Foundations of Physics.
• –––, 2007, “Reply to Comments of Bassi, Ghirardi and Tumulka on the Free Will Theorem”, quant-ph/0701016, to appear in Foundations of Physics.
• Curceanu, C., Hiesmayr, B.C., and Piscicchia, K., 2015, “X-rays help to unfuzzy the concept of measurement”, Journal of Advances in Physics, 4: 263.
• Dowker, F. and Herbauts, I., 2004a, “Simulating Causal Collapse Models”, Classical and Quantum Gravity, 21: 2936.
• –––, 2004b, “A Spontaneous Collapse Model on a Lattice”, Journal of Statistical Physics, 115: 1394.
• Dowker F. and Henson J., 2004, “Spontaneous collapse models on a lattice”, Journal of Statistical Physics, 115: 1327.
• d'Espagnat, B., 1971, “Conceptual Foundations of Quantum Mechanics”, Reading, MA: W.A. Benjamin.
• Dirac, P.A.M., 1948, Quantum Mechanics, Oxford: Clarendon Press.
• Dewdney, C. and Horton, G., 2001, “A non-local, Lorentz-invariant, hidden-variable interpretation of relativistic quantum mechanics based on particle trajectories”, Journal of Physics A, 34: 9871.
• Diosi, L., 1990, “Relativistic theory for continuous measurement of quantum fields”, Physical Review A, 42: 5086.
• Donadi, S., Bassi, A., Curceanu, C., Di Domenico, A. and Hiesmayr, H., 2013a, “Are Collapse Models Testable via Flavor Oscillations? ” Foundations of Physics, 43: 1066.
• Donadi, S., Bassi, A., Curceanu, C., Ferialdi, L., 2013b, “The effect of spontaneous collapses on neutrino oscillations” Foundations of Physics, 43: 1066.
• Dove, C. and Squires, E.J., 1995 “Symmetric Versions of Explicit Wavefunctions Collapse Models”, Foundations of Physics A, 25: 1267.
• Dürr, D., Goldstein, S., Münch-Berndl, K., Zanghi, N., 1999, “Hypersurface Bohm—Dirac models”, Physical Review, A60: 2729.
• Eberhard, P., 1978, “Bell’s theorem and different concepts of locality”, Nuovo Cimento, 46B: 392.
• Fine, A., 1970, “Insolubility of the quantum measurement problem”, Physical Review, D2: 2783.
• Fonda, L., Ghirardi, G.C., and Rimini A., 1978, “Decay theory of unstable quantum systems”, Reports on Progress in Physics, 41: 587.
• Fu, Q., 1997, “Spontaneous radiation of free electrons in a nonrelativistic collapse model”, Physical Review, A56: 1806.
• Gallis, M.R. and Fleming, G.N., 1990, “Environmental and spontaneous localization”, Physical Review, A42: 38.
• Gerlich, S., Hackermüller, L., Hornberger, K., Stibor, A., Ulbricht, H., Gring, M., Goldfarb, F., Savas, T., Müri, M., Mayor, M and Arndt, M., 2007, “A Kapitza-Dirac-Talbot-Lau interferometer for highly polarizable molecules”, Nature Physics, 3: 711.
• Ghirardi, G.C., 2000, “Local measurements of nonlocal observables and the relativistic reduction process”, Foundations of Physics, 30: 1337.
• –––, 2007, “Some reflections inspired by my research activity in quantum mechanics”, Journal of Physics A, 40: 2891.
• Ghirardi, G.C. and Grassi, R., 1996, “Bohm’s Theory versus Dynamical Reduction”, in Bohmian Mechanics and Quantum Theory: an Appraisal, J. Cushing et al. (eds), Kluwer, Dordrecht.
• Ghirardi, G.C., Grassi, R., and Benatti, F., 1995, “Describing the macroscopic world—Closing the circle within the dynamical reduction program”, Foundations of Physics, 25: 5.
• Ghirardi, G.C., Grassi, R., Butterfield, J., and Fleming, G.N., 1993, “Parameter dependence and outcome dependence in dynamic models for state-vector reduction”, Foundations of Physics, 23: 341.
• Ghirardi, G.C., Grassi, R., and Pearle, P., 1990, “Relativistic dynamic reduction models—General framework and examples”, Foundations of Physics, 20: 1271.
• Ghirardi, G.C., Pearle, P., and Rimini, A., 1990, “Markov-processes in Hilbert-space and continuous spontaneous localization of systems of identical particles”, Physical Review, A42: 78.
• Ghirardi, G.C. and Rimini, A., 1990, “Old and New Ideas in the Theory of Quantum Measurement”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York .
• Ghirardi, G.C., Rimini, A., and Weber, T., 1980, “A general argument against superluminal transmission through the quantum-mechanical measurement process”, Lettere al Nuovo Cimento, 27: 293.
• –––, 1986, “Unified dynamics for microscopic and macroscopic systems”, Physical Review, D34: 470.
• Ghirardi, G.C. and Romano, R., 2014, “Collapse Models and Perceptual Processes”, Journal of Physics: Conference Series, 504: 012022.
• Gisin, N., 1984, “Quantum measurements and stochastic processes”, Physical Review Letters, 52: 1657, and “Reply”, ibid., 53: 1776.
• –––, 1989, “Stochastic quantum dynamics and relativity”, Helvetica Physica Acta, 62: 363.
• Goldstein, S. and Tumulka, R., 2003, “Opposite arrows of time can reconcile relativity and nonlocality”, Classical and Quantum Gravity, 20: 557.
• Goldstein, S., Tausk, D.V., Tumulka, R., and Zanghi, N., 2010, “What does the Free Will Theorem Actually Prove?”, Notices of the American Mathematical Society, 57: 1451.
• Gottfried, K., 2000, “Does Quantum Mechanics Carry the Seeds of its own Destruction?”, in Quantum Reflections, D. Amati et al. (eds), Cambridge University Press, Cambridge.
• Hackermüller, L., Hornberger, K., Brexger, B., Zeilinger, A. and Arndt, M., 2004, “Decoherence of matter waves by thermal emission of radiation”, Nature, 427: 711.
• Jarrett, J.P., 1984, “On the physical significance of the locality conditions in the Bell arguments”, Nous, 18: 569.
• Joos, E., Zeh, H.D., Kiefer, C., Giulini, D., Kupsch, J., and Stamatescu, I.-O., 1996, “Decoherence and the Appearance of a Classical World”, Springer, Berlin.
• Lewis, P., 1997, “Quantum mechanics, orthogonality and counting”, British Journal for the Philosophy of Science, 48: 313.
• –––, 2003, “Four strategies for dealing with the counting anomaly in spontaneous collapse theories of quantum mechanics”, International Studies in the Philosophy of Science, 17: 137.
• Marshall, W., Simon, C., Penrose, R. and Bouwmeester, D., 2003, “Towards quantum superpositions of a mirror”, Physical Review Letters, 91: 130401.
• Maudlin, T., 2011, Quantum Non-Locality and Relativity Wiley-Blackwell.
• Nicrosini, O. and Rimini, A., 2003, “Relativistic spontaneous localization: a proposal”, Foundations of Physics, 33: 1061.
• Pais, A., 1982, Subtle is the Lord, Oxford University Press, Oxford.
• Pearle, P., 1976, “Reduction of statevector by a nonlinear Schrödinger equation”, Physical Review, D13: 857.
• –––, 1979, “Toward explaining why events occur”, International Journal of Theoretical Physics, 18: 489 .
• –––, 1989, “Combining stochastic dynamical state-vector reduction with spontaneous localization”, Physical Review, A39: 2277.
• –––, 1990, “Toward a Relativistic Theory of Statevector Reduction”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• –––, 1999, “Collapse Models”, in Open Systems and measurement in Relativistic Quantum Theory, H.P. Breuer and F. Petruccione (eds.), Springer, Berlin.
• –––, 1999b, “Relativistic Collapse Model With Tachyonic Features”, Physical Review, A59: 80.
• Pearle, P. and Squires, E., 1994, “Bound-state excitation, nucleon decay experiments, and models of wave-function collapse”, Physical Review Letters, 73: 1.
• Penrose, R., 1989, The Emperor’s New Mind, Oxford University Press, Oxford.
• Peruzzi, G. and Rimini, A., 2000, “Compoundation invariance and Bohmian mechanics”, Foundations of Physics, 30: 1445.
• Rae, A.I.M., 1990, “Can GRW theory be tested by experiments on SQUIDs?”, Journal of Physics, A23: 57.
• Rimini, A., 1995, “Spontaneous Localization and Superconductivity”, in Advances in Quantum Phenomena, E. Beltrametti et al. (eds.), Plenum, New York.
• Schrödinger, E., 1935, “Die gegenwärtige Situation in der Quantenmechanik”, Naturwissenschaften, 23: 807.
• Schilpp, P.A. (ed.), 1949, Albert Einstein: Philosopher-Scientist, Tudor, New York.
• Shimony, A., 1974, “Approximate measurement in quantum-mechanics. 2”, Physical Review, D9: 2321.
• –––, 1983, “Controllable and uncontrollable non-locality”, in Proceedings of the International Symposium on the Foundations of Quantum Mechanics, S. Kamefuchi et al. (eds), Physical Society of Japan, Tokyo.
• –––, 1989, “Search for a worldview which can accommodate our knowledge of microphysics”, in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.
• –––, 1990, “Desiderata for modified quantum dynamics”, in PSA 1990, Volume 2, A. Fine, M. Forbes and L. Wessels (eds), Philosophy of Science Association, East Lansing, Michigan.
• Squires, E., 1991, “Wave-function collapse and ultraviolet photons”, Physics Letters, A 158: 431.
• Stapp, H.P., 1989, “Quantum nonlocality and the description of nature”, in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.
• Suppes, P. and Zanotti, M., 1976, “On the determinism of hidden variables theories with strict correlation and conditional statistical independence of observables”, in Logic and Probability in Quantum Mechanics, P. Suppes (ed.), Reidel, Dordrecht.
• Tumulka, R., 2006a, “A Relativistic Version of the Ghirardi-Rimini-Weber Model”, Journal of Statistical Physics, 125: 821.
• –––, 2006b, “On Spontaneous Wave Function Collapse and Quantum Field Theory”, Proceedings of the Royal Society, London, A462: 1897.
• –––, 2006c, “Collapse and Relativity”, in Quantum Mechanics: Are there Quantum Jumps? and On the Present Status of Quantum Mechanics, A. Bassi, D. Dürr, T. Weber and N. Zanghi (eds), AIP Conference Proceedings 844, American Institute of Physics
• –––, 2007, “Comment on ‘The Free Will Theorem’”, to appear in Foundations of Physics; also quant-ph/0611283.
• van Fraassen, B., 1982, “The Charybdis of Realism: Epistemological Implications of Bell’s Inequality”, Synthese, 52: 25.
• Zeilinger, A., 2005, “The message of the quantum”, Nature, 438: 743.
• Zurek, W.H., 1993, “Decoherence—A reply to comments”, Physics Today, 46: 81.
Copyright © 2016 by Giancarlo Ghirardi |
1cbc9a371cb8d6cb | CONSIDER a verbal description of the effect of gravity: drop a ball, and it will fall.
That is a true enough fact, but fuzzy in the way that frustrates scientists. How fast does the ball fall? Does it fall at constant rate, or accelerate? Would a heavier ball fall faster? More words, more sentences could provide details, swelling into an unwieldy yet still incomplete paragraph.
The wonder of mathematics is that it captures precisely in a few symbols what can only be described clumsily with many words. Those symbols, strung together in meaningful order, make equations -- which in turn constitute the world's most concise and reliable body of knowledge. And so it is that physics offers a very simple equation for calculating the speed of a falling ball.
Readers of Physics World magazine recently were asked an interesting question: Which equations are the greatest?
Dr. Robert P. Crease, a professor of philosophy at the State University of New York at Stony Brook and a historian at Brookhaven National Laboratory, posed the question in his Critical Point column and received 120 responses, nominating 50 different equations. Some were nominated for the sheer beauty of their simplicity, some for the breadth of knowledge they capture, others for historical importance. In general, Dr. Crease said, a great equation "reshapes perception of the universe."
The mathematical equation providing the speed of a falling ball is just four symbols long: v = gt.
With it, you can calculate the ball's speed 2.5 seconds after release. (That's g, the acceleration of gravity, which is 32 feet per second squared, multiplied by 2.5 seconds, giving an answer of 80 feet per second.)
This equation, a mainstay of high school physics, was not among those nominated as the greatest of all time, which is not surprising, because its use is limited.
The pull of gravity varies with distance from the Earth's surface, and the equation also suggests that an object's speed could go on increasing toward infinity, past the known limit of the speed of light.
The top vote-getters in the magazine poll were Maxwell's equations -- a set of four that describe the interplay between electric and magnetic fields -- and Euler's equation, a purely mathematical construct that finds wide use in theoretical physics.
"It combines rational and irrational numbers to get zero," Dr. Crease said. "It's bizarre."
Among the other nominees were the all-familiar E=mc2 from Einstein, which equates energy and matter; the Pythagorean theorem; and Isaac Newton's F=ma.
Prominent scientists have their own favorites. Dr. Brian Greene, a theorist at Columbia University and author of "The Elegant Universe," cites Einstein's general relativity equations, which describe how matter warps the fabric of space, and the Schrödinger equation, the fundamental equation of quantum mechanics.
"With a mere handful of symbols, those equations describe almost all phenomena in the universe," he said. "It is so amazing how so much of the universe is encapsulated in a few symbols."
Dr. Neil deGrasse Tyson, director of the Hayden Planetarium, said he was disappointed that E=mc2 did not receive more votes. "I think the general physics community, they're a little bored with the equation," he said. "It's risen to the level of icon that people no longer pay attention to."
But Dr. Tyson said that the equation was a fundamental underpinning not only of the universe, but also of the first five chapters of his book "Origins."
"It's simple, yet profound," he said. "I'd be less impressed if it were a big complicated equation."
A half-dozen of Dr. Crease's respondents, including Richard Harrison of Calgary, Alberta, chose one of the simplest possible equations.
Mr. Harrison wrote: "'1 + 1 = 2' is the fairy tale of mathematics, the first equation I taught my son, the first expression of the miraculous power of the mind to change the real world. I remember my son holding up the index finger, the 'one finger,' of each hand as he learned the expression, and the moment of wonder, perhaps his first of true philosophical wonder, when he saw that the two fingers, separated by his whole body, could be joined in a single concept in his mind."
|
44ce52f7570454bf | Why did Nature Invent Spin?
13 thoughts on “Why did Nature Invent Spin?”
1. There are papers on this sort of thing, Alexander, but they struggle to get into journals, and then they struggle to get any publicity. See for example http://www.cybsoc.org/electron.pdf and look at the picture on page 6. Note the dark line. It’s essentially the same as Qiu-Hong Hu’s helix at http://arxiv.org/abs/physics/0512265. Also look at gamma-gamma pair production, electron diffraction, the Einstein-de Haas effect, magnetic moment, the wiki atomic orbitals article where you can read that “electrons exist as standing waves”, and of course annihilation. The electron is a 511keV photon perpetually displacing its own path into a closed Dirac’s-belt path. This only works at 511keV because that wavelength “fits” with h. See the spindle-sphere torus at http://www.antiprism.com/album/860_tori/imagelist.html and try to imagine it without a surface. It has as much surface as a subterranean seismic wave. The electron’s “intrinsic” spin is something like the intrinsic spin of a cyclone. Take that away using an anticyclone, and all you’ve got is wind. Take the electron’s spin away with a positron, and all you’ve got is light.
A medical doctor called Andrew Worsley told me about something else that looks interesting: Planck length is l=√(ћG/c³). Replace √(ћG) with 4πn where n is a suitable value with the correct dimensionality. You’ve still got your Planck length. But now set n to 1, and work out 4πn/√(c³). There’s a binding energy adjustment, but it’s small, like 2.002319 compared to 2. Look at the Watt balance section of the Wikipedia kilogram article. That refers to g rather than G, but if you can define the kilogram using h and c and not much else, surely you can do the same for the mass of the electron. Photon momentum is resistance to change-in-motion for a wave propagating linearly. Electron mass is resistance to change-in-motion for a wave going round and round. See http://www.tardyon.de/mirror/hooft/hooft.htm but note that the ‘t Hooft here is not the Nobel ‘t Hooft.
2. Rotations in 3D in our theory are not physical rotations, but recalculation formulas from one reference frame to another one oriented differently. That is why, for example, there is no angular velocity of rotation in such formulas. And in 3D space there are only 2π different angles (reference frames), speaking figuratively.
3. Lol, this is so wrong I don’t know where to begin – what a target-rich environment.
1) You forgot to include the potential energy term in Schrodinger’s equation. Without it you will have no prediction for the hydrogen atom.
2) You can’t derive Schrodinger’s equation from the relationship between energy and momentum. At best, it acts as a motivation.
3) The ribbon twisting of 720 degrees doesn’t have any connection to the fact that you need a 720-degree rotation for an electron to get back to its original state. It’s just a coincidence that SU(2) is the universal cover of SO(3).
4) Spin is a quantum mechanical property, period. You simply can’t have spin, i.e. an intrinsic angular momentum without actually moving parts in classical physics. For example, trying to ascribe electron’s spin to something like axial rotation will immediately lead to faster than light speeds.
Apart from these glaring mistakes this post is a collection of inchoate sentences and charming ignorance. Rather than dissing particle physicists make an honest-to-god effort to understand what they have accomplished. You’ll never be able to repeat what they did but at least you’ll be able to appreciate what a supreme edifice to human intellect particle physics is.
• Would be nice if you attempted to answer Ray’s legitimate questions, Alexander Unzicker. Why, for example, do you show the Schrödinger equation (SE) for a free particle when writing about the hydrogen atom? You claim that the non-relativistic SE had made successful predictions for the hydrogen atom (H) without explicit reference to the features of the electron. Hold on!
Even in the free SE shown by you, a feature of the electron is clearly seen on the left side: its mass. The simplest SE for H has only a term for the Coulomb potential; there’s no spin. Nevertheless, the mass and charges of the electron and the proton need to be entered for concrete predictions. Four values falling from the heavens. Historically, the simple SE for H soon turned out not to describe H sufficiently. For example, a term for the spin needed to be added, another feature of the electron not being an intrinsic part of the SE but added manually (reminds me of adding another epicycle :) That disappointment was a strong motivation to look for a more general description.
By the way, although the general formulas for the muonic hydrogen are similar, actual solutions are different and a muonic H is easily distinguished from a “normal” one.
4. Since the end of the 19th century, there have been many hard-to-drill problems encountered in physics that have often been bypassed in favour of making rapid progress in other areas. These range from failed attempts to create a viable electromagnetic worldview, failed attempts to determine a finite extended structure for the electron, interpretation of Planck’s Blackbody radiation equation, and wave-particle dualism to infinity removal methods in QED. The concept of ‘spin’ has also been a controversial matter.
Mathematical notation has been one of the controversial areas and some physicists such as Heisenberg have even advocated abandoning connecting of the mathematics to physical concepts. There is also a methodology in mathematical physics that is blind to the physics and has only the achievement of a known value as its goal. Dirac leaned more towards the mathematical side in his formulation of relativistic QM.
“I must say that I am very dissatisfied with the situation, because this so called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it just because it is infinitely great and you do not want it!” (Dirac, On Quantum Mechanics and Mathematics, 1937)http://www.spaceandmotion.com/physics-paul-dirac.htm
There were disputes regarding the use of quaternions vs. vectors in EM theory and general physics that have resurfaced in Quantum Mechanics (see Mendel Sachs and David Hestenes).
It should be noted that a notation system can hamper efforts at understanding if it doesn’t fully reflect or make explicit the underlying physical reality.
“The physicist cannot simply surrender to the philosopher the critical contemplation of the theoretical foundations; for he himself knows best and feels most surely where the shoe pinches…. he must try to make clear in his own mind just how far the concepts which he uses are justified… The whole of science is nothing more than a refinement of everyday thinking.” – Albert Einstein
I would add that physicists should not surrender critical contemplation of the theoretical foundations to mathematicians either, even if the mathematicians obtain equations that work extremely well. E.g. QED, Matrix Mechanics, String Theory,
So here is something interesting from Dr. David Hestenes regarding the employment of mathematics in physics:
“My purpose is to lay bare some serious misconceptions that complicate quantum mechanics and obscure its relation to classical mechanics. The most basic of these misconceptions is that the Pauli matrices are intrinsically related to spin. On the contrary, I claim that their physical significance is derived solely from their correspondence with orthogonal directions in space. The representation of σi by 2×2 matrices is irrelevant to physics.” – Dr. David Hestenes,
• This sounds…
… like Noether’s Theorem for dummies. Please learn to understand physics,
5. I came up with the idea of a rotating wave for an electron back in 1987 while finishing a grad project in architecture and studying physics which I did in my undergrad. Architecture problem solving taught me how to step outside the box and that does not mean disregarding math and laws of physics. It means looking down a different road and wondering.
Last year I got back into it earnestly and finally properly derived the Gkl using the affine connection. The electron wave traces a helical path in constant forward motion. The Rotor position vector is the eigenvector. Gravity slows down the rotation and the helical path spirals outward. The cylinder becomes a flute with intrinsic curvature.
So now I am relearning the Weyl, Dirac, Schrodinger, Pauli equations and comparing with the wavefunction of the Rotating Wave. It is all such beautiful poetry. As a young boy I learned how to hoe the vineyard and that led me to wonder what is gravity. The soil is fertile and it needs be turned.
Google Christie Wavicle to see the model. Disregard the attempt at the long derivation of Gkl. It has been corrected by the more recent and simple derivation just last year.
My email address is billchristiearchitect@gmail.com
Bill Christie
|
25953c1cf768d0d9 | HYLE--International Journal for Philosophy of Chemistry, Vol. 11, No.2 (2005), pp. 101-126.
Copyright © 2005 by HYLE and Valentin N. Ostrovsky
Towards a Philosophy of Approximations in the ‘Exact’ Sciences
Valentin N. Ostrovsky*
Abstract: The issue of approximations is mostly neglected in the philosophy of science, and sometimes misinterpreted. The paper demonstrates that approximations are in fact in the core of some recent discussions in the philosophy of chemistry: on the shape of molecules, the Born-Oppenheimer approximation, the role of orbitals, and the physical explanation of the Periodic Table of Elements. The ontological and epistemological significance of approximations in the exact sciences is analyzed. The crucial role of approximations in generating qualitative images and comprehensible models is emphasized. A complementarity relation between numerically ‘exact’ theories and explanatory approximate approaches is claimed.
Keywords: Approximations in quantum chemistry, complementarity, shape of molecules, orbitals, Born-Oppenheimer approximation, Periodic Table.
1. Introduction
The issue of approximations appears, explicitly or implicitly, in many discussions in the philosophy of chemistry. Do molecules have a shape? Can orbitals be observed in experiments? Is the physical explanation of the Periodic Table of Elements really an explanation? All these subjects involve an analysis of the role of approximations.
For instance, Garcia-Sucre & Bunge (1981) argued, "the Born-Oppenheimer approximation, although an artifact, does represent some important objective properties of quantum-mechanical systems, among them their geometry." How is that possible – an artifact representing some important objective properties? Is it by accident? Is this situation peculiar to the Born-Oppenheimer approximation? Could it be clarified or even remedied by a change of terminology, as suggested recently by Del Re 2003 (note 11)?
We write "theorem" instead of "approximation" because the latter name has misled some researchers into believing that the Born-Oppenheimer study has no physical content: actually, it is the proof that quantum mechanics is compatible with the separation of nuclear motion from electronic motions as revealed by observed molecular spectra; and novelties are only found when two hypersurfaces cross.
We meet a similar situation in the recent discussion on the status and observability of orbitals. According to Scerri (2001),
Of course, the orbital model remains enormously useful as an approximation and lies in the heart of much of quantum chemistry but it is just that – a model, without physical significance, as all computational chemists and physicists are aware.
Is this again by chance – a model without physical significance lying in the heart of quantum chemistry? Moreover, Scerri (2001) explains the experimentally obtained images of orbitals by Zuo et al. 1999, "I suggest that any similarities between the reported images and textbook orbitals may be completely coincidental". Is all that not too much coincidence for the ‘exact’ sciences?
The examples illustrate that the discussions are not about some marginal technical details but about the very heart of quantum chemistry. Therefore, the meaning and significance of approximations in science deserve a deeper analysis from ontological and epistemological perspectives than it received before.[1] Are approximations necessary or can they be avoided in order to make a science really exact? Are they arbitrary and subjective (artifacts)? How can they be linked to something observable? These and other related issues are analyzed in this paper by further developing aspects of a previous paper (Ostrovsky 2001).
2. Approximations in physics: an insider’s view
Although physics is considered an exact science, any practicing physicist knows that everything in physics is approximate. The prominent theoretical physicist A.B. Migdal starts his Qualitative Methods in Quantum Theory (1989) as follows:
No problem in physics can ever be solved exactly. We always have to neglect the effect of various factors which are unimportant for the particular phenomenon we have in mind. It then becomes important to be able to estimate the magnitude of the quantities we have neglected. Moreover, before calculating a result numerically it is often necessary to investigate the phenomenon qualitatively, that is, to estimate the order of magnitude of the quantities we are interested in and to find out as much as possible about the general behavior of the solution.
For the last dozen years theoretical physics has undergone strong changes. Under the influence of the theory, new fields of mathematics started being used and developed by theorists. Computational theoretical physics acquired a particular importance. Nevertheless, despite mathematization of physics, qualitative methods became even more important than before elements of the theory. They are sort of mathematical analog of the image-bearing mentality of sculptors and poets, feeding the intuition.
I believe that now more than before, a beginning theoretician should master qualitative methods of reasoning.
This is not some marginal opinion, but an authoritative judgment of an outstanding professional. The English edition of Migdal’s book was printed by one of the most authoritative publishing houses in the exact sciences, Addison-Wesley. In 1989 and 2000 the book reappeared in the series Advanced Book Classics. Its author, Professor A.B. Migdal, was a full member of the USSR Academy of Science, member of L. D. Landau Institute of Theoretical Physics and a Landau Prize Laureate. The book was translated from Russian by Anthony J. Leggett who became in 2003 the recipient of the Nobel Prize in physics. Migdal’s views are universally accepted by the community of physicists.
Laypeople might be confused by such a statement as: ‘In quantum mechanics one can obtain an exact solution only for the hydrogen atom, but not for a multi-electron atom.’ In fact, such formulations contain implicit assumptions that are shared by specialists. In this particular example, it means that an exact solution is obtainable for the non-relativistic Schrödinger equation of the hydrogen atom. But the Schrödinger equation itself is an approximation: it does not account for relativistic effects. Strictly speaking, there is no such object in nature[2] as a non-relativistic Schrödinger atom. It is a model, or an approximation, that allows calculating results that match experimental data only approximately.[3] An exact solution for the hydrogen atom can also be obtained from the Dirac equation, which takes relativity theory into account. It ensures a better agreement with experiment (for instance, by describing the fine structure of the energy levels), but again, it is an approximation. One needs to take into account the size and structure of the atomic nuclei to improve the results. The Dirac equation does not account for the atomic interaction with the electromagnetic field. If one decides to go further and achieve higher accuracy (e.g., to describe the Lamb shift of levels), one has to turn to quantum electrodynamics. Even the latter theory does not provide an ‘exact’ equation to be solved. It only allows calculating properties of atoms and ions at some order of approximation over small parameters that characterize relativistic effects.
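To make the hierarchy tangible, the standard textbook expressions for the hydrogen levels can be lined up (the formulas are quoted here only as an illustration): the non-relativistic Schrödinger equation gives

\[ E_n = -\frac{m_e e^4}{2\hbar^2 n^2} \approx -\frac{13.6\ \text{eV}}{n^2}, \]

while the Dirac equation adds, to lowest order in the fine-structure constant \(\alpha \approx 1/137\), the correction

\[ E_{n,j} \approx E_n \left[ 1 + \frac{\alpha^2}{n}\left(\frac{1}{j+1/2} - \frac{3}{4n}\right) \right], \]

which splits levels of different total angular momentum \(j\); the Lamb shift, which further splits states of equal \(n\) and \(j\), appears only at the next rung of the ladder, quantum electrodynamics. Each formula is exact within its own model and approximate with respect to the next.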
Thus, physics is nothing but a hierarchy of approximations, without a single exact equation or result. This is not a pitiful temporary drawback that might be removed in the course of time. It will persist forever, since it reflects the essence of the approach of physics to describing nature. First of all, the laws of physics are not given a priori, but are always experimentally tested with only some precision. Second, there are some inevitable approximations. Any researcher must select a piece of the universe to be studied and described (for instance, an atom, or a planetary system) and, by approximation, must neglect the rest of the world. Only some cosmological theories claim to avoid these limitations, but, of course, they contain an immense number of other approximations. Third, even if some more exact theory is known, it still makes deep sense to resort to approximations, not only for pragmatic reasons, but also for epistemological reasons. Approximations immensely enrich our qualitative picture of nature. This aspect will be further discussed in Section 4, but it is worthwhile to indicate here that the basic models of chemistry (such as molecular shape, see Section 3.2) are not universal, but arise from appropriate approximations.
Apparently the conservation laws of physics have a somewhat special status. Some of them, initially considered strict, later proved to be only approximate, such as parity conservation. The most important and widely known one is the law of energy conservation, which seems to remain unshaken. However, this law has a special character, as emphasized by Feynman (1992). As our knowledge of nature expands, new forms of energy are embraced by the law to obtain the total energy that is conserved. Thus, energy conservation actually means that up to now we have always managed to find new terms to be added to keep the total energy constant. That such terms are always available is, of course, a deeply rooted principle of nature.
People interested in really exact results and statements should turn to mathematics rather than to physics. Mathematics is not a natural science, albeit widely applied in the natural sciences. Mathematics works with abstract constructions that should be internally consistent, i.e. without logical contradictions. No other restrictions are imposed. Mathematicians construct a logically non-contradictory ‘universe’ and work with it. They need not care if this universe is the one we live in. For instance, a mathematician is ready to consider a space of arbitrary dimensionality n. While this is extremely useful as a mathematical technique, a physicist is faced with the fact that we live in a space with n=3, with all its peculiarities. Physicists cannot construct their universe; they have to study the only one available.
Thus, no physical theory can be blamed for using approximations because, in fact, all theories do that. The only question is how the approximation in a specific theory or a specific application is justified. To develop a proper approximation, and to be aware of the limits of its applicability, is an important part of a physicist's qualification. This skill cannot be put in the form of an algorithm, which is one of the reasons why a physicist cannot be replaced by a computer. The great chain of approximations bears a deep epistemological meaning that is frequently unrecognized or underestimated.
3. Approximations in physics: a view from the outside
Some nonphysicists seem to have radically different ideas about approximations. They adhere to an image of the ideal and immaculate exact science that does not resort to approximations. Since real science does not fit the ideal image but widely employs all kinds of approximations, some of its approaches and results are looked upon with skepticism, suspicion, and distrust.
The issue of approximations is important for chemistry. It is in the center of many philosophical discussions in chemistry, as mentioned in the Introduction, so that a proper philosophical understanding of approximations is particularly important here. Below we at first discuss some specific, albeit vitally important, approximations.
3.1. Born-Oppenheimer approximation
An issue much discussed in the recent literature is the problem of molecular shape. In quantum mechanics a multi-particle system generally does not possess such a property as a definite shape. However, a shape might be ascribed to a molecule within the Born-Oppenheimer approximation. The latter is instrumental in the quantum theory of molecules and therefore plays a very important role in quantum chemistry. In particular, chemical reactions that are not accompanied by a change of the electronic states are described within this approximation. Some authors exhibit deep dissatisfaction about the facts that chemistry is actually based upon approximations (and hence that more general theories exist) and that molecular shape is not an absolute but a transient property with a limited domain of applicability.
Garcia-Sucre and Bunge (1981) call the Born-Oppenheimer approximation an artifact. They do not elucidate the meaning of this term, but the context suggests that an artifact is something human-made and unrelated to nature. However, ‘human-made’ is not alien to science, nor does it mean unrelated to nature. Take, for instance, the ‘exact’ Schrödinger equation basic to non-relativistic quantum mechanics. It was suggested by Schrödinger, not by nature. It was intensively used by other human beings. Nature does not solve the Schrödinger equation; it does not know anything about the wave function. Instead, it seems that nature acts like an old-fashioned analog computer, without resort to digitization. All science was created by humans in a pursuit to describe and understand nature. In this sense all science is an artifact.
Because the term ‘artifact’ is applied also to the material objects produced by humans, we may distinguish science and similar products by saying that they are ideal artifacts. Is there any principal difference between the two ideal artifacts of an ‘exact’ wave function and its approximate Born-Oppenheimer version? As discussed above, an exact solution of the Schrödinger equation describes something non-existent, as some philosophers would say. Actually this terminology is misleading, since in fact such an ‘exact’ wave function provides a good, physically justified approximation. However, the same might be said about wave functions obtained within the Born-Oppenheimer approximation.
Quantum chemists use the Schrödinger equation in the domain where it is appropriate, although this equation is not exact (since it does not include relativistic effects) and is not even the most accurate one known. Some researchers go beyond the Schrödinger equation and find interesting and chemically significant relativistic effects (see, e.g., Pyykko 1988). Others go beyond the Born-Oppenheimer approximation and call this non-Born-Oppenheimer chemistry (Jasper et al. 2004). Thus, there is no principal difference between using the ‘exact’ (actually approximate) Schrödinger equation and the Born-Oppenheimer approximation. The difference lies, first, in the numerical accuracy that can be ensured, and, second, in the possibility of developing a qualitative interpretation and understanding by different approximations. These two features are in a complementary relation, as discussed below in Section 4.
Moreover, the term ‘artifact’ gives the impression of something artificial and subjective, not directly related to nature. This meaning is misleading. The Born-Oppenheimer approximation directly reflects the specific nature of molecules as quantum systems, namely, the fact that molecules consist of heavy particles (atomic nuclei) and light particles (electrons). The ratio of masses governs the accuracy of the approximation. The constitution of molecules and the ratio of masses are all objective properties that in no way depend on the researcher’s will. In this regard, the Born-Oppenheimer approximation is dictated by nature, in a similar sense as quantum properties of microparticles are.
While myriads of approximations are feasible a priori, only a few of them are valid (applicable). This is not by accident, but because the latter ones reflect some important features of nature. These approximations reflect nature just as the ‘exact’ equations do, albeit in a different way. They reflect the more qualitative side of nature, whereas more exact theories tend to reflect quantitative aspects; but both sides are objective and not invented by researchers.
Del Re (2003) has suggested switching to a more acceptable terminology and talking about the Born-Oppenheimer theorem instead of approximation. The theorem could read as: ‘In the limit me/M → 0 the Born-Oppenheimer scheme of calculations provides an exact result, and thus nuclear and electronic motions are completely separated.’ (Here, me is the electron mass and M is the characteristic mass of the atomic nuclei.) The formulation could even be proved in a mathematically rigorous way. However, the problem is that in reality the ratio me/M is not zero, although fairly small (me/M ≈ 1/1837 if the proton mass is chosen for M); this value is given by nature and cannot be varied. Therefore, the Born-Oppenheimer scheme for finite me/M inevitably remains an approximation, although it is well supported by the Born-Oppenheimer theorem. The example shows that, even if some exact mathematical results are available, they do not allow avoiding approximations in practical physical or chemical applications.
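For orientation, the standard way of quantifying this (going back to the original Born-Oppenheimer analysis and quoted here only as a reminder, not as part of Del Re's argument) is to expand in the small parameter

\[ \kappa = \left(\frac{m_e}{M}\right)^{1/4} \approx 0.15 \quad \text{for } m_e/M \approx 1/1837, \]

with the electronic energies appearing at order \(\kappa^0\), vibrational energies at order \(\kappa^2\) and rotational energies at order \(\kappa^4\); the ‘theorem’ then states what happens in the idealized limit \(\kappa \to 0\), while every real molecule sits at a small but finite \(\kappa\).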
It might be mentioned that on a somewhat deeper level the ratio of the characteristic velocities of the particles is physically more relevant than the ratio of masses. Smallness of the velocities ratio serves as a basis for the adiabatic approximation that is in principle different from the Born-Oppenheimer approximation, although close in some aspects.
In molecules the characteristic velocities of electrons are usually much higher than those of atomic nuclei. However, in some molecular states the electrons are highly excited and might have velocities comparable to those of the nuclei or even lower. For these highly excited states the Born-Oppenheimer approximation becomes invalid. Such molecular states are less important to chemistry, although they play a significant role in atomic physics. Deviations from the Born-Oppenheimer approximation also occur when there are several equilibrium configurations of atomic nuclei (i.e. several minima on the potential surface) separated by potential barriers of moderate height. In this situation non-rigid molecules emerge, which are related to the issue of molecular shape discussed in the next subsection. The presence of several minima with the same depth is inevitable if a molecule contains two or more identical atomic nuclei. The permutation of these particles physically corresponds to the tunneling between different potential wells. The rate of such processes is usually very low, which explains why related effects are extremely small. In any case, they can be described within the general framework of the adiabatic approximation, so that the first principles of quantum mechanics are not violated.
The manifestations of the Born-Oppenheimer approximation are apparent in experimental observations, also outside of chemistry. In molecular spectra we see vibro-rotational bands, and not just chaotic sets of lines that would appear in the spectra of general multi-particle systems. This is visible evidence that the Born-Oppenheimer (approximate) separation of nuclear and electronic motion is a feature of nature and not some wishful invention of researchers. Of course, to understand that the band character of a spectrum has this meaning requires some scientific qualification. But this is inevitable in modern science. Below (Section 3.4) we return to the issue of the observability of ideal artifacts.
After this elucidation one might finally agree with Garcia-Sucre and Bunge (1981): an artifact (i.e. science created by human beings) does represent some important objective properties of nature. This is exactly what science is about, and there is nothing particular about the Born-Oppenheimer approximation here. Of course, some deeper questions might be pursued further. For instance, the eminent physicist E. Wigner (1995) was puzzled by "the unreasonable effectiveness of mathematics in the natural sciences". But the issue of the Born-Oppenheimer approximation does not present anything specific in this respect.
3.2. Molecular shape
A subject closely related to the Born-Oppenheimer approximation is the issue of molecular shape. As already mentioned, in quantum theory a molecule might be ascribed a definite shape by using the Born-Oppenheimer approximation. Within this approximation one has first to replace atomic nuclei by force centers fixed in space and then to solve the quantum problem of the molecular electrons for varying sets of nuclear coordinates. The solutions provide a potential surface that depends parametrically on the coordinates of the atomic nuclei. The next step involves locating minima on the potential surface that indicate the (equilibrium) positions of nuclei in a molecule. At this stage of the approximate construction, a definite molecular shape emerges.
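As a purely illustrative sketch of the last two steps (the Morse curve and its parameters below are made-up stand-ins for the electronic energy of a diatomic molecule, not the result of any quantum-chemical calculation), one can scan such a potential curve and locate its minimum numerically:

# Toy illustration of the Born-Oppenheimer construction described above:
# given an electronic energy E(R) computed at fixed nuclear positions,
# the equilibrium geometry (the 'shape' of this one-dimensional example)
# is the minimum of that potential curve.
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative Morse parameters, roughly of the order found for H2 (assumed values)
D_e = 4.75   # well depth in eV
a = 1.94     # range parameter in 1/angstrom
R_e = 0.74   # equilibrium distance in angstrom

def electronic_energy(R):
    """Stand-in for the electronic ground-state energy at fixed internuclear distance R."""
    return D_e * (1.0 - np.exp(-a * (R - R_e))) ** 2 - D_e

result = minimize_scalar(electronic_energy, bounds=(0.3, 3.0), method="bounded")
print(f"equilibrium bond length ~ {result.x:.3f} angstrom")

In a real calculation the function electronic_energy would of course come from solving the electronic Schrödinger equation at each fixed nuclear configuration; the point of the toy example is only that a definite geometry appears at the stage where the minima of the parametric potential surface are located.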
The status of molecular shape has induced much concern among philosophers. For instance, Ramsey (1997) remarks, "… shape is widely thought to be a physical as well as a chemical attribute of the world …" The paper contains an interesting discussion, but the cited statement expresses the origin of many philosophical misunderstandings. Indeed, the term ‘attribute’ is usually understood to describe some indispensable property of matter. The most popular examples are space and time: matter is invariably described in terms of space and time; but in no way does this apply to shape. The situation was well recognized already in antiquity: solid bodies have a shape and a fixed volume, while liquids possess only a volume, but no definite shape. As for gases, they have neither shape nor intrinsic volume, but fill any volume available. This trivial counterexample invalidates such statements as "Classically every physical thing has some geometry or other, but in the quantum theory the notions of spatial structure, shape, and size seem to become hazy if not outright inapplicable" (Garcia-Sucre & Bunge 1981). There is no need to resort to modern quantum physics to discover that some material entities (liquids or gases) do not have a shape of their own, but that the shapes are dictated by the environment (vessels). Interestingly, the analogy with molecules might be pursued further. When a molecule freely rotates, its field is averaged and the shape is not manifest. For instance, there are dipole molecules (such as H2O), but a freely rotating molecule (in a stationary state with definite values of rotational quantum numbers) cannot possess a dipole moment. When an external field is applied, or some other molecule approaches, a molecule becomes oriented in space, and its shape or dipole moment become clearly exhibited. The fact that the molecular shape is recovered only under the perturbation by some external agent has induced some hesitation among philosophers, but we see from our example that such a situation is not unusual even in elementary classical physics.
Of course, the analogy is incomplete, since a molecule possesses some (maybe hidden) shape of its own while a gas or a liquid fits any shape. But since shape is not an attribute, it is not surprising that the molecular shape might remain latent in some situations (a freely rotating molecule) while in other contexts it plays a crucial physico-chemical role (for instance, in the X-ray structural analysis of molecules that are oriented, for example, by their surroundings in a crystal).
Since shape is obviously a transient property in the macroworld, there is no reason to anticipate that the situation would be different in the microworld. Many phenomena in chemistry are well understood in terms of rigid structures of atoms with definite shapes. However, this cannot be a reason for treating shape as an absolute property in the realm of molecules. Physics clearly shows the limited applicability of the notion of shape in systems of several quantum particles. A shape emerges if two or more particles (nuclei) have masses much larger than the masses of other particles (electrons); in this situation the Born-Oppenheimer approximation is valid, see Section 3.1.[4]
A shape arises as a result of an approximation, and this is a common situation in the structure of the ‘exact’ sciences. A multitude of physically very useful and appealing concepts arise as a result of approximations to ‘exact’ physical equations; but they are not applicable to the most general systems or situations. Making some approximation-based concepts absolute without justification is a dangerous pitfall, in both practical and philosophical regards. By contrast, the recognition of the approximate character of concepts does not denigrate them, but reminds us of the existence of applicability limits. Approximation-induced concepts remain illuminating and constructive, although one has to bear in mind the limitations of their use. In the case of chemistry, the limitations might be inferred from physics. This is a typical situation, since physics treats the basic properties of matter in a very broad scope of conditions (in principle it aspires to treat matter in any situation), whereas chemistry focuses on a limited range of conditions and studies the subject matter in more detail, especially concentrating on the structure and transformation of compounds. Chemical compounds cannot exist under certain conditions, for instance, in hot plasmas in the interiors of stars. Therefore it is not surprising that such a property as shape gradually loses its significance in some situations outside the scope of chemistry. Atomic and molecular physics is a scientific discipline that studies atoms and molecules from a broader perspective, beyond that of chemistry. The outlook provided by this branch of science is useful when philosophical problems of chemistry are analyzed (see some further comments in Ostrovsky 2003a).
3.3. Orbitals
Orbitals appear in the theory that provides approximate solutions for the Schrödinger equations of systems with more than two interacting particles. Many textbooks provide detailed descriptions of the theoretical scheme. A brief exposition suitable for general discussion is given in Ostrovsky 2001, 2003b, and 2004, and will not be repeated here. However, it can be clearly stated that, contrary to some claims, the scheme to construct orbitals lies fully within modern quantum theory (without resort to classical trajectories) and does not violate its general principles, such as the non-distinguishability of electrons. The key physical approximation in the scheme is that each electron moves in the mean field produced by the averaged motion of the other electrons and the nuclei. On the one hand, this physical image can be cast in the mathematical form of equations; on the other hand, it is very useful for developing explanatory patterns for many phenomena in atomic and molecular physics as well as in chemistry. Methodologically, the approximation is developed along the lines normally used in theoretical physics. It has no particular features that would justify the introduction of a special term, like ‘floating model’.
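A minimal concrete sketch of this mean-field idea (a toy model of my own, not a scheme from the paper): two electrons in a one-dimensional soft-Coulomb potential, each moving in the static external potential plus the field of the averaged density of the other. The grid, the soft-Coulomb form, and all numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch (toy model, not from the paper): a Hartree-type mean-field
# calculation for two electrons in one dimension.  Each electron moves in the
# external potential plus the field produced by the *average* density of the
# other electron -- the physical idea behind orbitals.
n, L = 400, 16.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

V_ext = -2.0 / np.sqrt(x**2 + 1.0)                      # soft-Coulomb 'nucleus'
W = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # e-e interaction kernel

# Kinetic energy by finite differences (atomic-style units).
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2

phi = np.exp(-x**2)                                     # crude starting guess
phi /= np.sqrt(np.sum(phi**2) * dx)

for it in range(50):
    V_mean = W @ (np.abs(phi)**2) * dx                  # field of the other electron
    H = T + np.diag(V_ext + V_mean)
    eps, vecs = np.linalg.eigh(H)
    phi_new = vecs[:, 0] / np.sqrt(dx)                  # lowest orbital, normalized
    if np.sum((np.abs(phi_new) - np.abs(phi))**2) * dx < 1e-10:
        break
    phi = phi_new

print(f"converged orbital energy: {eps[0]:.4f} (atomic units), {it+1} iterations")
```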
The notion of orbitals has attracted much philosophical attention in recent years. Scerri (2000) describes the situation in theoretical physics and quantum chemistry as follows:
According to accepted current theory atomic orbitals serve merely as basis sets – that is, as types of coordinate systems that can be used to expand mathematically the wave function of any particular physical system.
Thus, it is sometimes said that the continuing value of orbitals lies in their serving as a basis set, although the orbital model is an approximation in a many-electron system. The problem is that these two statements contradict each other. The same object of a theory cannot simultaneously serve as a basis and as an approximation. A basis in a Hilbert space is analogous to a coordinate frame in geometry. If we consider a point on a plane, we can characterize its position in rectangular, polar, parabolic, elliptic, etc. coordinate frames. All the frames provide equivalent information, and none of them is approximate;[6] one frame can only be more convenient than the others, depending on the particular problem. The origin of the misconception lies in confusing basis functions ηj(r) (which in principle are arbitrary) with orbitals φ(r) that are expressed via the basis functions:
φ(r) = Σj cj ηj(r)     (1).
The expansion coefficients cj are found by solving approximate equations; for instance, the Hartree-Fock equations based on the mean field approximation. The equations depend on the specifics of a physical system (molecule) under consideration and thus bear the basic physical information about it (for instance, the number of particles, the type of interaction between them, the presence of external fields, etc.). So do the orbitals. One can replace the basis set ηj(r) by some other set, which results in a different set of coefficients cj, but the orbitals φ(r) remain the same. The latter statement is mathematically exact when both basis sets are complete and thus infinitely large. In practice the basis sets are finite, such that computational chemists or physicists have to check the convergence. This is a purely technical business, inevitable in any application of numerical mathematics to a real problem – the case of orbitals does not bear any specifics.
Once the distinction between basis functions and orbitals is clarified, the puzzling situation described above is resolved: the basis functions ηj(r) are indeed ‘without physical significance’ and might be chosen at the researcher’s convenience. However, the orbitals φ(r) obtained via solution of physical (albeit approximate) equations ‘lie in the heart of much of quantum chemistry’, ‘as all computational chemists and physicists are aware’.
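The point can be illustrated numerically (a sketch of my own, with arbitrarily chosen bases standing in for, say, Gaussian versus Slater sets): the expansion coefficients cj depend on the basis, while the reconstructed function agrees in both cases up to the finite-basis convergence error mentioned above.

```python
import numpy as np

# Sketch (my own illustration): the same 'orbital-like' function expanded in
# two different finite orthonormal bases.  The coefficients c_j differ, but
# the reconstructed function is the same up to the finite-basis convergence
# error that one has to check in practice.
n = 200
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
target = np.exp(-16.0 * x**2) * (1.0 + x)          # function to be expanded

def orthonormalize(columns):
    # QR makes the columns orthonormal with respect to the grid measure dx.
    q, _ = np.linalg.qr(columns)
    return q / np.sqrt(dx)

basis_a = orthonormalize(np.column_stack(
    [np.cos(k * np.arccos(x)) for k in range(30)]))            # Chebyshev-type
basis_b = orthonormalize(np.column_stack(
    [np.cos(k * np.pi * x) for k in range(15)]
    + [np.sin(k * np.pi * x) for k in range(1, 16)]))          # trigonometric

for name, basis in (("Chebyshev", basis_a), ("trigonometric", basis_b)):
    c = basis.T @ target * dx                      # expansion coefficients
    rebuilt = basis @ c                            # reconstructed function
    err = np.max(np.abs(rebuilt - target))
    print(f"{name:13s}: first coefficients {np.round(c[:3], 3)}, max error {err:.1e}")
```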
It is true that, "the term ‘orbital’ is a highly generic one. It is used to describe hydrogenic orbitals, Gaussian orbitals, natural orbitals, spin orbitals, Hylleraas orbitals, Kohn-Sham orbitals, and so on" (Scerri 2001). Sometimes the terminology is too loose and thus misleading, blurring the distinction between basis functions ηj(r) and physical orbitals φ(r). For instance, ‘Gaussian orbitals’ are in fact always basis functions. For physical orbitals, it does not matter if they are constructed as a superposition, according to equation (1), of Gaussian basis functions, or if some other functions, say, Slater functions, are employed for this purpose. To non-specialists that distinction is not obvious, which can lead to unjustified blanket statements such as ‘it does not matter whose orbitals are selected from the modern palette of choices since none of them refer’.
Furthermore, there is no ground to say that "the scientific term ‘orbital’ is strictly non-referring with the exception of when it applies to the hydrogen atom or other one-electron system". In fact, as already indicated (Section 2), the Schrödinger orbitals, strictly speaking, are not exact even for the one-electron hydrogen atom, and the Dirac orbitals are not exact either. Therefore both the hydrogenic orbitals (i.e. the hydrogenic wave functions) and the orbitals in a multi-electron atom are approximations. In this regard, the term ‘orbitals’ is in both cases ‘strictly non-referring’, although that terminology is hardly appropriate, because it underestimates the physically justified approximation. Along these lines, it is worthwhile to correct such statements as ‘atomic orbitals are mathematical constructs’, in order to make them acceptable. Orbitals are not mathematical constructs, since they bear physical information; they are constructs of theoretical physics, or ideal artifacts in the sense discussed above. In this respect they are not worse than ‘exact’ wave functions.[7]
In which sense then do orbitals exist? Here one can turn to the paper by Ogilvie (1990) entitled ‘There are no such things as orbitals’. In different terms, but equivalently, the author’s viewpoint might be cast as: orbitals are ideal artifacts. Then the preceding discussion of ideal artifacts fully applies. Orbitals do not exist in nature, just as ‘exact’ wave functions or the Schrödinger equation do not exist in nature: these are all creations of the human mind. Orbitals appear as a result of approximations, just as ‘exact’ wave functions (solutions of the Schrödinger equation). Better approximations are known in both cases, which ensure improved numerical results for quantitative comparison with experiments. Nevertheless, orbitals are important and in wide use for several reasons. First, they reflect some important qualitative features of nature and thus provide an instructive physico-chemical insight. Second, precisely because of that, they ensure reasonably good quantitative descriptions. Third, orbitals technically serve as a convenient basis for further quantitative refinement of theory. Orbitals are not the result of wishful thinking of theoreticians, but stem from a very physical idea, namely, that electron motion proceeds largely as if each electron moved in the mean field of the other electrons and the atomic nuclei. It is important to stress that the orbital picture provides a useful guideline for developing the numerical schemes. Some of the most accurate numerical schemes do not explicitly use the orbital picture and rely on the ‘brute force’ of computers. The highest numerical accuracy is achieved in this way (for simpler atoms and molecules), but the qualitative understanding is inevitably lost. This is a manifestation of the complementarity discussed in more detail below (Section 4).
The orbital approximation plays a key role in the quantum explanation of the Periodic Table of Elements. This application of orbitals was thoroughly discussed in previous publications (Ostrovsky 2001, 2003a, 2003b, 2004). Here, I only want to indicate that attempts to dismiss the modern quantum explanation of the Periodic Table have been based mostly on the mere indication that the orbital picture used in this explanation is approximate. These arguments have been rejected, since an explanation requires the creation of a qualitative image, which is usually done by using approximations (see Section 4).
3.4. Approximations and observability
Now I turn to another important question: can orbitals be observed? This is actually an instance of the more general question: can ideal artifacts be observed? Of course, the manifold of ideal artifacts needs to be limited: for instance, a centaur is an ideal artifact beyond the scope of our discussion. Here we discuss only physical ideal artifacts, which are just another name for approximations. As ideal entities, they cannot be observed in the most direct sense. At the same time, if we consider a valid physical approximation as being based in nature, it is manifested via phenomena of nature, and in this sense it is observable. I will call this semi-direct observability. It is precisely in this respect that the Born-Oppenheimer approximation or molecular shapes are observable, as discussed in Sections 3.1 and 3.2.
In our everyday life we observe effects and phenomena that are directly related to scientific ideal artifacts. Consider, for example, a shadow. Do shadows exist? Indeed, an unambiguously defined shadow exists only in geometrical optics, which is an approximate theory. The advanced theory of wave optics provides a better approach according to which an absolute shadow does not exist because of diffraction. In other words, it is impossible to define the boundaries of a shadow rigorously, since diffraction fringes appear near the boundaries. For a spectacular presentation of this situation we refer to the figure at the beginning of chapter 10 in a standard textbook of optics by Hecht (2002). It shows a shadow of a human hand holding a dime, illuminated by monochromatic laser light that allows discerning the fringes at the edges of this macroscopic shadow. The lower part of the figure shows the same phenomenon in the microworld, with electrons diffracted on a zinc oxide crystal. Diffraction phenomena depend on various parameters (the light wavelength, the size of the obstacle, the position of the observer), but in principle the phenomenon persists, whereas geometrical optics with its well-defined shadows is only an approximation.
Thus, in physical terms a shadow is an approximation. (‘There are no such things as shadows’, Ogilvie would say). In everyday life we observe shadows and have no problem identifying them, largely because of the limited resolution of our visual sense. This fact clearly demonstrates that a reasonable, physically justified approximation might be used to describe something real (within the limits of its applicability) and might be perceived by direct observation. The essence of this example is not so far from chemistry as one might imagine. It concerns the relation between the classical (geometrical) description and more general theories that include wave (quantum) features.
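The contrast between the two descriptions can be made quantitative with the standard Fresnel-integral result for diffraction at a straight edge (a sketch added for illustration; it is not a computation from the text, and the sampling points are arbitrary):

```python
import numpy as np
from scipy.special import fresnel

# Sketch: intensity behind a straight opaque edge illuminated by a
# monochromatic plane wave, in the Fresnel approximation.  Geometrical optics
# predicts a step (full shadow / full light); wave optics gives fringes and a
# smooth penetration into the shadow region.
w = np.linspace(-3.0, 5.0, 9)          # dimensionless Fresnel variable
S, C = fresnel(w)                      # scipy returns (S(w), C(w))

I_wave = 0.5 * ((C + 0.5)**2 + (S + 0.5)**2)   # relative intensity I/I0
I_geom = np.where(w > 0.0, 1.0, 0.0)           # sharp geometrical shadow

for wi, Iw, Ig in zip(w, I_wave, I_geom):
    print(f"w = {wi:5.1f}   wave optics: {Iw:5.2f}   geometrical: {Ig:3.1f}")
# Near w ~ 1.2 the wave-optics intensity overshoots 1 (the first bright
# fringe); deep in the shadow (w << 0) it decays smoothly instead of being 0.
```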
Orbitals have systematically been observed for a long time, but in the energy representation.[8] For instance, in the measured photoabsorption cross sections the prominent peaks appear as a result of photoionization from a particular orbital in an atom or molecule.[9] Consider, for example, figures 1 and 2 in Chung 2004.[10] They show the cross section of the photoionization (i.e. essentially the yield of photoelectrons) of a lithium atom. When the photon energy is high (Eph above about 60 eV), the photoionization of valence electrons has a very low yield that smoothly depends on Eph. Superimposed on this background are sharp, high peaks that are interpreted in terms of photoionization via intermediate resonance states. Each of these states corresponds to the excitation of two atomic electrons to various unoccupied orbitals, as detailed in the figures. The doubly excited states eventually decay with the emission of an electron that contributes to the photoelectron yield. Thus, the explanation of a prominent structure in the experimental observation is achieved solely within the orbital picture; and there is no way to do it without that picture. In this sense we can say that orbitals are observed in the experimental data, albeit in this case on the energy scale, or in the energy representation.
In quantum mechanics a physical system is described by a wave function that can be represented in various ways. The space coordinate representation is probably most often employed. It provides standard probability densities in the coordinate space.[11] Along with the coordinate representation, the momentum representation of a wave function is frequently employed in theory. Various experiments directly measure the electron momentum distribution, which is represented by probability densities in the momentum space. In many cases, energy spectra provide the most convenient and direct way to describe a physical system. Nowadays virtually nothing is directly observed in physical experiments, but complicated experimental devices provide ‘raw’ data that need to be processed.[12] There is no fundamental reason to prefer an observation in the coordinate representation to an observation in the momentum or energy representation; and in the latter representation, as indicated already, orbitals have long been observed.
The observation of orbitals in conventional coordinate space can be inferred from a recent experiment (Zuo et al. 1999). Much philosophical criticism has followed. Meanwhile the imaging of orbitals by various experimental techniques has become commonplace (Feng et al. 2000, Litvinyuk et al. 2000, Brion et al. 2002, Itatani et al. 2004). I will not go into details of the interpretation of these experiments. The experiments were carried out using sophisticated state-of-the-art techniques, and their analysis should be done in a physical or chemical publication, not in a philosophical one. In some particular cases, the interpretation of an experiment can be doubtful; for instance, a critical analysis of the experiments by Zuo et al. (1999) was carried out by Wang and Schwarz (2000a, 2000b) and Zuo et al. (2000). I just want to indicate that the experimental observation of orbitals cannot be rejected on general philosophical grounds,[13] because there is no objection in principle to the observation of orbitals based on approximations in quantum theory.
3.5. More on orbitals
Scerri (2001) devotes a significant part of his paper to emphasizing the approximate status of orbitals, only to conclude that this aspect is hardly relevant to the reality of orbitals:
[…] the fact that orbitals might only provide an approximation to the motion of many-electron systems is not a sufficient reason for the complete denial that they or something related to orbitals can possibly exist.
Therefore, he puts forward two more arguments to support the idea that orbitals are in principle not observable. However, both arguments refer not only to orbitals, but also to the ‘exact’ Schrödinger wave function.
The first argument is related to the well-known fact that the wave function ψ(r) is generally complex-valued, ψ(r) = |ψ(r)| exp[i φ(r)], so that its full description requires information not only on its modulus |ψ(r)|, but also on the phase φ(r). Complex-valued functions appear in quantum mechanics when two (or more) stationary states are populated coherently, or when the system is non-stationary (i.e., when its Hamiltonian is time-dependent), or when a magnetic field is present. It is also known that the phase φ(r) is trivial in the case of stationary (bound) states (which were actually the object of experimental analysis) and in the absence of a magnetic field. The phase depends linearly on time t and not on the electron coordinate r: φ = −Eb t/ħ + a. Here Eb is the bound state energy and the constant a is independent of r. This constant is insignificant since it does not influence any observable. Therefore, the phase can be treated as non-physical and neglected, such that the wave function may be considered a real-valued quantity. A somewhat more complex situation emerges in the case of degeneracy, but even then the consideration can be restricted to real-valued functions.
Many experiments probe the electron charge density ρ(r), which is proportional to the probability density |ψ(r)|²: ρ(r) = e|ψ(r)|², where e is the electron charge. Bearing in mind that the wave function phase might be omitted, one has to carry out only the square root operation, ψ(r) = ±[ρ(r)/e]^(1/2), to restore the wave function from the electron density. Here the symbol ± requires some attention, since in general even a real-valued wave function oscillates around zero and thus is positive or negative in different domains of space. The dividing boundaries are known as the nodal surfaces. Each crossing of a nodal surface means a change of the wave function sign. The nodal surfaces [i.e., the zero-value surfaces of the density ρ(r)] might in principle be determined from experiments. Then the wave function can be fully restored from the observable charge density. This might be considered a semi-direct observation, albeit not a direct observation of ψ(r) in the strict sense. However, as already stressed, in modern experiments virtually nothing is directly observed and some processing of raw data is always required. With this in mind, we may conclude that there are no theoretical obstacles to the semi-direct observation of wave functions of stationary states.
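A toy numerical illustration of this sign-restoration procedure (my own sketch, using the analytically known first excited state of a one-dimensional harmonic oscillator as the 'measured' system):

```python
import numpy as np

# Sketch (illustrative): restoring a real stationary wave function from its
# probability density.  We use the first excited harmonic-oscillator state
# psi_1(x) ~ x * exp(-x^2/2), which has a single node at x = 0.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

psi_true = x * np.exp(-0.5 * x**2)
psi_true /= np.sqrt(np.sum(psi_true**2) * dx)      # normalize on the grid

rho = psi_true**2                                  # the 'measured' density

# Take the square root and flip the sign each time a nodal point is crossed.
psi_rebuilt = np.sqrt(rho)
sign = 1.0
for i in range(1, len(x)):
    if rho[i-1] > 1e-12 and rho[i] <= 1e-12:       # nodal point detected
        sign = -sign
    psi_rebuilt[i] *= sign

# Up to an overall sign, the rebuilt function matches the original.
err = min(np.max(np.abs(psi_rebuilt - psi_true)),
          np.max(np.abs(psi_rebuilt + psi_true)))
print(f"max deviation after sign restoration: {err:.2e}")
```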
The second argument reads:
[…] atomic orbitals are described in a many-dimensional Hilbert space which denies visualization since we can only observe objects in three-dimensional space. [Scerri 2001]
This point reveals a misinterpretation. The Hilbert space theory is a mathematical apparatus that has found useful applications in quantum theory, but it is in no way limited to it. Any regular function, for instance, any function of a coordinate, might be regarded as a function belonging to some Hilbert space. For example, the electron density might be considered as belonging to a Hilbert space, but this in no way precludes its observability. When the Schrödinger equation is solved, the eigenfunctions can be regarded as elements of an infinite-dimensional Hilbert space; but they are simultaneously defined in the conventional three-dimensional space.
A more reasonable point to consider is the fact that a wave function is defined in the configurational space. The latter is three-dimensional for a single electron, which allows visualization of the probability distribution. For two electrons the configurational space is already six-dimensional, and the complete probability distribution ρ(r1, r2) = |ψ(r1, r2)|² cannot be visualized.[14] The charge distribution is obtained by integrating the squared wave function over the coordinates of all electrons but one. For an N-electron system the electron density is
ρ(r) = ∫ dr2 dr3 … drN |ψ(r, r2, r3, …, rN)|²     (2),
where rj is the coordinate of the jth electron. The formula suggests that the wave function cannot be exactly restored from the electron density. In terms of atomic orbitals this is reflected in the fact that in a multi-electron system all the orbitals filled by electrons contribute to the observed charge distribution. In order to separate the contribution of a single orbital, the experimentalists (Zuo et al. 1999) used a special technique critically analyzed in the subsequent discussion (Wang & Schwarz, 2000a, 2000b; Zuo et al. 2000). These developments are beyond the scope of the present study, however.
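A compact numerical counterpart of equation (2) for the simplest non-trivial case (two electrons in one dimension, with a made-up correlated wave function of my own) shows how the one-electron density arises and why the full wave function cannot be recovered from it:

```python
import numpy as np

# Sketch (illustrative, not from the paper): equation (2) for two electrons in
# one dimension.  We build a toy symmetric two-electron wave function, form the
# full probability density |psi(x1, x2)|^2, and integrate out x2 to obtain the
# one-electron density rho(x1).
x = np.linspace(-6.0, 6.0, 301)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Hypothetical correlated two-electron wave function (symmetric, normalizable).
psi = np.exp(-0.5 * (X1**2 + X2**2)) * (1.0 + 0.3 * X1 * X2)
psi /= np.sqrt(np.sum(psi**2) * dx * dx)           # normalize on the grid

prob2 = psi**2                                     # two-coordinate density
rho = np.sum(prob2, axis=1) * dx                   # integrate out x2 -> rho(x1)

print(f"norm of rho: {np.sum(rho) * dx:.4f}")      # ~1 per electron
# rho(x1) is observable-like, but many different psi(x1, x2) would give the
# same rho: the wave function cannot be uniquely restored from it.
```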
To conclude this section, it should be stressed that the orbital approximation, as any other approximate or ‘exact’ theory, has its limitations. The applicability of the orbital picture reflects objective properties of atomic and molecular states, which are not universal. For instance, for some doubly (or multiply) excited states the electron motion is strongly correlated and the mean field picture does not hold even as a first-order approximation (for a bibliography see Prudov & Ostrovsky 1998). Some examples of the orbital picture breakdown were discussed previously (Ostrovsky 2001, 2003b); often they belong to atomic physics rather than to chemistry. However, the applicability domain is still large enough for orbitals to be ‘in the heart of most of quantum chemistry’.
3.6. Rejecting the existence of orbitals
There are various possibilities to reject the existence of orbitals on philosophical grounds. To start with, some philosophical systems deny the reality of an objective material world, i.e. nature. Then the orbitals are rejected as a part of it.
Another possibility is based on the philosophical distinction between properties and things, or properties and substances. It is impossible to object to statements like ‘An orbital as such is not observable; what is observable are its properties’. I would like to indicate only that there is nothing special about orbitals. Any physical experiment implies observing (measuring) some properties of the object of study. The object as such is never observed – if one does not hold to the naive view that observing something means seeing it with one’s own eyes. For instance, only the properties of molecules are observed, but not molecules as such – large molecules became accessible to some kind of experimental ‘viewing’ only recently, and in any case this is not viewing with the naked eye. Such a situation provides a basis for skepticism that could last for a long time, as the widely known example of the prominent physico-chemist Ostwald shows. Skepticism is a legitimate constituent of the scientific approach; the point is that there is no fundamental difference in this respect between orbitals and molecules (the idea of this particular analogy belongs to W.H.E. Schwarz).
Yet another possible type of objection could be as follows. Imagine that someone attributes a peak in the photoelectron spectrum to the ionization from a particular electron orbital, and then quantitatively describes the peak position based on the orbital calculations. A skeptic is not convinced but says that from the very beginning the scheme of calculations already presupposes the orbital picture. Again, this argument is not specific to orbitals or any other approximate scheme, but in fact refers to the conventional physics approach. For instance, when the energy levels of a hydrogen atom are calculated, it is presumed that the stationary states and the energy levels exist and that they correspond to regular solutions of the Schrödinger equation.
Note that the theory’s use of occupied (actual) and unoccupied (potential) electron orbitals is also not specific to the orbital approximation. ‘Exact’ quantum mechanics considers a variety of stationary states of any quantum system (for instance, an atom or a molecule) that are only potentially populated. For an atom in the ground state, all the excited states are potential states that might be populated under external perturbation.
3.7. Further examples of approximations
In this subsection we give two further examples of approximations that seem to be of interest in the present context.
According to modern theory, a chemical substance as common as water is only approximately stable. "Indeed, let us consider the system consisting of ten electrons, ten protons, and eight neutrons. These constituents can produce a water molecule or a neon atom with 18Ne nucleus" (Belyaev et al. 2001). The probability of such a molecular-nuclear transition from the water molecule to the neon atom is expected to be very small from general considerations. However, it is enhanced due to the presence of a particular resonance state in the 18Ne nucleus. At present it is difficult to evaluate the lifetime of water theoretically, and special experimental searches have not succeeded in detecting the reaction. Nevertheless, there is no rigorous conservation law that forbids such decay, and, as a general rule of quantum mechanics, everything that is not forbidden by strict selection rules proceeds with some probability.
In hydrodynamics the approximation of an ideal liquid has been employed for quite some time. It implies the neglect of liquid viscosity and hence of the energy dissipation caused by viscosity. Originally this approximation was inspired by its mathematical simplicity and beauty. In reality viscosity becomes important in the boundary layer along the surfaces that limit the liquid flow. Garcia-Ripoll and Perez-Garcia (2001) state, "John von Neumann noticed that most mathematical models of the date [around 1900] did not take viscosity into account and thus could not explain the features of real fluids. He coined the term ‘dry water’ to refer disrespectfully to those idealized models that did not care to take account of dissipation (R. Feynman, 1964). Bose-Einstein condensate represents an experimental realization of such a ‘dry fluid’ or superfluid". This example teaches us that approximations sometimes have a peculiar fate. Starting as mathematical playgrounds, they can eventually find a manifestation in unusual states of matter. This is yet another case of "the unreasonable effectiveness of mathematics in the natural sciences" (Wigner 1995).
4. The epistemological value of approximations
A professor in theoretical physics at St. Petersburg State University used to say to his students, ‘Imagine how poor, scarce and insufficient our knowledge would be if we knew only exact wave functions’. At first glance, that appears paradoxical. Indeed, according to quantum theory, the wave function contains all the information about a physical system and allows one to calculate any physical observable. Nevertheless the saying contains a profound truth. Knowledge of only numerical values is insufficient for understanding, because explanations are most often cast in terms of the qualitative images induced by approximations.
The term ‘explanation’ has several meanings. Quite often it is used to denote a deduction from a more general theory. However, it seems that the term ‘prediction’ is more appropriate than ‘explanation’ in this situation. In quantum measurements ‘explanation’ is often understood as a mapping from the quantum physics of the actual system onto the classical point of view of an observer. However, we believe that researchers in quantum mechanics develop a special kind of ‘quantum intuition’ that allows a direct understanding of quantum objects without appeal to classical analogues (see, e.g., Zakhar’ev 1996).
The ‘exact’ equations for complicated physical systems provide only limited insight. A few general theorems, such as probability conservation for the Schrödinger equation, can be proved rigorously, but only very restricted possibilities are available for creating qualitative images and patterns. In our exploration of nature we need both quantitative information and qualitative understanding. It is useless to ask which of the two is more important; both aspects are essential. However, we cannot obtain both fruits in a single approach. When we make our numerical schemes more and more sophisticated, the physical meaning becomes non-evident and only numbers emerge from the computer black box. On the other hand, approximate models provide a qualitative and often semi-quantitative description, though not of the highest precision.
Another example of very useful images created by approximations is the theory of chemical exchange reactions (without electronic transitions) viewed in terms of motion along a potential surface. This approach provides much understanding and is a quantitatively reliable tool, although it is based on an approximation, namely the Born-Oppenheimer approximation discussed in Section 3.1. My point is that in the preceding sentence it would be reasonable to replace ‘although’ by ‘because’. As pointed out by Del Re (2003), some researchers believe that the Born-Oppenheimer approximation has ‘no physical content’. My position is just the opposite: physical sense emerges within the framework of approximations.
Modern researchers, when obtaining some numbers from their computers, frequently remain dissatisfied and seek the physical sense of the results. While it is difficult to provide a complete definition of what ‘physical sense’ means, it implies, to a significant extent, the capability to interpret the numerical results in terms of simple models and qualitative images. All this comes from approximations and models.
Approximations and models are a fully legitimate part of a theory, not a temporary, regrettable, and shameful one. Every textbook in quantum mechanics includes some simple problems, such as bound states in one-dimensional potential wells, scattering on a potential barrier, the harmonic oscillator, the hydrogen atom, etc. Most of these problems are included not because they provide an accurate description of nature, but because they allow students to understand important qualitative quantum concepts, such as the shape of the bound-state wave function, the tunneling phenomenon, the above-barrier reflection, etc. The basic approximations and the simple model problems with easily grasped properties form an appropriate language for developing explanations of more complicated situations. Of course, as with every language, such explanations are addressed to a knowledgeable audience.
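As one concrete instance (the standard textbook result for a rectangular barrier, added here for illustration rather than taken from the text), the transmission coefficient already conveys the tunneling phenomenon in closed form:

```python
import numpy as np

# Sketch (standard textbook result): transmission through a rectangular
# barrier of height V0 and width a for a particle of energy E < V0, in
# units with hbar = m = 1.
def transmission(E, V0=1.0, a=2.0):
    kappa = np.sqrt(2.0 * (V0 - E))
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4.0 * E * (V0 - E)))

for E in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"E/V0 = {E:.1f}   T = {transmission(E):.3e}")
# Classically T would be exactly 0 for E < V0; the finite values express the
# tunneling phenomenon that the simple model is meant to convey.
```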
The current progress in computer techniques makes the complementary relation between calculations and explanation even more important. Niels Bohr first put forward the idea of complementarity on the basis of physics, where the complementarity between coordinate and momentum is expressed by Heisenberg’s uncertainty principle. According to this principle one can measure with arbitrary precision either the coordinate of a particle or its momentum, but not both simultaneously. Bohr realized that this type of relation is very generic. He applied the complementarity concept to a broad variety of fields outside of physics, such as psychology, biology, and anthropology (Bohr 1999). This concept is epistemologically significant because it is about a very general pattern of relations between subject and object. As to the complementary pair of numerical calculations and explanations, numerical calculations seek to reproduce a physical object with the highest possible quantitative precision, whereas explanations appeal to a subject and rely on qualitative images (Ostrovsky 2001). This also means that an explanation appeals only to a community of researchers with a common background, which may differ from one community to another. The complementary pair numerical calculations/explanations might be considered a particular implementation of the more general pair quantity/quality.
If one’s objective is to obtain the best numerical results, then approximations are something to avoid or to limit as much as possible in the course of scientific progress: fewer approximations provide better numerical output. However, the bare ‘exact’ equations for a complex system provide very limited insight and are barren ground for explanations. Explanatory concepts of high heuristic potential are born out of approximations. They inspire intuition, which is a powerful vehicle for the advancement of science. In Section 3 it was shown that key concepts of chemistry, such as molecular shape or molecular orbitals, directly emerge from approximations. If one seeks explanations, then dropping some approximations might hopelessly destroy the entire framework. This, of course, does not necessarily mean that the same set of explanatory approximations or models will be retained forever. In the course of historical progress the models may be substantially modified, or completely new models may be developed. However, models and approximations remain a substantial and inevitable part of explanations of complex systems, and not some temporary deficiency.
5. Conclusion
Thus far I have discussed both objective and subjective features of approximations, and one might argue that approximations have either a subjective character or an objective one and that both cannot be true at the same time. However, the point is that these features are not manifested simultaneously and in the same respect. It is worthwhile to summarize my view once again.
In many essential regards there is no basic difference between approximations and ‘exact’ equations. The natural sciences combine objective and subjective sides that are inseparable. On the one hand, the potential goal of science is to reflect nature in the most exact way, which means objectivity. On the other, science is created by humans and simply would not exist without the existence of subjects. Therefore science has inevitably subjective aspects. The technical aspects of the formulation of results and their dissemination, the particular ways of advancement in science, the existence of different although mostly complementary approaches – all these bear a strong flavor of subjectivity. Science cannot exist without such notions as understanding or intuition, which are clearly subjective. Science is a kind of interface between objects and subjects, and the same refers to its important part – approximations.
The term ‘approximation’ belongs to the well-established and universally accepted terminology of the exact sciences, and one should not change this terminology by substituting other terms, such as ‘theorem’. The latter has a different meaning and cannot substitute for ‘approximation’. It is important to develop a proper understanding of the term ‘approximation’ and to appreciate its significance in all aspects of science, including ontological and epistemological implications.
Now I summarize the main points of this work.
• A physical theory should not be blamed for using approximations because approximations are ubiquitous in the ‘exact’ sciences. Only invalid, physically (and mathematically) unjustified approximations discredit a theoretical scheme, and an approximate approach should not inappropriately be extended beyond its applicability domain. Acknowledging the approximate character of a theory or an approach cannot terminate a scientific or philosophical discourse, but is only the beginning. The failure to recognize the approximate character of a notion leads to fallacious absolutization and philosophical confusion.
• A valid approximation is not a researcher’s subjective and voluntaristic construction, but a reflection of nature’s features; it is not inferior to ‘exact’ equations. Approximations reflect the more qualitative side of nature, while ‘exact’ theories tend to characterize its quantitative side. Valid approximations are deeply rooted in nature; in some sense they are observable via characteristic features of natural phenomena.
• The hierarchy of approximations creates a path (and probably a unique one) to scientifically constructed qualitative images, notions, and patterns that emerge from ‘exact’ equations. Basing studies on approximations yields semi-quantitative and qualitative approaches, which are invaluable in science, particularly in chemistry. Thus, approximations are the most precious fruits of theory and should be considered in the philosophy of science.
• The ‘exact’ quantitative approaches and the intuition-inspiring approximations form a complementary pair in the universal sense of Niels Bohr’s complementary relations in nature and society. In this dual relation, the quantitative results represent the more objective side of nature while the qualitative approximation-induced images rest on the subjective side of the researchers’ interpretation of nature. Very often we progress in science via the development of approximate approaches.
The author is grateful to Yu. N. Demkov, J. B. Greenwood, G. Yu. Kashenock, W. H. E. Schwarz, and R. Vihalemm for useful discussions and to J. Schummer for careful reading of the manuscript and much useful advice.
[1] Among the notable exceptions I indicate papers by Fock (1936, 1974), Pechenkin (1980), Ramsey (1997), Del Re (2000), and Friedrich (2004).
[2] It should be recognized that the present author, as a practicing physicist, holds to realism and understands by ‘nature’ an objective reality, as opposed to the subjective observer.
[3] The distinction between approximations and models is an interesting and sometimes subtle issue not pursued here. However, one aspect could be indicated: approximations are derivable from more general (i.e. more exact) theories, while models are constructed in order to grasp some important features of physical reality. From this point of view, the non-relativistic Schrödinger equation is rather an approximation (since it is derivable from the Dirac equation) than a model.
[4] The Born-Oppenheimer and adiabatic approximations are often not properly distinguished in the literature. In the rigorous sense, the Born-Oppenheimer scheme implies expansion of the total (electronic and nuclear) molecular Hamiltonian in terms of a small parameter that proves to be (me/M)^(1/4), which is much larger than the mere ratio me/M. In the lowest order of the approximation, the atomic nuclei are localized near their equilibrium positions and their motion proceeds in a harmonic oscillator potential. Anharmonicity appears in the higher orders of the approximation. Thus, the genuine Born-Oppenheimer scheme is inconvenient when the strongly anharmonic vibrational motion close to the dissociation limit is considered. Moreover, the scheme is fully inapplicable to the treatment of atom-atom (or atom-molecule, or molecule-molecule) collisions. The adiabatic approximation is devoid of these deficiencies.
[5] There are some other cases, not related directly to chemistry, where a composite quantum system exhibits properties that are interpreted in terms of a shape. Some heavy atomic nuclei show rotational structures in their energy spectra, which is evidence of the non-spherical (ellipsoidal) shape of such nuclei. In the nuclei, all the constituent particles (nucleons) have comparable masses, and the spontaneous breaking of spherical symmetry cannot be explained via the Born-Oppenheimer approximation. A vibro-rotational structure was also found in the energy spectra of doubly excited atomic states (see, for instance, Prudov & Ostrovsky 1998 and the bibliography therein). Isolated atoms might have anisotropic properties that also do not rely on the Born-Oppenheimer approximation. For instance, the excited states of the hydrogen atom might possess an electric dipole moment (so-called Stark or parabolic states). For an arbitrary atom the states with a non-zero total angular momentum J and definite projection MJ are magnetic dipoles. The states with J > 1/2 have an electric quadrupole moment, etc. All these anisotropic properties are revealed by the application of weak external fields.
[6] Note that the truncation of a basis set is an approximation.
[7] A careless characterization of orbitals as ‘mathematical constructs’ sometimes appears even in the professional physics literature. The most recent example is Itatani et al. 2004. The wording (used cursorily in the abstract) contradicts the content of the paper, which discusses a sophisticated experimental technique employed for the observation of orbitals.
[8] In quantum mechanics a wave function might be expanded over different basis sets. It is said that the set of expansion coefficients provides a wave function representation in a given basis. Thus, representation is a rigorously defined notion of quantum theory. All the representations contain equivalent information on the wave function. They are related to each other by unitary transformations. Among the most frequently used representations are coordinate, momentum, and energy representation; the latter one employs a basis of eigenfunctions of energy, i.e. the Hamiltonian operator.
[9] Note that not only the outer (valence) orbitals, but also the inner-shell orbitals might be probed in this way.
[10] The choice of this particular recent review-type paper is rather incidental, since observations and calculations of these types of phenomena have been carried out for decades.
[11] The actual experiment might measure the charge density of an electron cloud that is proportional to the probability density, see also Section 3.5.
[12] In the philosophical literature, some experiments are characterized as theory-laden, implying that they are not trustworthy. Actually almost all serious current experiments are strongly theory-laden. Of course, vicious circles are to be avoided and the applicability of theoretical formulations should be carefully controlled.
[13] Thus the philosophical criticism of the observability of orbitals was met with skepticism in the physics community.
[14] The configurational space is used to describe the motion of classical particles. For N particles it has dimensionality 3N. Nevertheless this does not preclude visualization of classical particle motion, because classical objects are sharply localized, in contrast to quantum particles, which are spread out in space.
Belyaev, V.B.; Motovilov, A.K.; Miller, M.B.; Sermyagin, A.V.; Kuznetzov, I.V.; Sobolev, Yu.G.; Smolnikov, A.A.; Klimenko, A.A.; Osetrov, S.B. & Vasiliev, S.L.: 2001, ‘Search for Nuclear Reactions in Water Molecules’, Physics Letters B, 522, 222-6.
Bohr, N.: 1999, Collected Works, Vol. 10 (Complementarity beyond Physics), ed. D. Favrholdt, Elsevier, Amsterdam.
Brion, C.E.; Cooper, G.; Zheng, Y.; Litvinyuk, I.V. & McCarthy, I.E.: 2001, ‘Imaging of Orbital Electron Densities by Electron Momentum Spectroscopy – a Chemical Interpretation of the Binary (e, 2e) Reaction’, Chemical Physics, 270, 13-30.
Chung, K.T.: 2004, ‘Resonances in Atomic Photoionization’, Radiation Physics and Chemistry 70, 83-94.
Del Re, G.: 2000, ‘Models and Analogies in Science’, Hyle – International Journal for Philosophy of Chemistry, 6, 5-15.
Del Re, G.: 2003, ‘Reaction Mechanisms and Chemical Explanation’, Annals of the New York Academy of Sciences, 988, 133-140.
Feng, R.; Sakai, Y.; Zheng, Y.; Cooper, G. & Brion, C.E.: 2000, ‘Orbital Imaging for the Valence Shell of Sulphur Dioxide: Comparison of EMS Measurements with Near Hartree-Fock Limit and Density Functional Theory’, Chemical Physics, 260, 29-43.
Feynman, R.P. & Leighton R.B.: 1964, Feynman Lectures on Physics. Electromagnetism and Matter, Addison-Wesley, London.
Feynman R.P.: 1992, The Character of Physical Laws, Penguin, New York.
Fock, V.: 1936, ‘Printzipial’noe Znachenie Priblizhennykh Metodov v Teoreticheskoi Fizike [Principle Significance of Approximate Methods in Theoretical Physics]’, Uspekhi Fizicheskikh Nauk, 16, 1070-83.
Fock, V.: 1974, ‘Printzipial’naya Rol’ Priblizhennykh Metodov v Fizike’ [Principle Role of Approximate Methods in Physics], in: Filosofskie voprosy fiziki [Philosophical problems in physics], Leningrad State University Publishing House, Leningrad, pp. 3-7.
Friedrich, B.: 2004, ‘Hasn’t it? A Commentary on Eric Scerri’s paper "Has Quantum Mechanics Explained the Periodic Table?", now Published under the Title "Just How Ab Initio is Ab Initio Quantum Chemistry’, Foundations of Chemistry, 6, 117-132.
Garcia-Ripoll, J.J. & Perez-Garcia, V.M.: 2001, ‘Vortex Bending and Tightly Packed Vortex Lattices in Bose-Einstein Condensates’, Physical Review A, 64, 053611 (1-7).
Garcia-Sucre, M. & Bunge, M.: 1981, ‘Geometry of a Quantum System’, International Journal of Quantum Chemistry, 19, 83-93.
Hecht, E.: 2002, Optics, Addison-Wesley, London.
Itatani, J.; Levesque, J.; Zeidler, D.; Niikura, H.; Pepin, H.; Kieffer, J.C.; Corkum, P.B. & Villeneuve, D.M.: 2004, ‘Tomographic Imaging of Molecular Orbitals’, Nature, 432, 867-71.
Jasper, A.W.; Kendrick, B.K.; Mead, C.A. & Truhlar, D.G.: 2004, ‘Non-Born-Oppenheimer Chemistry: Potential Surfaces, Couplings, and Dynamics’, in: Modern Trends in Chemical Reaction Dynamics: Experiment and Theory, Part I, World Scientific, Singapore, pp. 329-391.
Litvinyuk, I.V.; Zheng, Y. & Brion, C.E.: 2000, ‘Valence Shell Orbital Imaging in Adamantane by Electron Momentum Spectroscopy and Quantum Chemical Calculations’, Chemical Physics, 253, 41-50.
Migdal, A.B.: 1989, Qualitative Methods in Quantum Theory, Addison-Wesley, New York (1st edition, 1977).
Ogilvie, J.F.: 1990, ‘The Origin of Chemical Bonds – There are no Such Things as Orbitals’, Journal of Chemical Education, 67, 280-289.
Ostrovsky, V.N.: 2001, ‘What and how Physics Contributes to Understanding the Periodic Law?’, Foundations of Chemistry, 3, 145-182.
Ostrovsky, V.N.: 2003a, ‘Physical Explanation of the Periodic Table’, Annals of the New York Academy of Sciences, 988, 182-192.
Ostrovsky, V.N.: 2003b, ‘Modern Quantum Look at the Periodic Table of Elements’, in: E.J. Brändas & E.S. Kryachko (eds.), Fundamental World of Quantum Chemistry. A Tribute to the Memory of Per-Olov Löwdin, Vol. 2, Kluwer, Dordrecht, pp. 631-74.
Ostrovsky, V.N.: 2004, ‘The Periodic Table and Quantum Physics’, in: D.H. Rouvray & R.B. King (eds.), The Periodic Table: Into the 21st Century, Research Studies Press, Baldock, UK, pp. 331-70.
Pechenkin, A.A.: 1980, ‘Priblizhennye Metody v Teorii Fizicheskogo Znaniya (Metodologicheskie Problemy)’ [Approximate Methods in the Theory of Physical Knowledge (Methodological Problems)], in: Fizicheskaya teoriya [Physical Theory], ed. Nauka, Moscow, pp. 136-153.
Prudov, N.V. & Ostrovsky, V.N.: 1998, ‘Vibrorotational Structure in Asymmetric Doubly-Excited States’, Physical Review Letters, 81, 285-8.
Pyykko, P.: 1988, ‘Relativistic Effects in Structural Chemistry’, Chemical Reviews, 88, 563-94.
Ramsey, J.L.: 1997, ‘Molecular Shape, Reduction, Explanation and Approximate Concepts’, Synthese, 111, 233-51.
Scerri, E.R.: 2000, ‘Have Orbitals Really been Observed?’, Journal of Chemical Education, 77, 1492-1494 & 79, 310.
Scerri, E.R.: 2001, ‘The Recently Claimed Observation of Atomic Orbitals and Some Related Philosophical Issues’, Philosophy of Science, 68 (Proceedings), S76-88.
Scerri, E.R.: 2003, ‘Löwdin’s Remarks on the Aufbau Principle and a Philosopher’s View of Ab Initio Quantum Chemistry’, in: E.J. Brändas & E.S. Kryachko (eds.), Fundamental World of Quantum Chemistry. A Tribute to the Memory of Per-Olov Löwdin, Vol. 2, Kluwer, Dordrecht, pp. 675-94.
Wang, S.G. & Schwarz, W.H.E.: 2000a, ‘On Closed Shell Interactions, Polar Covalence, d Shell Holes and Direct Images of Orbitals: the Case of Cuprite’, Angewandte Chemie International Edition, 39, 1757-61.
Wang, S.G. & Schwarz, W.H.E.: 2000b, ‘Final comments on the discussions of "the case of cuprite"’, Angewandte Chemie International Edition, 39, 3794-6.
Wigner, E.P.: 1995, ‘The unreasonable effectiveness of mathematics in natural sciences’, in: E.P. Wigner, Philosophical Reflections and Syntheses, Springer, Berlin.
Zakhar’ev, B.N., 1996, Uroki Kvantovoi Intuitzii [Lessons of Quantum Intuition], Joint Institute for Nuclear Research, Dubna.
Zheng, Y.; Rolke, J.; Cooper, G. & Brion C.E.: 2002, ‘Valence Orbital Momentum Distributions for Dimethyl Sulfide: EMS Measurements and Comparison with Near-Hartree-Fock Limit and Density Functional Theory Calculations’, Journal of Electron Spectroscopy, 123, 377-88.
Zuo, J.M.; Kim, M.; O’Keeffe, M. & Spence, J.C.H.: 1999, ‘Direct Observation of d-Orbital Holes and Cu-Cu Bonding in Cu2O’, Nature, 401, 49-52.
Zuo, J.M.; O’Keeffe, M.; Kim, M. & Spence, J.C.H.: 2000, ‘On Closed Shell Interactions, Polar Covalence, d Shell Holes and Direct Images of Orbitals: the Case of Cuprite. Response to the Essay by S.G. Wang and W.H.E. Schwarz’, Angewandte Chemie International Edition, 39, 3791-4.
Valentin N. Ostrovsky:
V. Fock Institute of Physics, St Petersburg State University,
198504 St Petersburg, Russia; Valentin.Ostrovsky@pobox.spbu.ru
Copyright © 2005 by HYLE and Valentin N. Ostrovsky
I Drove The Tesla P85D, And Now Nothing I Drive Will Feel The Same Way Again
Growing up, life is full of a lot of “firsts” that we can somehow always seem to remember, be they influential on the rest of our life or not. My first kiss was a girl named Heather, my first car was a Mercury Villager minivan, and the first concert I attended was Ozzfest, the summer of 2001. This past weekend, I finally drove my first legitimate supercar, the Tesla Model S P85D, and now nothing I drive will ever feel the same way.
It’s not that I’ve never driven a fast car before, or that I haven’t “gone fast” either. The Saab 900 Turbo that was my second car took me north of 140 MPH more than once, but it took a lot of road and time to get there. My 1995 Trans Am was in a different class from the Saab, leaving strips of rubber in my wake wherever I went and getting me into trouble with Johnny Law on more than one occasion.
But the Tesla P85D is so fucking fast you barely have time to register just how fast you’re going. You just look straight forward, go woah, maybe swear, and then laugh and smile. I don’t cuss often here, because when I do, I want it to leave an impression. This is one of those times. The Tesla P85D is just pure, confident acceleration on four wheels. It feels like a rollercoaster just launched you into a loop-the-loop. When my co-pilot switched it into Insane mode, it was like somebody forced Bruce Banner to sit through Indiana Jones and the Kingdom of the Crystal Skull. Things got really angry, really fast, in an extremely satisfying manner. That’s a Hulk reference for you not-nerds.
I’m not sure insane is even the right word for it. If I may be so bold, the Tesla P85D is the definition of awesome, a word so overplayed and overused (yes, I’m guilty) that it has lost all sense of what it used to mean. Google defines awesome as “extremely impressive or daunting; inspiring great admiration, apprehension, or fear.” That’s the Tesla P85D for you, all wrapped up in one word.
The backroads stuffed into the corner of Greenwich, Connecticut where my road test took place weren’t long enough to fully wring out the P85D, but I had more than enough room to test the much-publicized 0 to 60 MPH acceleration. I didn’t bring any official instrumentation, but it sure feels like 3 seconds, and in the span of a couple of breaths I found myself practically floating my way from 25 to over 75 MPH. Just, whoosh, suddenly you’re pushed back in your seat and going faster than you ever thought possible. It’s like going mad with power; suddenly anything seems possible. You feel invincible, especially after the presentation boasting of their 5.4 star safety rating (yes, really).
My wife came along for the test drive, and her reaction can be summed up as “Holy shit.” In public settings she’s usually pretty quiet and reserved, but after the test drive all she could do was ask how I was planning on affording one. All in due time, sweetheart. All in due time.
Let me throw a few more descriptive sentences your way. It’s like, Google search results on gigabit Internet fast. It’s smoother than anything you’d find in the Victoria’s Secret catalog. It was every bit as glorious as the time I saw my first girlfriend’s boobs (not Heather). It is more motivational than any Hollywood-scripted high school sports team speech you’ve ever heard.
If you told me I could feed a thousand hungry children for a year, or I could have a Tesla P85D if I drove it past those same starving children, I’d have to think about it. Like, really, really think about it. I know that makes me a terrible person, but it really is that good.
I’ve never felt more motivated to make absurd sums of money than I am now, if only to one day own a Tesla P85D. I’m not a materialistic guy; I drive a Chevy Sonic, I live in a 900 sq-ft house, and the little extra money I earn tends to go towards new experiences rather than things. I still love concerts and road trips, and covering events like the New York Auto Show. Sometimes, I get a free trip on an automaker’s dime, but more often than not, it comes out of my not-very-deep pockets.
Stuff doesn’t matter as much as experiences to me. What I’m trying to say is, the Tesla P85D is an experience. It has changed the way I look at all cars. It’s like finally turning 18, and all the cool stuff adults have been hoarding for themselves suddenly becomes available. This is not the end-all, be-all of automotive creation, but it is a revelation of sorts. Imagine where electric cars might be today if automakers hadn’t been so stubborn about refusing to make them work.
There’s a part of me that feels bad for liking Tesla so much. I was a loyal and devout Ford fan for many years, and while I bounced back and forth between different brands, I always came back to Ford. I have a 1969 Mercury Cougar in my garage, and I’ve almost lost all interest in it. Even if I had an unlimited budget, I’d go talk to the guys at Bloodshed Motors before stuffing any sort of gas-powered engine back in there.
You can call me a Tesla fanboy, I really don’t care. If I did, I wouldn’t have lasted very long as a writer in the Internet age. I have my doubts that Musk can achieve the lofty sales goals he has laid out, and the Model X will be a good indication of whether Tesla was just a fluke, or Tesla can be a real player in a highly competitive industry.
It’s not like I won’t ever enjoy driving another car. Variety is the spice of life, and I’m always seeking another new “first” to mark off my list. My dream garage includes classic Mustangs and Porsches, modified Jeeps, lead-sled Mercurys, a lot of 80s and 90s cult classics from Japan, and in the real world I’ve owned a bunch of low-budget-but-interesting rides from a Nissan 240sx to my beloved Wrangler. I love cars, I really do. I have so many great stories with so many different cars.
I’ll always remember my Tesla P85D test drive with the same fondness as my rebadged Nissan minivan, the first time I fell in love, and the organized chaos of an amazing live music performance. Punching the throttle of the P85D gave me the same rush of energy and adrenaline, and every sports, performance, or supercar that comes next (may there be many more) will be judged by those first few moments.
What I felt was speed and excitement, my fears and anxieties left behind by the brutish power of the Tesla P85Das I charged headfirst into the future, no looking back.
• BigWu
My response after driving the P85 for the first time (at pre-production test drive event): “Holey sh!t, if this is what crack is like I gotta stay away from the drugs!”
Nearly 3 years on, my eyes still dilate with glee when I recall the pure unadulterated joy of that ride.
• Arthur
You’re gonna need morphine and not crack if you spend too much time in that back seat….just to ease the pain.
• Kenneth Beck
Have you even ridden in the Tesla before? No? Wow, I must be a psychic.
• Arthur
No, you are more of a psychotic.
• Kenneth Beck
Haha, you apparently don’t like it because things you don’t understand annoy you. No problem. Nobody cares about your blatant obsession with hating something you don’t know anything about. When you actually ride in the car, then you can talk; otherwise, you can shut up! Have a GREAT DAY
• TheLoneCoda
Hell, I still enjoy the torquey surge of my Coda. I’d probably lose my mind in a P85D.
• Arthur
Ummm…is that Skoda?
• TheLoneCoda
Nope. A 2013 Coda EV Sedan. One of the 500 built before they went out of business.
• Arthur
Looks like the motor is up front…much safer design than TESLA….but just as my Panamera can beat a P85D, my Golf TDi can probably beat your Coda in every way, especially efficiency…..my PORSCHE is miles ahead of the P85D in efficiency.
• TheLoneCoda
You have a troll-like distance from reality. I’m not even going to go into the inaccuracies of your statements.
• Arthur
So you don’t care about the fact that the natural gas power plants that are the #1 source of electricity in Connecticut have the same efficiency as my diesel, or less? What this means is that (and even a troll can figure this one out) for every 5 gallons of fuel you burn at the power plant, you’ve got 2 equivalent gallons of e-juice leaving the power plant. Then your e-juice still has to go through miles of transmission power lines…more miles of distribution power lines….many transformers (each with 1%+ loss)…and even your battery charger has more than 10% loss….bottom line, you start with 5 gallons of gas at the power plant…. you’ll be lucky if you’ve got an equivalent of 1 1/2 gallons in your battery…and then your Coda itself is at best 85% efficient.
Like I said, my Panamera Turbo S is more efficient than Eman’s PRECIOUS P85D. The MPGe rating on EVs only accounts for getting the e-juice from the battery to the wheels, and does not account for…in any way shape or form…the very inefficient process of getting the e-juice into that high nickel content battery.
• TheLoneCoda
No. I’m not going to lay it out for you. Your mind is clearly made up. I would say that my electricity comes from the sun, and so your argument is moot, at least for folks like me.
• Knetter
Don’t feed the troll. He’s clearly a complete idiot; look at his comment history. We have gems like this:
“So EVs and powerful plug-in hybrids are very popular among members of our community that have an alternative life style. Now the first symptom of negative health effects from EVs and powerful hybrids for adults is a certain type of bleeding. So our alternative lifestyle friends will go to their doctors in their EV or PHEV and complain about this symptom. The doctors will tell them that it is due to their lifestyle….and they will believe the doctors, who mean well and who think they are right, and our friends will continue to use their EVs and PHEVs. So here I am, a right-wing Christian conservative”
“I think Dealers are an important part of the equation, especially for safety. Most dealerships are privately owned, so when automobile safety concerns arise, dealerships provide a separate entity that can make its own judgments. Dealerships in many cases establish personal relationships with clients and as a result will care a lot about the well being of its customers. On the other hand, a corporation that bypasses the dealers is in a very good position to cover up certain dangerous issues, especially if these cannot be detected in the automobile insurance claims statistics. A perfect example of this is TESLA.”
• TheLoneCoda
• Arthur
Yeah….thanks for helping me get the word out to our fellow Americans.
You should really search my good buddy Bloomberg’s blogs..you might learn something young feller.
• Arthur
You left out some stuff, probably not intentionally….let me help you:
“So here I am, a right-wing Christian conservative, telling our alternative lifestyle friends that their doctors are not aware of a powerful carcinogen that is the most likely cause of their symptoms and a cancer that will take years to develop. And then there are all the nervous system disorders that take even longer to develop.”
• Arthur
How many trees did you chop down to have enough solar panels to supply your home and car with the e-juice, especially in the cold winters to heat your home? You know Connecticut is prime for growing trees and forests. Also, when you say….”would”….does that mean that mean “can” and that all the electricity you use comes from the sun? And yes I do know how to do the math with solar, and I am all in favor of placing it on roofs….but don’t you chop down my trees young feller.
• Knetter
You won’t be here long. Have a good day, troll. Ya old turd
• TheLoneCoda
So solar panels are made of wood are they, Abraham Lincoln?
• Arthur
No, solar panels need real estate, and so do trees. What that means is that you can’t have a tree and a solar panel in the same spot. I’m not sure why only a troll can understand that. Lots of God’s creatures depend on trees, and the more vegetation and forests you clear to make room for your solar farms, the more wildlife you kill, not to mention the fact that you’re depleting plants that convert carbon dioxide to oxygen.
We should be using roofs and deserts. The photo of your coda has lots of arable land in the background, telling me you don’t live in the desert.
And thank you for the best complement any old timer like me can get, but unlike Abe, I’m actually planning on going to hell to chase and wallop all the godless child experimenters till kingdom come. I just haven’t decided on the proper running shoes yet.
• Kenneth Beck
Everyone calls you out as an idiot for a reason…because you are. Learn science and technology and stop your Drano-snorting regimen. No wonder America has so many issues. It’s morons like you that keep it from moving forward. Natural gas power production efficiency is higher no matter how you put it. It takes 5 kWh of electricity to produce a gallon of gas. Also, fossil fuels need to be transported MANY times before they hit your tank. THEN your engine is only 25-40% efficient MAX, and that is steady highway driving. If the car sits and runs, it’s lower than that. You cannot refute those facts, and if you even try, it shows just how dumb you really are
• Maxwell Erickson
“Natural gas plants have the same efficiency as diesel” fine, but the vast majority of great people do not live in Connecticut, and they realize the Li-ion battery Tesla uses doesn’t use any sort of “high nickel content.” You’re thinking of LG Chem and their LiNiCo batteries. Oh, and if you want to talk efficiency at the user end “your Coda itself is at best 85% efficient” then come back when your precious Porsche can hit more than 55% efficiency on a good day at 43 mph.
Ahem. You also attempt to go into specifics about the loss of efficiency from the source of electricity to the end user, which is absolutely laughable, as the gasoline you’re using for the Panamera came from halfway across the flipping planet.
• Arthur
Our number one source of foreign oil is Canada….numbnuts.
I purchased and examined the same Panasonic batteries as the ones in the Tesla (except Tesla does not include the logic circuits to save cost)…and they are Nickel based….and Tesla will need to reach out to the Russians if Tesla wants to get Nickel cheap. Doubtful Tesla will get it for anything close to what China pays.
The United States, Japan, China and Europe tried to keep the space program alive in Russia by purchasing their very dependable rocket engines and by providing other funding to make sure the world’s scientific community does not loose the very valuable part of space research that is Russia. So along came Mr. Musk and tried to destroy all that trust and cooperation.
• Kenneth Beck
Hmmmm, you start with using fuel to pump oil out of the ground, use fuel to transport to refineries, you use fuel to refine the fuel, you use fuel to transport to gas station, then you use more fuel to pump the fuel into your gas tank, then your car is only 30% efficient at moving the car with the fuel. Sounds like it is way more efficient to me! LOL.
• Arthur
So the fossil fuel burned in our fossil power plants just rains down from heaven, like manna?
Again, 70% of our electricity is generated from fossil fuels. Most of the rest from plutonium (no, that’s not a dog breed).
Modern truck engines are about 50% efficient. My VW’s engine is over 40% efficient. Typical power plant is significantly below 40%……I usually round it up to 35%.
The best apples to apples comparison is a Nissan Leaf being fueled by a LNG plant to a Honda CNG….and the Honda will win. Let me know if you need to have this explained to you…I am willing to make an effort to make it as simple (and understandable for someone of your level of mental ability) as I can.
• Jared Banyard
Umm, you are confusing some statistics. Fossil fuel power plants don’t use the same fuel that goes in a gas vehicle. It takes a lot of energy to refine the fuel for gas cars, transport it, store it, and use it. You have to take all this into account. And the power grid is a mix of all kinds of electricity.
Electric cars are MUCH more efficient about taking the energy out of a battery and putting it to the wheels than gas vehicles. Probably talking 75% vs 25% on average.
You are also not counting the fact that many electric car drivers also have solar on their home.
I wouldn’t make such broad statements. Here is some light reading I googled:
• Kenneth Beck
Exactly what I was trying to get through his thick skull. It doesn’t matter what you say or what facts you show; he is so adamant about his position that he will blatantly ignore your facts and state a stupid reason that shows he doesn’t listen.
• Kenneth Beck
OK, so even if you completely ignore refining inefficiency and transporting fuel, the real-world efficiency of engines is much lower than what you said. Meanwhile coal plants, which by the way are being shut down across the USA in favor of higher-efficiency natural gas plants, are not that low in efficiency. But even if they somehow were, it would still be more efficient than a car that uses gasoline or diesel, due to the huge losses from transportation and refining of the fuel. Every gallon of gasoline made in just the refining process wastes enough power to make my Nissan Leaf go 20 miles! Look up the numbers, or just drive by a refinery and see the power substation they need all to themselves just for refining fuel.
• Arthur
Well then young feller, you can drive your PRECIOUS all you want but it will be a cold day in hell before you force me into that highly electrified contraption dreamt up by an atheist. Just give me the Lord of the Rings, an RS7, and you don’t have a prayer of keeping up with me around the racetrack. And as far as picking up children along the way, it’s best that you keep going and let me pick them up and drive them in something safe and not something that will give a powerline worker a headache if he or she sits in the back seat for too long.
• Maxwell Erickson
Please stop making Lord of the Rings references. I’m a JRR Tolkien fan, and I deeply resent idiotic ultracrepidarians desperately attempting to justify gasmobiles by telling themselves that, somehow, EVs are more carcinogenic than cancerous-smog-belching, inefficient, overcomplicated pollution machines.
• Arthur
Boy….I was readin Tolkin books when you were still just an lectric impulse in your parent’s brains….good thin there was no magnetic blender in the form of a 400 horepower lectric moter to disrupt the impulse.
Now as far as the EVs go…the tailpipe is there…miles away..in a smokestack. I explained the losses a few of them blogs below. Also, smog is not a problem in the vast majority of American cities. Shanghai is so polluted on account of industry and all the other things made in Confucius’ land, not automobiles. We’ve got three times as many cars in NYC. Fact is, ever one new EV on the streets of Shanghai requires another tonne of coal burnt every some weeks at any one of the seventeen coal power plants in Shanghai….and oh..by the way…yer PRECIOUS’ batteries are Nickel based Panasonic NCR18650A made cheaper by removing the logic circuits.
The added benefit of powerful electromagnetic fields is that they don’t affect just a particular group of cells in your body like tobacco or benzene would…..they permeate the whole body and hit everything and are capable of affecting various chemical processes within your body, including the synthesis of tumor suppressor proteins. The rest of my blog below is optional readin material….
My other alias is U2art, and yes there is a musical connection. Oddly enough, the one number that always comes to mind when I think of U2 is the 8-ball ….on account of the garments that the Edge sometimes wore. When I met my favorite CEO last Zeptember, she was 53 years old….so 5+3=8. When I was picking up my Panamera in Manhattan last December, my favorite CEO was down the block doing an interview, still 53 yrs old. Later that day, I was in a NJ rest area in Montvale scanning through the wireless networks listening to a U2 tune….and the two numbers that stood out the most on the screen were 5 and 3. Last October I befriended a 53-year old who died a week later. The man that took his place was at one point a successful investor who traded thousands of stocks…but the only one he ever mentioned to me was 5th 3rd Bank…..in January, I must have gone to 30 Fifth Third banks looking for answers. There’s lots more…so what’s my point? Guess what is the number of the protein series that is our tumor suppressor protein…the guardian angel protein…I know….you have no clue what on earth I’m talking about….but some of our readers might….I think some feller above asked a question about religion.
• zn
I have much respect for the mighty RS7, but this would smoke the rings off that baby all day. My grand hope is that Audi follows Tesla’s lead and throws out an electric RS7 that does justice to fine German engineering.
• J_JamesM
What does it matter that the designer doesn’t believe in God? Is that a serious consideration for you people? I sure wouldn’t want to live like that…
• Arthur
Lot esier for satan to tempt a non-believer…young feller…one thing’s fer sure…..that madman Musk is goin straight to hell once he’s done caterin to the evil one…aint no matter if hes on the Earth or on some forsakin red planet…..gonna give barrier tunnelin a whole new meanin.
• Perttu Lehtinen
Sounds rational.
• Deep Time
He said “boobs.” Heh heh.
• MisterEman
I have a P85 and didn’t think I’d notice the difference with the P85D. Last week, I test drove a “D” while taking a Tesla factory tour in Fremont (have to schedule it ahead of time). Boy, was I wrong! For a split second I couldn’t catch my breath. A tinge of panic grabbed me before I realized I was still in control. The 4 tires grabbed the pavement and threw the planet behind me violently. What a rush! The difference between 0-60 in 3.2 seconds vs. 4.2 seconds may not sound like much, but it is a 25% increase. I understand that with the latest software upgrade on 4/13, it’s down to 2.9-3.0 seconds. If it gets any lower, people may lose consciousness. Wow!
• Arthur
Now hold on there young feller, what I want to know is if that 5000 pound combination of a battery made in Japan and electric motors on skinny wheels turns corners….or does it just go in a straight line. My Golf TDi has a shorter stopping distance, so I’m gonna be very careful stepping on my brakes if that non-green machine of yours is tailgating me on the Wilbur Cross cause it can’t deal with the turns.
• MisterEman
Yes, Grandpa, it does hold the ground. Those 7,000 batteries are in the floor pan below the axles. That makes its CG so low you couldn’t tip it over if you tried. Very flat around corners. NHTSA had to bring in a forklift to tip their Model S over during their safety testing – driving wouldn’t do it. And your shorter stopping distance, if true, is no doubt due to the fact that your TDi can’t get above 45 mph unless it’s going downhill with a good tailwind. Keep on with that smelly beetle bomb of yours and giving your money to Shell Oil. And I’ll try to “keep out of your yard!” 🙂
• Arthur
I think you’re the first feller that has not complained about the lousy suspension of the TESLA. Lousy suspension + skinny tires + heavy weight = bad cornering ability in my book…unless that there battery pack on wheels contraption aint need to follow laws of Sir Isaac Newton.
As far as my Fahrvergnügen VW that is greener than any EV out there, what it lacks in horsepower it makes up for in good ol fashion torque. Fact is young man, my VW TDi has more torque per kilogram than many TESLAs, and it’s much better for the environment.
Now as far as all you lyin about your EV being zeromissions and all that, I say horsefeces (different word comes to mind but I try to be respectful)…I can see your tailpipe from miles away at that ConEd plant.
Now if yer TESLA really got the 90 MPGe considerin burnin LNG at the source, perhaps I would be less critical…but the rear seat design bothers me a helluva lot more.
• MisterEman
There, there, old fart, calm down. Maybe you’re getting your strained peas mixed up with your Milk of Magnesia. It happens. Remember, the only fellers you’re talking to about Teslas are your friends in the nursing home. All the real “outside people” rave about Tesla’s handling and suspension (Motor Trend, Consumer Reports, Autocar, etc.). And check out any scientific study (that’d be fact-based stuff) and you’ll find that EV’s pollute at least half what hybrids do even in 90%+ coal-fired electric power plant states. And since you brought up Newton, here’s something he’d appreciate: a report from the Union of Concerned Scientists on how EV’s are much better for the environment and getting better all the time:
Now, as far as your Fartinengine VW diesel, I can smell it from here, and it’s making me wheeze, you know, like you do when Vanna White walks out on Wheel of Fortune. You obviously have a lot of time on your hands to troll the message boards, and I can tell it means a lot to you to go back and forth like this. But, while it’s been fun, know that this is my last post to you as I have better things to do. And my posts were more for any others looking in (hope you’ve all enjoyed!). Now as for you, Arthur, it’s about nap-nap time, isn’t it? You can troll with your friends later. Bye-Bye!!
• Arthur
Go stuff some ice bags under your PRECIOUS….this way it might just be able to keep up with one of them there vertible Miatas for one lap round the track….but I suppose that’s why those joyriders at Motortrend call it the best and fastest….young feller…cause with enough patience and trips to a local 7-Eleven and Home Depot, it may just be able to do a lap without overheatin. Just make sure young feller that you don’t let the battery drain fully….I understand it then becomes a brick…you will need a new one…but don’t you fret none, those bloggers for the Union of Concerned Scientists will figure out how to turn a reakin and plenty toxic landfill full of batteries into a playground for children….you know back in the day, scientists discussed things like the Schrödinger equation but now everone thinks their some scientists of sorts on account of being able to blog on some inernet site and poke fun at old folks.
Vanna White? Bummer….how on earth did you find out about me and Vanna….just don’t tell my wife….she can get pretty worked up over those kinds of things and then theres all sorts of flyin objects….best my ol lady don’t find out.
• PreserveOurRepublic
MisterEman…it’s pretty clear you are writing the comments of Arthur as well. Not sure why…perhaps you are a bigot?
• MisterEman
Wow… didn’t see THAT comin’! Yeah, that’s what I like to do all day. Why don’t you go back to studying yer constitooshunal soverin sitizin screed.
Sheesh, who let all these idiots on the web? I remember a time when the Internet was accessible only to thinking people. GD Mosaic!!
• Arthur
Why that’s some mighty clever master debating MiserEman. When you mutter: “And check out any scientific study (that’d be fact-based stuff) and you’ll find that EV’s pollute at least half what hybrids do even in 90%+ coal-fired electric power plant states.” you are leavin your precious BEVs open for some serious reaming. So you are insinuatin that EVs pollute twice as much as hybrids since “twice” follows yer conditional statement of bein “at least half”.
Futhermore, the betery in them there TESTAs is some forty times some the size of the unit in those queer (means strange young feller) hybrids, so much more pollutants and mutanogens in them there landfills for our kids to contend with, specially the Nikel.
Now as far as yer UCS, I don’t care much for unions…as these lead to socialism…the silencing of individuals. Look what the NAZIs did….socialism just plain sucks…and all forms thereof…it’s anti-religion so it’s not surprisin that socialism encourages human xperimentashon which is epitomised in yer TESTA……now seein how you’re one cantankerous testy feller…I’ll be watin for a response..and don’t waste my time by conterdikin yerself again by the unknowingly usin paronomasia….”pollute at least half” my S.
• MisterEman
News Item: “VW caught cheating on clean diesel emissions tests. CEO resigns.”
(Please lips… don’t unpurse)
• Perttu Lehtinen
I love the characteristics of electric motors. I couldn’t care less about your shitty and noisy TDi with a complex gearbox.
• Arthur
lectric motors are fine when they’re far from the little ones. It’s when you place one that’s over 400 hearsepower next to a child that you’re inviting to be beaten up all the way to the gates of hell…and the evil one knows all too well what I look like.
• WeaponZero
Tesla Model S P85D stops from 60-0 in 104ft
VW Golf TDI stops from 60-0 in 117ft
Aka, Tesla P85D has shorter stopping distance. And yes the P85D goes well on corners, far better than your Golf TDi.
• Arthur
WOW…104 ft for the P85D from 60 to 0…man…the Z28 needs 115 ft.
Is that Motortrend that you’re getting your numbers from? MT reported 70 to 0 mph for P85 as 147 feet….my Panamera Turbo S has larger and higher performance disc brakes ($10G option), less inertia and much better and wider tires..and takes at least 150 feet to stop from 70 mph. Clearly MT is fabricating data, not reporting the truth on the TESLA. As you have stated many times, MT and other publications love the TESLA, so no wonder they are now busy deleting all the 70 to 0 mph breaking distance search results because it is easier to fool folks with 60 to 0 mph fake data. I doubt my Golf can stop faster than the BMW i8. The BMW stops from 60 in 119 ft, and all publications show the BMW beating P85 in stopping distance, and since the P85D has the same size brakes as the P85, the P85D will also take longer to stop than the P85 due to the extra 400 pounds of weight, and thus even longer than the BMW.
I was able to dig deep and find again the 70 to 0 data for my Golf and the Model S from the same publication (C&D): Golf TDi in 170 ft,
Tesla Model S in 174 ft.
Again, my golf will get 45+ MPG on the highway, the most efficient TESLA Model S will struggle to get 23 MPGe once we account for electricity generation, transmission, distribution and even you told me the charger is up to a 12% loss.
And no, Minnesota does not get most of its power from Hydro. In fact, most of it, by far, is from coal…so a TESLA in Minnesota is a worse polluter than a Hummer.
By the way, Edmunds had to have the entire drivetrain replaced in their long term tester Tesla, including the huge battery, after a ridiculously low number of miles. I think MT had to do the same after even fewer miles…and the thing kept overheating on them….so they figured just put on as many miles as possible with the wheels off the ground to avoid more embarrassment for their PRECIOUS…so I guess that was MT’s first driverless long term test ever.
So all the automobile manufacturers should design cars to have their entire drivetrains fail after 15,000 miles if they want their cars to be called the best ever by Edmunds and MT.
More good stuff…..Edmunds did a stunt where they drove the TESLA cross country…but they needed a Suburban as a shadow vehicle because the drivers in the rotation REFUSED to sit in the back seat of the TESLA for some reason. BUT YOU WANT TO PUT KIDS IN THE BACK SEAT AND GIVE THEM CANCER AND YOUR EXCUSE IS THAT BENZENE CAUSES CANCER SO IT IS OK TO EXPERIMENT ON KIDS WITH 400 HP ELECTRIC MOTORS AND NOT TELL THE PARENTS OF THE RISKS FROM LONG TERM EXPOSURE DUE TO ELECTRIC MOTORS WITHIN INCHES OF THEIR CHILDREN’S BODIES AND NO EFFECTIVE SHIELDING. By the way, why did Bart Barton call you an ISIS sympathizer? Is that your strategy….to convince Americans to kill off their kids in the back seat of a TESLA?
It’s May 3rd already…tell Mr. Musk that I’m done with him. The irony of all this is that he will need me for the next part of this war a helluva lot more than I will need him. My audience will not be impressed if I keep practicing on an easy target and show weakness by dwelling on some success. My people deserve someone who always ups the ante, someone who goes after a greater challenge as soon as it is within grasp. I will not disappoint them.
• WeaponZero
The reason why it is hard to find 70 to 0 data is because no one tests for it. Everyone uses 60 to 0 as the metric. I have never seen Motor Trend use 70 to 0. Maybe you are confusing them with someone else? Car and Driver sometimes reports 70 to 0, but Motor Trend never has.
The Model S with 174 ft from 70 that you are comparing against is the 60 kWh model, not the P85. The i8 has a stopping distance from 60 of 108 ft; the P85 is 113 ft.
The P85D does have new brakes. Not the same as the P85. I quote:
“The brakes bring big news, too. Rather than use a vacuum brake booster, Tesla uses an electromechanical brake setup. The feeling under your foot comes from the resistance of a spring and an electric motor. Tesla VP of vehicle engineering Chris Porrit says it’s like a steering rack on its side. The Porsche 918 is the only other production car using this system. The arrangement gives Tesla great flexibility with the automatic brakes in autopilot mode. The car can call for high-g braking in panic stops or gentle, chauffeur-style slowdowns. Concerned about brake feel? Tesla can tune it.”
If you want to compare the MPG of a Tesla Model S vs. a Golf and account for electricity generation, distribution loss and charger loss, you also have to count the refining loss, distribution loss, fill-up loss, storage loss, etc. You know, to have an equal calculation.
And where did I say Minnesota gets most of its power from hydro? MN is about 46% coal, so while coal is the largest source in MN, it does not make up most of the mix. Even if it were 100% coal, it would still pollute less than a Hummer.
Edmunds and MT both had early production models, that said most of their issues were noise related, not failure. Imagine you replaced a gas engine every time it made noise. After Tesla investigated, the culprit was a lose wire.
And no, Edmunds did not have a Suburban shadow vehicle; I think you are confusing Edmunds with someone else.
• Arthur
The Porsche 918 is a Hybrid, hence the need for special brake feel in order to better “blend in regenerative capability”. Without regeneration, the 918 would have even shorter stopping distances. What you have mentioned has nothing to do with the ability of the calipers to grip the discs.
For comparison, the 911 GT3 has a 70-0 stopping distance of 135′ (Car and Driver). The 918 with bigger ceramic brakes has 70-0 stopping distance of 142′ (Car and Driver). The new 911 GT3 RS will stop even better than the 135′. No mention of TESLA having ceramic breaks. Good luck telling folks than the gravitationally challenged P85 has a 70-0 stopping distance of 147′ when my lighter Panamera Turbo S with bigger brakes needs 150′. Oh yeah, mine are also ceramic. Oh yeah, mine are six piston up front, P85 has four. Oh yeah, my tires are also wider and lower profile…..But, you will say that 2+2=5 and all the oil haters will agree with you and the lies spewed out by MT.
Even Consumer Reports, a publication that we both agreed loves the PRECIOUS, lists a 60 to 0 stopping distance of 116 feet for a plain Panamera S (with skinnier tires and smaller brakes than mine and definitely no ceramic composite discs). Same publication (Best and Worst New Cars 2014), SAME PAGE (p 188), lists a 60 to 0 stopping distance of 128 feet for a P85. Again, P85D is heavier with no advantages in gripping the discs over the P85, so distance will be more. Now, 174 feet for 70 to 0 seems to make more sense for the overweight P85D. By the way, same CR magazine calls the TESLA their top-rated car ever.
Thanks for acknowledging that my diesel VW stops better than some Tesla variants. Also, R&T reported 0.86 g on the skidpad for a Model S back in 2013 (probably P60). I have not found a value less than this for a 2015 Golf TDi anywhere. P85D is reported as having 0.91 g. My other un-modified Golf pulls 0.94 g and stops from 70 in 157 feet….both easily beat the P85D (unless you trust the 147 ft MT reports for the P85). Your P85D might just be able to keep up with my Type R on the track, that is until the battery starts to overheat (which usually means after a half a lap)….or perhaps there will be a lose wire. Oh yeah, I forgot, MT and others do not care any more about anything other than 0 to 60 time, or 0 to 30 in case the Lord of the Rings shows up with 110 Octane fuel.
TESLA shows same disc size for P85D as for other variants, so it will have a longer stopping distance than the P85 since it is the behemoth of the bunch. The feel of the breaks will not enhance maximum performance. TESLA website also does not mention the number of pistons, so I am assuming the TESLA bloggers are not lying when they state only four pistons up front.
Performance minded enthusiasts care about stopping distance from speeds greater than 60 MPH. Road and Track even lists 80 – 0 distance. If a car stops well from 80 or 70, it will surely stop well from 60. The opposite is not true due to heat build-up, especially when you’ve got over 5000 pounds to slow down. Also, shorter stopping distances from higher speeds matter most in accident avoidance.
Regarding the backup drivers in the Suburban for the TESLA cross country trip, the following is from Jalopnik: “Last summer the car gurus at Edmunds.com took advantage of Tesla Motors’ ever-growing Supercharger network to drive coast-to-coast in a Model S in 67 hours and 21 minutes, record time for an electric vehicle. Carl Reese, of Santa Clarita, California, thought he and his friends could smash that record…. Reese and his fiancee Deena Mastracci sped across the country in their own red sticker-covered P85D with…..team of three more friends in a rented Chevrolet Suburban backing them up as timekeepers and support drivers.” OK…fine…after reading further they were not Edmunds, but thanks for helping me prove that neither the two Edmunds drivers nor these other cannonballers dared to have anyone sit in the back seat….I think we had the Melatonin discussion before and how long term exposure to powerful electro-magnetic fields affects various chemical processes in our bodies, even more so for children.
Regarding “Edmunds and MT both had early production models” and “culprit was a lose wire”…I did some more research…..WOWZERS!!! The Edmunds tester had to have the drivetrain replaced four times in one year!!! There is no mention of a lose wire….perhaps there was some bird poop on the drivetrain and we all know Mr. Musk is a perfectionist. Also, both MT and Edmunds took delivery more than a year after the first deliveries started and both were 2013 models…first model year was 2012..c’mon man!!!! Really? Both were early production models? So much for the “perfectionist” quality.
Bart Barton stated you are from Minnesota, so excuse me if that is not correct. Back in December, you stated on my second favorite politician’s business news site that where you live it’s 95% Hydro, which is not possible for Minnesota or any other state for that matter. Also, you did not seem to oppose Bart’s opinion of you being an ISIS sympathizer, so I’m assuming he is correct.
Here is Jan 2015 data for Minnesota: Coal: 2600 GWh, Nuclear: 1200 GWh, Gas: 300 GWh, Hydro: 50 GWh, Other: 1000 GWh. So TOTAL for January: 5150 GWh, this places coal at over 50%.
Regarding your statement that TESLA powered by 100% coal plant and charged in a typical home is better for environment than a Hummer…no comment…I don’t want to be rude.
We’ve been through this before….the power plant has to get it’s fossil fuel as well. If you’re going to use the delta for petrol cars as the extra energy used up by the tanker truck to deliver the petrol to the gas station….then here we go again for the tenth time: Tanker truck delivers 10,000 galons of petrol to the gas station and burns 100 gallons of diesel to do so…that’s a 1 % loss. Even if it is a remote gas station and the tanker truck uses 200 gallons for round trip (enough for over 1000 miles of travel), that’s only 2% loss. You stated yourself back in December that just the Tesla charger alone is up to a 12% loss. Also, you are probably assuming that your coal power plant is sitting on a coal mine, or perhaps that your gas power plant is sitting right over the shale that is being fracked. C’mon man!!!
What is important to note is that petrol and diesel is almost never used in power generation (except back-up generators). Thus, a petrol or diesel ICE automobile does not compete with our grid for fuel. Your PRESIOUS does, competing with homes, hospitals, malls, airports, street lights, schools, ports..etc.
I came back to this blog because I wanted to add one more important item, but then I had some spare time so I decided to take the opportunity to address your many questionable statements…I still can’t believe the drivetrain had to be replaced four times in one year in the Edmund’s TESLA tester. But the owners love their PRECIOUS, so they’ll keep reliability issues on the low down. Anyway, I would like to keep my promise from the previous blog and not beat up TESLA any more. Perhaps if you did not state that all those drivetrains had to be replaced on account of a lose wire, I would not go into the discussions above and simply say the following important item:
We started a discussion recently on EMF fields generated by strong DC currents. I want all our readers to believe me when I say that I will leave the 50 to 60 Hz AC EMF fields completely out of future discussions to the best of my ability. EVs generate mostly DC fields, so there is no reason to focus on AC fields in the 50 to 60 Hz range. Our grid has been around for a very long time….EVs are new. You see, I think some engineers got careless with their EV designs because they assumed that they were completely protected by the electric power industry and that anyone raising concerns about their PRECIOUS will have to talk to the power industry about electric power lines first. This is a completely wrong and irrelevant assumption. AC fields in the 50 to 60 Hz range are weak in EVs, so it is not important to bring these into future discussions. This means that the power industry will not be affected by the questions raised by engineers such as myself, engineers who are completely opposed to human experimentation and outraged by it. There is not much left for the human experimenters to hide behind.
• Perttu Lehtinen
Get a life.
• Kenneth Beck
I think you answered your own question on why, without ridiculous brakes, the Tesla still has extremely good stopping power. Regen. Even while braking, the regen provides as much as 80 hp of stopping power on its own. Then any stopping power above and beyond that comes from the brakes. I don’t know what your issue is with the Tesla vehicles, but I think you just like to try peeing on everyone’s convo because you cannot afford it. None of your arguments make any sense. You claim it can’t turn well because it is too heavy? Tires grip better with more weight, so that makes no sense. You claim stopping power sucks, but the system is a combination of regen braking and brakes, which offers better stopping ability. Then you claim it is less efficient than everything on the road because of coal power generation, line transmission loss, and charging efficiency? But again, it is an obvious show of ignorance on your part, because it has been proven numerous times that even with coal power, Teslas still make half the pollution of normal cars. The reason for that is that it takes 3-5 kWh of electric power just to refine a gallon of gas. It takes fuel/power to pump the oil out of the ground and transport it to refineries, then from refinery to gas station, then power to pump the gas into your car, THEN your car only uses around 30% of the gas energy content to actually propel the car.
If those claims weren’t bad enough, you go ahead and claim that it causes cancer???!?! WOW! I guess if electricity and magnetic propulsion cause cancer, you better get rid of your fridge, your computers, laptops, blender, microwaves, the alternator in your car, your car battery, your freezer…can’t use a furnace to heat your house…it has an electric motor to move air around….could cause cancer! I can’t even say any of this without laughing at how ridiculous that claim is! I could keep poking holes in every claim you keep making, but typing this is getting tiring…and obviously if you are that desperate to come up with something to make people avoid Tesla, then even proving you wrong won’t make any difference. You’ll just try to fight to the bitter end for some unknown reason.
• Arthur
I’m not fighting for bitter end….but don’t take my word for it…just ask some of my friends…they will tell you that this is just practice…they’re already placing bets on the real fight that I’m training for…and it’s not TESLA….and when my friends place bets, it’s for a bit more than a few thousand bucks.
Now regarding your senseless refusal to accept the laws of physics as these apply to four pot brakes and skinny tires trying to slow down two and a half tons of a battery on wheels that will kill many children from long term exposure:
Road & Track, Zeptember 2015, (Jimmy) Page 98:
Breaking 60-0 MPH:
TESLA Model S P85D: 123 ft
Volkswagen GTI: 116 ft
Lord of the Rings Audi RS7: 111 ft
My Panamera Turbo S with ceramic brakes and made by VW will stop even better than the Golf and should be close to the RS7. Once you start testing from higher speeds (70 MPH, 80 MPH), the PRECIOUS doesn’t have a prayer of even beating a Mini Cooper or a Miata. In fact, my Buick already beats the P85D from 60 MPH (same page R&T…119 ft) and handles better too (0.90g vs PRECIOUS’ 0.89g)
Now what was that about me “peeing on everyone’s convo”?
• Kenneth Beck
Right, because it didn’t take you a month to come up with something….and you blatantly ignore regen as braking power. Complete failure on your end, as you most likely don’t understand it. But hey, keep whining that some specs are SLIGHTLY lower than those of cars that are more expensive to begin with. I could quote the fact that it is cheaper to run a Tesla, with virtually no maintenance, but I’m sure you’ll ignore any other fact because you are so sure of yourself. You most likely haven’t even test driven a Tesla, so your comments mean nothing. I’m sure you were smart enough to actually drive both cars before comparing them, right?…oh, my bad, you haven’t.
• Arthur
Are you knocking futs? You commented on my statement three months after I made it and that is to a four month old article, and you’re crying that I took me a month to reply to your disconnection with reality. Just be grateful I noticed yer sorry S excuse of a comment. The only reason I responded was because I wanted to get a final printout and I noticed some newer comments. Fortunately gas2 website has the balls to keep my comments. Others don’t like reality so they ban me.
Then you say “Complete failure on your end….keep whining that some specs are slightly lower”….do you know what “complete” even means? How does “some specs are slightly lower” support your “complete failure” argument?
You keep conterdikin yerself (a form of asexuel reperduction).
Cheaper does not mean more efficient. Free charging stations does not mean better for the environment.
I don’t ignore facts and I’m not “so sure” of myself. I’m just waiting for all those ex-NASA scientists and engineers working at Spacex to come up with a valid answer…but they can’t…and believe me, it’s No. 1 on their priority list.
Did you ever even sit in the Lord of the Rings?
Also, I think I have listed more facts and statistics here than anyone else….but you will just say that I ignore facts. Here is a suggestion…why don’t you read all the comments first before making an idiot out of yourself….young feller.
• Jonny_K
But was it fast?
• zn
Great to see a real piece of writing for once. The reason I read GAS2 and not other green motor sites is because I like the writers. To say it politely, you guys still have your balls. Keep it up.
• Alexandre Dube
Every time I see article like this or comment it make me wonder if people before driving a tesla have drove anything else. My dad bought a P85 D and received it last week… I’ve tried it and I have to say it’s not the end of the world… I always had BMW M3 or AMG and my dad also have an Aston Martin Rapide and he prefer to drive his Aston Martin but like the tech inside the Tesla. We both agreed that it was not the fastest nor the funnier car we’ve drove. It is fast and it is fun but not the end of the world. I still like the big grounding sound of a V8 or a big Turbo while I’m pushing it :P. Also the finish inside the car is not top notch.
• Arthur
Hey there young feller….sounds to me like you wrote this right after your mind got scrambled some after sittin too long and at arms lenght to some seriosly powerfool lectric moters.
• Alexandre Dube
Yes sorry, I’m originally speaking French and German lol
• Arthur
OK there young feller. Bimmers are mighty nice….but my personal preference is PORSCHE…on account of there not being no substitute..and Vanna likes em too….gets her in the mood if yer knows what I mean…and on account of you liking big Turbo or V8..I think you do knows what I mean… Just don’t sit too long in your Papa’s PRECIOUS P85D’s back seat lest you don’t want any little ones in yer future…..that there pair of lectric motors will mutate your gonads if yer sit on em for too long. |
6bf35f161d7a9ea8 | Monday, December 22, 2008
Gregory Chaitin
Anyone who is interested in "The meaning of life the universe and everything" type stuff needs to keep tabs on Gregory Chaitin's work, such as this:
and this
Is God a computer programmer?
I have a feeling this guy is uninhibited enough not to worry about coming across as a bit of a kook. He may frighten the life out of some people! A bit of eccentricity can go a long way in science. Have a look at his splashy web site! It would be wrong to say that Greg is going places, because he is probably already there.
Thursday, December 18, 2008
Quantum Decoherence
Quantum Decoherence looks to be an idea that has a lot going for it. In fact it seems to tie up so many loose ends that I find the notion extremely attractive myself. As an explanation of the apparent sudden and random discontinuous changes of the quantum mechanical state vector, decoherence is just so neat. This web site sums up the theoretical attractions of decoherence theory. I have reproduced some of these attractions below (with my additional comments in brackets):
No additional classical concepts are required for a consistent quantum description. (A sharp distinction between macroscopic classical systems and microscopic quantum mechanical systems does not exist)
There are no particles (The universal ontology is a uniform one of waves only. The cosmos doesn't contain any dirty gritty bits, only smooth voluptuous waves)
There are no quantum jumps (No probabilistic discontinuous jumps of the state vector)
There is but ONE basic framework for all physical theories: quantum theory (No extra physics is needed to account for quantum jumps; we have the physics already in the form of various wave equations - we just need to apply these equations to the measurement of quantum systems with macroscopic systems)
There is no time at a fundamental level (That is, because all quantum equations are reversible, the cosmos is in principle reversible and time is an artifact of boundary conditions, end of story; in fact end of story telling as well)
Finally the Decoherence web site adds:
It is a direct consequence of the Schrödinger equation, but has nonetheless been essentially overlooked during the first 50 years of quantum theory.
What a deal. It’s hard to resist. No new theory; just the correct and insightful application of quantum equations, an application that’s been overlooked for the last 50 years. The whole thing leads to a seamless, ‘in principle’ smooth and deterministic physics with no need to lash on any ad hoc random jumps of the state vector. On this view the randomness of quantum theory is not absolute but only apparent. It is a product of the entanglement of quantum systems with the chaos of macroscopic objects used to measure quantum phenomena thus leading to the apparent, repeat apparent, random changes in state of microscopic systems.
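To make that mechanism a little more concrete, here is a minimal numerical sketch of my own (a toy two-level system entangled with a two-state "environment"; the numpy code and all the labels are just my illustration, not anything lifted from the decoherence literature):

import numpy as np

# A two-level system prepared in the superposition (|0> + |1>)/sqrt(2).
psi_sys = np.array([1.0, 1.0]) / np.sqrt(2)
rho_before = np.outer(psi_sys, psi_sys)
print(rho_before)   # off-diagonal terms = 0.5: interference is still possible

# A measurement-like interaction entangles it with two orthogonal environment
# states: joint state (|0>|e0> + |1>|e1>)/sqrt(2), ordered as system x environment.
joint = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
rho_joint = np.outer(joint, joint)

# Trace out (ignore) the environment to get the system's reduced density matrix.
rho_after = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_after)    # off-diagonal terms = 0: the superposition now looks like
                    # a 50/50 classical mixture, though the joint evolution was smooth

Nothing in this toy calculation ever "collapses": the joint state remains a perfectly smooth superposition, and the apparent 50/50 randomness only shows up once the environment is traced out and ignored.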
One question I need to look into is this: What does decoherence theory say about the case of not detecting a particle in a designated state? The failure to detect a particle in a state means that it must be in the orthogonal complementary state, which is in fact a superposition of many states. Can entanglement account for the apparent jump in state associated with not detecting a particle?
Decoherence theory has the touch and feel of a winner, especially as its reduction of explanatory entities is very much in the spirit of Occam’s razor.
However, I have my doubts. I have long noted the analogies between quantum theory and the probability envelopes of random walk, and I am now fixated on the idea that probability envelopes of a special quantum kind are incarnated as a “real” world ontology. These analogies suggest that we go the whole hog and expect these envelopes to behave like other probability envelopes when a change in information occurs: that is, the envelope “collapses”, or at least suddenly changes its form, under certain circumstances. I may well be backing the wrong horse, but the reason why I take the application of these analogies seriously is indicated below, where I note the parallels between quantum envelopes and conventional probability envelopes. In the following I use ‘real’ probability envelopes and not complex envelopes, so for a state represented by |p) we have |p) = (p|.
If we have two probability envelopes or ‘states’ |p) and |q), each of which pertains to one of two separate (= ‘orthogonal’) coordinates, then the state of the composite system is a two-dimensional probability envelope that can effectively be represented by the ‘outer product’ |p)|q), as in quantum mechanics proper.
Imagine that we have a particle in a probability state represented by the envelope |p), and we have another probability envelope on the same coordinate which is some kind of detecting ‘field’ or state, |q), that is capable of capturing the particle in state |p). Under these conditions the probability of the particle being captured by the detecting state is equal to the inner product, or ‘intersection’, (p|q), as in quantum mechanics proper.
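Spelling out that ‘intersection’ in symbols (my notation, with x the shared coordinate): for real envelopes the inner product is just an overlap integral,

$$(p|q) \;=\; \int p(x)\,q(x)\,dx,$$

while in quantum mechanics proper the envelopes are complex and the detection probability is built from the corresponding complex overlap, $P = |\langle p|q\rangle|^{2} = \big|\int p^{*}(x)\,q(x)\,dx\big|^{2}$.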
The algebra of quantum envelopes looks suspiciously like a kind of probability calculus but with real probabilities being replaced by “complex probabilities”.
The foregoing “state algebra” doesn’t produce any dynamics: that can be added with Schrödinger’s equation; as I have suggested in my book this equation has a close relation to the random walk diffusion equation.
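To indicate the formal kinship (this is only the free-particle case in one dimension, with no potential term, and not the fuller argument): the Schrödinger equation is a diffusion equation in which the real diffusion constant D has been replaced by an imaginary one acting on a complex envelope,

$$\frac{\partial p}{\partial t} \;=\; D\,\frac{\partial^{2} p}{\partial x^{2}} \qquad\longleftrightarrow\qquad \frac{\partial \psi}{\partial t} \;=\; \frac{i\hbar}{2m}\,\frac{\partial^{2} \psi}{\partial x^{2}}.$$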
To my mind quantum theory is too closely related to random walk and probability calculus to dismiss the notion of real collapses (and discontinuous changes of state). This need not be the Copenhagen type collapse which posits the presence of an observer. In my interpretation of quantum theory, the presence of a “detecting” or “capturing” state is sufficient for a possible collapse or a sudden change of state according to probability. I’ll be frank and admit that I’m expecting the collapses to be real because otherwise I’m confounded by the similarities with probability calculus. I’ll candidly admit that I’m applying an anthropomorphism in expecting the similarities of quantum theory with probability calculus and random walk not to be wasted. For me decoherence is an anticlimax, a solution by those who have either lost the plot or couldn’t see it in the first place; it cuts across my expectation of uncovering a meaningful, coherent story. (Although, of course, decoherence has its own cluster of alluring points as I have indicated above)
These ideas are, of course, highly speculative, kooky and frankly look to be rather dangerous conjectures to back. But then I’ve no reputation to lose. In contrast decoherence is the safe solution, the tidy deterministic solution; it’s the solution that we know in our hearts to be the likely one if we believe the universe to be a relatively prosaic closed system and not open-ended. In my opinion it’s the solution for the boys and not the men. However, if experimental work does skew the evidence toward the decoherence picture then count me out; I’ll have to concede and admit that the world is more boring than I expected!
Which theories we tend to support, need I say, is not merely a function of experimental data (which in any case is often not a sufficient sample to settle the matter), but also a function of idiosyncrasies in our background, our sense of analogy, our feel for elegance, what we are expecting to see, and even what we are hoping for. Vested interests and group identification also have a role here. These motivational factors have, needless to say, connections with background agendas, world views, hopes and aspirations. I don’t think it is wrong to have these background hopes and views, it’s only human; but it is well to be aware of them and how they are subtly influencing one’s hopes and expectations and how one interprets the data. Do not let these background influences hide in the subconscious. Be prepared to face them, challenge them, change them, and above all never, never, never be the slave of them and allow them to string you along. If a world view betrays you and fails as an interpretive structure in the face of contra-indicators, throw it away as you would a broken tool. Never fall for the fideist trap.
Wednesday, December 10, 2008
Protecting The Innocent
Prompted by the response I got to my last rather provocative post, I thought I would press on and think a little more about the atheist poster campaign. A quick look revealed few details about the thinking behind the campaign other than someone suggesting that it was a light-hearted campaign avoiding the unforgivable sin of preachy didacticism. Therein is the rub: how does one promulgate atheism when some of its conclusions suggest that no one should tell anyone else what to believe? The implementation of militant atheism has a consistency problem.
Dividing the population roughly into three categories (1. true believers, 2. true atheists, and 3. the rest, who have a spectrum of views), with which of these constituencies does the atheist campaign cut the mustard? Without some feedback it’s a difficult question to answer, but let me hazard that campaigns by either atheists or believers to garner support do best with their neighborhood constituencies; that is, with those who are closest to them in sentiment and thought. From this ‘local’ constituency ‘converts’ to the cause are reeled in, and the broad mass of stay-at-home agnostics are at least encouraged to make sympathetic noises.
Publicity campaigns put out by embattled subcultures may be less a rallying call to a target constituency than to the subculture itself. By giving that subculture a sense of identity, a sense of purpose, a sense of control, a sense of having the situation in hand, and a sense of destiny fulfillment, a vigorous foray into the world beyond can be a morale booster for a marginalized community and a way of avoiding brooding thoughts. The campaign may also serve as a gesture to disconcert diametrically opposed subcultures with a message of strength, confidence and vitality. Although I am not sure how the atheist campaign went down in its natural constituency, it is in this latter sense, if no other, that the atheist poster campaign has failed. This poster campaign is perceived by many Christians as extremely weak, weak to the point of being a laughing stock. Much of that is down to very deep differences between the world view logic of atheism and Christianity.
As I suggested in my last post strong conviction, vehemence, and above all community vibrancy and purpose are very high up on many Christian’s perception of what constitutes evidence of veracity: that is, for many Christians the existence of a faith community that knows what it believes further encourages faith and thus faith is self reinforcing. (I am critical of using faith to justify faith but that is by the by). What is important to note here is that it reveals why the atheist campaign, with its use of the word ‘probably’, looks so weak to many Christians. In the eyes of many Christians no group with a vibrant community ethos could advertise itself so weakly. If the idea of the campaign is to convey that one shouldn’t be preachy why even bother to preach that? How can such an incoherent message be put out by a vibrant purpose driven community? Ergo, the message Christians are getting is that the community dimension of atheism is bankrupt.
The other thing perceived by vehement Christians is that atheism has nothing to celebrate, no object of celebratory focus. OK so there is no God. Fine. But we need something else to celebrate and to be the focus of our community. What will that something be? Atheist attempts to find a focus for celebration have sometimes gone horribly awry. They have created quasi-religious objects that have been used to oppress such as the Maoist and Stalinist personality cults or fantasies about a social utopia to be ushered in by the triumph of a highly idealised notion of the working class. It is perhaps no surprise that Theravada Buddhism has become popular amongst westerners who reject the notion of God but still hanker to reconnect with something spiritual. But unless one is to become a Buddhist monk this is far too individualistic for the community ethos.
The evangelical Christian cannot think about his/her joys and worries apart from his/her object of celebration and the community in which that celebration takes place. (S)He may not be able to articulate it, but instinctively the simplest Christian will see the pathological logic in a slogan that first suggests the object of his/her community celebration doesn’t exist, and then tells him/her to stop worrying and enjoy life! What will seem even more perverse is that the whole slogan is conditioned by a mere “probably”. Not only does that appear inconsistent with the rancor and militancy of some forms of atheism, but to the Christian who finds it difficult to think in terms other than a 100% conviction the message is farcical: “So these atheists are telling us to give up a celebrating community that brings joy and addresses worries merely because they think God probably doesn’t exist? Why don’t they come and join us? We know there is God. We know He brings joy. We know He shoulders our burden of worry.” Isaiah 53:4: “Surely he has borne our griefs and carried our sorrows”. The PR people at atheism central have really got their work cut out if they want to compete with this. They’re going to need all the “probably” they can get.
Finding a rationale for community celebration and its concomitants of purpose and vibrancy is, it seems, the biggest problem for atheism. It’s no good just telling everyone there is probably no God, because when everybody believes there is probably no God, what next? This is atheism's major ‘theological question’, a question that parallels the theist’s problem of pain in that both tend to generate subtle and convoluted answers. Nietzsche’s death-of-God theology led him to posit his concept of eternal recurrence, which enabled him, in spite of the death of God, to escape nihilism by the skin of his teeth, say ‘yes’ to life and once again celebrate it. But for the man in the street this is unlikely to cut much ice, and so atheism continues to teeter on the brink of nihilism’s abyss. A candidly frank atheism has to admit that in the final analysis there is tragedy at the heart of the human condition. Courageously acknowledging this tragedy and having the strength and imagination to face up to it and make the best of it is about as spiritual and hopeful as it gets in atheism. Either that or one adopts a self-mocking jocularity that tries not to take the whole thing seriously – such as we see in the atheist poster campaign. As Morpheus said to Neo in The Matrix, atheism only claims to offer the truth. But is it even doing that? Atheism’s difficulties and obscurities over purpose, meaning, epistemology, ontology and above all community ethos provide little grip on the anti-foundationalist slippery slope down into individualism and postmodernism. Little wonder that the poster campaign was so muted.
Atheists like my fellow blogger Larry Moran often liken theism to a belief in Father Christmas. Although I have never admitted it to the good Professor, there is in fact a compelling point here. Father Christmas, commercialism apart, is for children a very life-affirming character. For many children he contributes to the warm glow and magic of Christmas and therefore provides a focus of celebration and a reason to say ‘yes’ to life. With this parallel in mind it could be plausibly maintained that belief in a kind of Divine Cosmic Patriarch is one way the human mind copes with and bypasses the social and conceptual difficulties introduced by atheism, difficulties to do with how the mind gets its purchase on reality and conundrums about community purpose. Religion, the opium of the masses, is a way of protecting the innocent from thoughts of a cold dispassionate world out there, knowledge of which threatens to blow the mind. But this theory actually cuts both ways and is also a danger to atheism: it really does suggest that should the God-shaped hole be filled, if only with a myth, it can contribute beneficially to a community’s peace of mind. Even when there is no peace between communities driven by different mythological stop-gaps, a sense of purpose, hope, social cohesion and destiny is present in opposing communities; that’s why religious wars can be so polarized, fanatical and vicious.
As for myself I was never brought up believing in Santa: my parents always made it clear to me there was no such figure and that it was only a fun game. My mother is a believer and my father would like to have been a believer, but he could never raise the faith. Hence, on count one, I never faced the disappointment of discovering Santa to be a comfortable lie, a discovery that could readily serve as an analogous model to be ported over to religion. On count two, I never had to face the social pressures of a community with a self-supporting belief. So for me the choice of atheism or theism was always a choice, always a matter of investigation, exploration, seeking, pilgrimage and a quest to find the primary explanatory object that sources the cosmos.
I have come across Christians who were once true atheists and who have become as convinced of their Christianity as they once were of their atheism. These are the sort of people who don’t do things by halves and champion their latest cause with almost sanguinary zeal. It is surely significant that the ex-atheists I have met interpret positive affirmation and strong conviction as a sign of integrity and may criticize anything less as lacking in authenticity. Conversely I suspect you will find true believers who have swapped to true atheism who are as all-out for their atheism as they were for their Christianity (Jonathan Edwards?). Some Christian zealots admire the sheer conviction of the true atheists, perhaps sensing a deep kinship. As one true believer said in a comment probably directed at myself: “Our atheist friends … show more conviction than most believers, what has happened?”
It is one of my many pet theories that at the opposite ends of the belief spectrum many atheists and believers have telling commonalities in their mindsets: the ontology of some versions of atheism looks suspiciously like an inverted version of Gnosticism; the Gnostic believes salvation comes when sublime particles of spirit are freed from the corruptions of profane matter. For the atheist it’s the other way round: secular salvation comes when reactionary and residual superstitions about the supernatural haunting the interstices of matter are exorcised with profane reason. Both parties see the cosmos through an implicit dualism that divides the cosmos into configurations of insentient gritty matter pervaded by a mystical ‘supernatural’ spiritual world. Whilst the atheist by definition declares the epistemological intractability of the latter to be tantamount to nonexistence, he may yet retain the dualist’s notion of a gritty insentient matter.
Dualism’s sharp distinction between the two categories of materialism and spiritualism cries out for the latter’s immaterial existence to be challenged. But although the single category of a one-substance ontology is elegant it too provides no guarantee against epistemological intractability. Conventional science currently creates its explanatory structures from two classes of object: 1. Mathematical laws of relative algorithmic simplicity (This covers chaos as well as the non-chaotic) or 2. Configurations of high disorder that admit statistical description. Both of these objects are mathematically tractable from a human point of view*. However, in the infinite region between the high order of elementary algorithms and the monotonous complexity of maximum disorder there are undoubtedly mathematical objects of unspeakable complexity and size that are well beyond the capability of the human mind to handle. It’s no surprise then that we are not using them as explanatory structures. If such exotic objects should be the deeper explanation for the cosmos their mathematical intractability would also imply an epistemological intractability. However, some people might advise us that as there are probably no such objects, we should stop worrying about it and have a happy Christmas. Disbelief, as well as belief, is also a way of protecting the innocent.
* Footnote.
At one level high disorder actually betrays the existence of epistemic intractability: hence the use of probability.
Thursday, December 04, 2008
Probably the Worst Poster Campaign in the World
As I have already suggested, the atheist bus ad campaign has somewhat played into the hands of the Christian community. The December edition of Christianity magazine reports on the clash between a committed, life-affirming and self-affirming philosophy and the inevitable non-committal nihilism of atheism as follows:
Christian think tank Theos made a £50 donation to the campaign. Paul Woolley, director of Theos, said: “We donated the money because the campaign is a brilliant way to get people thinking about God. The poster is very weak – where does ‘probably’ come from? (Editor: I told you they would probably laugh at ‘probably’!) Richard Dawkins doesn’t ‘probably’ believe there is no God. And telling people to stop worrying is hardly going to comfort those who are concerned about losing jobs or homes in the recession, but the posters will still prompt people to think about life’s big questions. Campaigns like this demonstrate how active atheists are often great adverts for Christianity.”
Rev Jenny Ellis, spirituality and discipleship officer, said: “We are grateful to Richard for his continued interest in God and for encouraging people to think about these issues. This campaign will be a good thing if it gets people to engage with the deepest questions of life”.
Like the probabilistic agitations of quantum mechanics which abhor utter emptiness, the restless human psyche probably cannot unthink the God concept and therefore God, if he probably doesn’t exist, is conspicuous by His apparent probable absence. The true atheists are those who are utterly unconscious of the putatively probable absence of God, as perhaps animals are. Likewise we aren’t aware of the blind spots in our eyes because there are simply no neurons in those spots to complain about the absence of input and therefore there is no consciousness of the retinal hole. Christians will therefore welcome a group of people who are so conscious of the cosmic sized “God shaped hole” that they shout loudly about its probableness from the sides of buses traveling around London! No wonder Christians are not merely probably financing the project but have actually put some money in! Hahahahahaha!
My advice to all good atheists is: get religion and then you can really get in there and start exposing the irrationalities of religion from the inside. Christianity, and religion in general, is a self-affirming crowd phenomenon where belief, commitment and vibrancy are their own evidences. Atheism by definition cannot attempt to emulate this. As the Christians of old said, “We can out-think you, we can out-live you and we can out-die you!” The atheists probably can't do that!
Thursday, November 27, 2008
The Ghost and The Machine
James Knight, the Network Norwich columnist, asked me the following question. Posted below it is my reply.
Do you think the cosmos is platonic just in the mathematical sense or in another way too? I'm just making sure that when we speak of the cosmos we are both using platonic in the same terms. How are you using it?
Hi James,
The following answer to your question impinges upon some issues that I have been pondering for years: in particular, why is it that in our culture the “irreducible intuitive” is so often pitted against the “reducibility of mechanism”? This theme I see in almost everything: from H. G. Wells’ “The Time Machine”, where the Eloi are pitted against the Morlocks, through the Cartesian ghost in the machine and ‘left brain’ versus ‘right brain’ traits, to charismatic versus non-charismatic. This seemingly irreconcilable dichotomy has now consumed my theoretical deliberations for many years and constantly makes unexpected appearances in my writings (see this link, for example). Here is my attempt to address this issue. It is in fact a very pressing matter, because for many years I have been very alienated from evangelicalism. I have put that down to a swing in mainstream evangelicalism toward ‘right brain’ faith expressions, and this has become the de facto version of Christianity in some quarters. There has, in my view, been a consequent loss in authenticity and this has threatened my faith. So, the stakes here are very, very high and I find myself defending my faith from other people with faith. Paradoxically I don’t find atheists anywhere near as threatening!
The short answer to the question is: I use the word ‘platonic’ to refer to the world of mathematical constructions and models. These constructions are explored with the likes of number theory, geometry, set theory, computational theory etc. The most salient feature of this world is its debatable ontological status; it seems to be a world of possibility rather than actuality. Many (if not all?) objects in the cosmos can be modeled using a subset of platonic mathematical constructions isomorphic with them. Cosmic objects are platonic in as much as they may be isomorphic with mathematical objects. A much longer answer is probably necessary when one realizes that there are some tough conundrums here.
At first sight the fundamentals of mathematics are disarmingly minimal, undemanding of an elaborate physics to host them. Take for example the Turing machine: it seems to ask for little more than two discrete sequences (the tape and procedural steps) and a state transition diagram (=software). From this simple model the whole of mathematics seems to open up. Many versions of material reality could host such a simple machine and its computational equivalents, and therefore there seems to be no lack of clarity, and no mystery, in trying to conceive of mathematics: it is devoid of that ‘right brain’ mystique; it is, seemingly, the progeny of the ‘left brain’, a paragon of mechanism.
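To make that minimalism concrete, here is a small toy sketch of my own (the program, the state names and the unary "increment" example are all invented for illustration, nothing more): a Turing machine reduced to a sparse tape, a head position and a lookup table of states.

# A minimal, illustrative Turing machine: tape + head + transition table.
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Illustrative program: append a '1' to a unary string (i.e. compute n + 1).
increment = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 at the end and stop
}

print(run_turing_machine(increment, "111"))  # -> "1111"

The point is simply how little machinery is being asked for: a tape, a head and a table; everything beyond that is bookkeeping.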
But the self-referencing intricacies and enigmas come in thick and fast once we get reflexive. For a start, if we allow the Turing machine to analyse its own mathematics (meta-mathematics) up pop Gödel’s theorem and Turing’s halting theorem. Also there is this question: Is the concept of mathematics intelligible without at least a minimal physical world able to host the mechanical reifications of its computations? Is there truly an independent platonic world that mathematics inhabits irrespective of the existence of a material ontology? And where does the mind fit in all this? Is the activity of an apparently ‘mindless’ mechanism of elementals, such as we find in a Turing machine, the essence of mathematics? Or does mathematics only exist by virtue of a preexisting mind that can conceive it?
Mathematics appears to transcend a particular material instantiation of its objects, whether that instantiation is a Turing machine or some other model of computation. The objects of mathematics can in principle be instantiated on a variety of media ranging from Searle’s beer cans to silicon chips. Therefore the essence of mathematics is to be found over and above material instantiation. Mathematics is about abstraction from material reification; it is about classes of activity and pattern, and these things are not necessarily tied to a particular substantive realization. Abstraction, class and pattern are intelligible only as pure concepts inside an up-and-running mental context which can then handle them.
These are difficult issues, but for a theist their resolution is likely to be bound up with the concept of Divine Aseity (see also your concept of absolute reason). Like you I favour the view that mathematics betrays the a-priori and primary place of mind; chiefly God’s mind. The alternative view is that gritty material elementals are the primary a-priori ontology and constitute the foundation of the cosmos and mathematics. But elementalism has no chance of satisfying the requirement of self-explanation, as the following consideration suggests: what is the most elementary elemental we can imagine? It would be an entity that could be described with a single bit of information. But a single bit of information has no degree of freedom and no chance that it could contain computations complex enough to be construed as self-explanation. A single bit of information would simply have to be accepted as a brute fact. Aseity is therefore not to be found in an elemental ontology; elementals are just too simple.
In the search for Aseity elementalisation leads to an ontological dead end because elementals have a lower limit complexity of one bit, a limit beyond which there is no further room for logical maneuvering that could resemble anything close to self explanation. In contrast complexity has no upper limit and hence if Aseity is to be found at all, it must reside at the high end of logical complexity, perhaps at infinite measures of complexity with some kind of reflexive self affirming properties, such as we find in your “there is one true fact” example.
Like you James I’m attracted to Berkeleian idealism and/or a phenomenological philosophy, because taking sentient complexity as the fundamental given seems to provide a better chance of solving the philosophical conundrums over the nature of the mathematical abstractions, self explanation, and consciousness. However, I can find no necessary objection to the idea that sentience, particularly Divine sentience, may be able to engage in some kind of mathematically reductive self description; but in doing so such description would be no more than sentience describing itself in terms of its own ontology; something similar happens when a programming language is used to write its own compiler. As you say, “…personality is not something that we can turn on itself and identify outside of the layering we put in. I think personality is too big for such isolated imputations.” If I understand that correctly then yes, personality cannot be described with something beyond itself; but in the final analysis personality, particularly God’s personality, may be big enough to cope with its self description in terms of its own ‘substance’. So sentience is at once both reducible and irreducible. Reducible because it may be mathematically reducible, but irreducible in that it cannot be reduced to an ontology other than itself. This may help satisfy the twin but seemingly contradictory intuitions of the reducibility and irreducibility of sentience.
There is often distaste for the idea that somehow reductive descriptions of sentience are possible. This distaste may result because our self-conscious first person ontology, something which is very sacred to each of us, is trivialized if a reductive description of sentience is used as a Trojan horse to smuggle in a profane materialist ontology. It is one thing to attempt a reductive description of sentience in terms of the cognitive artifacts of sentience, but it is entirely another to surreptitiously swap a first person ontology for an elemental materialist ontology whilst attempting to carry out this reduction. A descriptive reduction is an entirely different thing from an ontological reduction. In any case I would question the intelligibility of the whole notion of a gritty “material” cosmos “out there”: if I am right then the fundamental particles of the cosmos are not solid little quarks or strings but cognita. Quarks and strings demand a complex mathematical context to be intelligible. The philosophical problems in this area seem to result from an attempt to relate incommensurables; mind and matter. My own opinion is that one or the other has to go, and since ‘material’ noumena are far less real than the first person experience it is the former that has to go.
I have always had grave doubts about the intelligibility of an ontology of “material” elementa pictured to be lurking out there somewhere beyond sentience. It is surely ironic that many Christians are at one with many atheists in picturing such a conception. It is ironic that the default Christian folk philosophy is that of a “materialism plus” ontology – that is, a basic off the peg materialist ontology is supplemented with a “spiritual world” of demons, sprites, angels and of course God himself, all of which haunt the interstices of our gritty earthly reality in the manner that the human “spirit” is supposed to haunt the human body: the ghost in the machine. This is of course Cartesian dualism. For theists dualism actually leads to a tripartite reality: 1. God. 2. The Spiritual World 3. Matter. It all smacks of the classical Gnostic view of particles of spirit somehow trapped in a profane material world, a world that owes its creation to a demiurge; after all, the feeling goes, how could a perfect spiritual God have anything to do with a world of grimy matter? In the light of this default philosophy it is no surprise that Christians across the board are so utterly alienated from their world and are retreating into the mysteries of the “right brain” and the mysteries of the inner self where the unaccountable machinations of intuition replace mechanism. There follows the great Christian cop out from having to account for itself by simply declaring “It’s all in the heart”. In its more extreme expressions salvation for the Christian Gnostic is an escape from the ‘evil’ material world through states of altered consciousness.
Christian dualists are never far from atheism; they hover over the abyss of atheism. If for some reason their concept of a haunted reality should betray them and they react against it, they find a profane materialism purged of sacredness ready to welcome them. And the betrayals do happen: crises in leadership, failure of their religious paradigm to materialize in the form of prophecies, blessings, healings and revivals, and the whole creaking show patched up by a bullying authoritarian leadership, well versed in spiritual spin.
I don’t accept a three-substance or even a two-substance cosmos: I am striving for an integrated vision, not the horribly fragmented vision of contemporary Christian Gnosticism that has led to the incompatibilities between heart and mind, right and left brain, intuitive Christians and analytical Christians. But an integrated one-substance vision is not the same as pantheism. Ultimately what distinguishes substances is differences in logical configuration, and configuration is about pattern, abstraction and classification and therefore about mathematics and therefore about mind. And so a one-substance vision, when looked at more closely, is capable of resolving itself into a multi-category, multi-substance vision.
There is, in fact, one very fundamental category division to be found in this one substance vision. We are patterns of mind stuff, but of an entirely different genus to God himself. We and our cosmic context seem to be in that part of the mathematical spectrum that counts as mere possibility: we are too simple as logical constructions to possess the property of Aseity. Our patterns of sentience have no necessary existence and it is this that distinguishes us sharply from the substance of Deity and Aseity. So in one sense the greater cosmos is composed of two very different substances: God, the sentience that necessarily exists, and everything else created ex-nihilo and sustained at His pleasure.
Monday, November 17, 2008
Heideggerian Artificial Intelligence Part 2
After reading Hubert L. Dreyfus' paper on Heideggerian Artificial Intelligence I was left with many impressions and thoughts, but amidst it all I had the feeling that he is onto something. Dreyfus is a philosopher and is thus inclined to speak in very general, abstract and impressionistic terms. Take this key comment by Dreyfus, for example:
Rather, acting is experienced as a steady flow of skillful activity in response to one's sense of the situation. Part of that experience is a sense that when one's situation deviates from some optimal body-environment gestalt, one's activity takes one closer to that optimum and thereby relieves the "tension" of the deviation. One does not need to know what that optimum is in order to move towards it. One's body is simply solicited by the situation [the gradient of the situation’s reward] to lower the tension. Minimum tension is correlated with achieving an optimal grip.
I think I understand that: the goals one aims for are not literally envisaged by our minds but are implicit, in that one is less aware of goals than of how one is supposed to move toward them. The situations confronting us cause a response, but not necessarily by way of an internalized model that allows us to envisage the goal of that response. However, if Dreyfus’ ideas are to be realized in hardware and software, how does one reduce terms like “optimal body-environment gestalt”, “tension”, “soliciting” and “optimal grip” to bits, bytes and “if-then-elses”?
From the outset I was immediately attracted to Dreyfus’ phenomenology: phenomenology is a philosophy founded in the realization that our experience of the world and our thoughts about it are to all intents and purposes the extent of our cosmos. This existential philosophy sidesteps the horrendously intractable Cartesian problem surrounding the ontological distinction (if any) between noumena and cognita by positing conscious cognition as the effective center of our cosmos. From Dreyfus’ phenomenology follows his starting observation:
[T] he meaningful objects ... among which we live are not a model of the world stored in our mind or brain; they are the world itself.
Dreyfus’ starting point makes for a huge economy in his version of AI: a complex model of the world does not need to be carried around in our heads when in fact our experience of that world, delivered to our brains by our senses, will probably serve far better: all we are asked to do is to react to that perceived model and not to exhaustively envisage it. The real world is in effect our 'core memory' and we are but the depository of the neural 'algorithm' that tells us how to react to the contents of that 'memory' as we move around our world. In particular, if the world itself is our model then we don’t have to internalize a model of how it reacts to our actions. In principle our actions could have vast and ramifying effects on the rest of the world, and a comprehensive internal model of reality would have to include the logic required to model these knock-on effects; the problem of trying to somehow cater for the possibility of these escalating effects is called the “frame problem”.
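To make the contrast concrete, here is a toy sketch of my own (not Dreyfus'; the "warm spot" world and all the names are invented for illustration): an agent that keeps no internal copy of its world at all, but at every step simply re-senses the current state and reacts to the perceived gradient, so knock-on effects never have to be modelled in advance.

# A toy illustration of "the world as its own model": no stored world model,
# just a direct reaction to what is sensed at each step.
import random

class World:
    """A one-dimensional world; the agent seeks the warm spot."""
    def __init__(self, size=10):
        self.size = size
        self.warm_spot = random.randrange(size)

    def sense(self, position):
        # What the agent perceives: just the local "temperature gradient".
        return self.warm_spot - position

class ReactiveAgent:
    """Behaviour is a direct response to what is sensed, nothing is remembered."""
    def act(self, sensed_gradient):
        if sensed_gradient > 0:
            return +1   # move toward the warmth
        if sensed_gradient < 0:
            return -1
        return 0        # already at the optimum: "minimum tension"

world, agent, position = World(), ReactiveAgent(), 0
for _ in range(20):
    position += agent.act(world.sense(position))
print(position == world.warm_spot)  # True: the optimum is reached without a model

However crude, the sketch captures the economy Dreyfus is pointing at: the world does the bookkeeping, and the agent carries only the rule for reducing 'tension'.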
Dreyfus looks to be a fairly abrasive character and uses general terms that can be very slippery. It is easy to misinterpret him, and anyone who tries to use Dreyfus' ideas and turn them into something workable is probably taking a similar risk to those who attempt to articulate the meaning of the Holy Trinity and open themselves up to charges of heresy. Dreyfus is a hard task master. He is AI's prophet of doom and, as is the prerogative of prophets of the infinitely complex, he tends to work apophatically; that is, he is much clearer about what human intelligence is not, rather than what it actually is. Perhaps this is a good thing because there has been so much hype and over-optimism in AI that it cries out for a judgmental preacher. It is very difficult to do justice to the infinitely complex, and the scientific equivalent of charges of blasphemy and idolatry, as humans attempt to create images of their own selves, reminds us not to be too complacent about progress in the face of simplistic and unrepresentative models. Did I just say ‘model’? Aren’t they the things that Dreyfus says we shouldn’t be using?
My own guess is that human intelligence solves the frame problem in a plurality of ways. The ‘absorbed coping’ that Dreyfus talks about may well be found in intelligent organisms like ourselves; his view is that such organisms are dynamic systems coupled to their environment via stimuli which are not processed using representations and models, but these stimuli succeed in ‘soliciting’ the right responses without the use of representations and models. It is likely that humans have inherited this computationally economic modus operandi. And yet it seems to me that humans also appear to model the world computationally in the internal Cartesian sense: Humans can and do reflect on ‘external systems’ and can anticipate their behavior without coming into contact with them and being prompted by them. However, often this reflection may make use of pencil and paper jottings and various external contrivances that help prompt thinking, thus betraying the roots of human intelligence in organisms coupled dynamically to their environment. If the human mind does do symbolic modeling it may not actually be very good at it as a standalone system.
There is one other characteristic of the human mind suggesting that “Dreyfus is right but....”. In a connectionist model of the mind, everything is connected to everything else through pathways that may be no longer than ~ Log(N), where N is the number of neurons in the brain. Thus when attention is focused on one activity, like say language translation, the whole of the mind’s accumulated connectionist experience is never far away in access terms and thus the whole domain of an individual's experience can be brought to bear on a problem. The mind has the potential to use the widest frame available to it and there may be no artificial frame or relevance boundary arbitrarily drawn within the domain of one’s experience. In short the human mind may not even attempt to solve the frame and relevance problems and instead throws all its knowledge resources at the situations it meets. Learning never stops and the human mind is therefore always placing contexts within new contexts. Problem solving truly is an open-ended activity.
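As a rough numerical illustration of what ~Log(N) buys (the figures below are commonly quoted ballpark estimates which I am assuming purely for the sake of the sketch, and the random/small-world path-length formula is only an idealization), even tens of billions of nodes are only a handful of hops apart:

# A back-of-envelope check on the ~Log(N) claim (illustrative figures only).
import math

N = 8.6e10   # an often-quoted estimate of neurons in a human brain (assumed here)
k = 7000     # an assumed average number of connections per neuron

# In random/small-world graph models the mean shortest path grows roughly as
# log(N) / log(k): even for ~86 billion nodes this is only a few hops.
print(math.log(N))                 # natural log of N      -> about 25
print(math.log(N) / math.log(k))   # rough path-length estimate -> about 3 hops

On such a reading nothing in the connectionist store is more than a few steps from anything else, which is what makes the "widest frame" so cheaply available.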
Human intelligence uses three modi operandi:
1. Heideggerian: Human intelligence uses the world itself as its own model.
2. Modelling: Human intelligence has a mental facility to model, but that facility betrays an inheritance from 'Heideggerian' organisms by making frequent use of 'external' world contrivances such as pencil and paper.
3. Isotropy: Human intelligence, via connectionism, does not attempt to impose any a-priori limitations on what knowledge resources are relevant to a situation. That is, it doesn't attempt to solve the frame problem.
Wednesday, November 12, 2008
Heideggerian Artificial Intelligence
Here is a link to a paper on Artificial Intelligence sent to me by Stuart. I’m still absorbing the contents of this paper, but in the meantime here are a few lines of Hotmail conversation that I had with Stuart whilst I was still reading this paper.
Timothy says:
I’m hoping to do a blog entry on it (the paper)
Stuart says:
Yes. It is an interesting paper
Timothy says:
Stuart says:
Better towards the end when he discusses Freeman(?)'s stuff
Timothy says:
I'm on that bit now
Stuart says:
Yeah. AI has been a bit of a failure. Which is why the phenomenological perspective in HCI and AI has been very important
Timothy says:
I agree. AI was even starting to look a failure as far back as the seventies. In the 60s I swallowed the message that we would have HAL machines by the end of the millennium, but then I was only 14
Stuart says:
Haha. Yes. You can be forgiven. I think Dreyfus is a bit harsh but unfortunately he's right
Timothy says:
Looks like it (Editors note: well, we shall see!)
Stuart says:
Even simple things like computer vision end up facing the solve-all-AI problem. Divide-and-conquer research strategy doesn't work. And perhaps we can argue that this is present in other disciplines
Timothy says:
Yes everything taps into a myriad associations. Similar with language translation: you can't translate everything without an enormous cultural knowledge.
Stuart says:
Well indeed
I must admit that Stuart’s comment about the divide-and-conquer strategy failing gave me a slight attack of the jitters: does it mean that incremental evolution can’t evolve intelligence in a piecemeal, step-by-step fashion? ID here we come? In fact, what of our own ability to solve problems, given that we have a limited quantum of intelligence? It may well be true that certain problems are insoluble given a limited 'step size', whether that step size is limited by random walk or by human capabilities. However, whether or not it is possible to solve any problems at all depends on the existence or otherwise of those “isobaric” lines of functionality conjectured to run continuously through morphospace. In the case of biological evolution those lines must be lines of self-sustaining (= stable) functionality.
Monday, October 27, 2008
Bus to Nowhere
In this article in the Spectator, author Melanie Phillips quotes Richard Dawkins saying “A serious case could be made for a deistic God”. Dawkins made this statement at the beginning of his second debate with Oxford mathematics professor John Lennox. Reading between the lines it seems that this was a defensive reaction to the first debate (which I haven’t seen) where the uncompromising Dawkins came head to head with someone who has a strong grasp of philosophy and science and who would very likely expose the loopholes in absolute atheism. Not surprisingly, in the second debate Dawkins was cagier about his absolute atheism and instead went on to attack the much more vulnerable specifics of faith. When faced with someone like John Lennox the turkey shoot is over for Richard Dawkins.
However, Dawkins' statement sits well with the atheist bus poster campaign projected for the New Year, which uses a slogan reading: "There's probably no God. Now stop worrying and enjoy your life". “Probably” no God? I read the subtext as: “We might just possibly have got it wrong, but we don’t think there is a God”. The redneck fundamentalists, for whom the very certainty of belief and experience is the clinching exhibit-A evidence for God’s existence, will laugh that one to scorn! I almost feel sorry for the atheists, who by definition can’t match the fundamentalists in convinced fervor! Zealous evangelism and atheism simply don't mix! They are not going to be able to take on the vehement believers by playing them at their own game: if anything this rather lame slogan campaign will play into the hands of those believers for whom uncompromising conviction is evidence of veracity. Even the relatively moderate Methodists are welcoming the atheist campaign as at least putting the concept of God on the agenda; once one entertains the God concept, even just as a remote possibility, one is halfway to faith - all that remains is for the “God meme” to be switched on! These atheists haven't got the foggiest idea about how the religious mentality works, least of all that of the fundamentalists.
The trouble with atheism, or at least a fair-minded and reasonable atheism, is that in the final analysis it is a self-referencing conceptual object that doesn’t allow certainties, including certainty about itself. It’s a bit like the ground state in quantum theory. Quantum mechanics prohibits the absolute emptiness of nothing: on either side of nothing there is a cloud of matter and antimatter! For absolute atheism the only way is up. The tentativeness of human knowledge demands that one can’t declare that God doesn’t exist, only that he probably doesn’t exist. Absolute atheism has only one state: complete disbelief. Theism on the other hand has many states. If you are an absolute atheist there is nowhere else to go other than into one of the various states of belief … or perhaps a superposition of several states of belief!
Sunday, October 19, 2008
Absentee Deity
Here is an intriguing phenomenon: two atheists have popped up in a discussion thread on the Christian web site Network Norwich, one of whom created the thread and dubbed it “Religious Twaddle”. He then went straight in with all guns blazing declaring the backwardness and irrationality of religion. Not surprisingly he got a rather visceral response! Depending on how or if they respond to my own rather late-in-the-day input, it appears that we have here two atheists who, like many theists, see the question of God’s existence swinging very much on the subject of Divine "intervention": That is, these atheists believe there is no evidence of such "intervention" and that in any case the "proven laws of science" obviate any intervention or involvement by God. Hence, it’s then down to the "interventionist" believers (and there are plenty about) to provide evidence that God does intervene every now and again and punctuates reality with "supernatural" events. Presumably this anthropomorphized category of God, a category deeply rooted in the mind of believer and unbeliever alike, is of a God who relies on the laws of science to run the cosmic show when He is absent on holiday, sleeping or has to use the convenience. Presumably these atheists believe that since the number of well documented "interventions" is close to nothing, then absence of evidence of God is evidence of absence and you can't be more absent than not existing. Shucks, I never thought of that one.
Sunday, September 21, 2008
The End of The Logical Line
It has long been clear to me that the conceptual artifacts of science are a means of pattern description; some patterns are simple and can be reduced to elegant equations and other patterns, like say random sequences, are complex and remain as unreduced data, constrained only by statistical description. The two objects of elegant equations and statistics are beautifully joined in the composite of Quantum Theory.
The noumenological/ontological status of descriptive theoretical artifacts is philosophically problematic, but one thing the subject of computation has made clear is this: the pattern description of science classifies as a form of data compression. This data reduction/compression is obliged to stop at some point with an incompressible kernel of ‘brute fact’.
Given that computation has provided such a clear insight into the process of scientific pattern description one is left wondering whether science really ‘explains’ anything at all in an absolute sense, if indeed ‘absolute explanation’ is an intelligible concept. In science we are in effect merely discovering how to compress myriad diverse potential observational protocols into elegant theoretical descriptions.
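A small sketch makes the compression analogy concrete (this is only an illustrative toy of mine, with a general-purpose compressor standing in for theory-making): a law-like sequence collapses to a short description, whereas a disordered sequence is left as an incompressible kernel that admits only statistical description.

# Scientific "pattern description" behaves like data compression (toy illustration).
import zlib, random

ordered = ("01" * 5000).encode()                                 # simple law: repeat "01"
disordered = bytes(random.getrandbits(8) for _ in range(10000))  # no pattern to exploit

print(len(zlib.compress(ordered)), "bytes")      # tiny: the 'elegant equation' case
print(len(zlib.compress(disordered)), "bytes")   # ~10000: only statistical description left

The exact numbers vary from run to run, but the ordered string compresses to a few dozen bytes while the random one stays essentially full length – the kernel of 'brute fact' referred to above.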
The data compression embodied in a theoretical artifact feeds the intuition that with a reduction in mathematical complexity there comes a concomitant reduction in mystery. Theoretical constructs, it seems, are converging toward a narrower and narrower ‘mystery gap’. In fact a naive and erroneous extrapolation might suggest that the ultimate conclusion of our theoretical endeavors will be a description string of zero length, thus ending all mystery! Of course, this is mathematical nonsense; an irreducible descriptive kernel will always remain. But for the philosophically naive the reduced ‘strings’ of theoretical science look like small logical gaps that may one day be eliminated completely. Hence naive atheism believes the squeeze is on for the naive God-of-the-gaps theists. And yet here is the irony: naive atheists and naive theists think in exactly the same categories: viz. that science is a logical-gap-reducing process. However, unlike the naive atheists who wish to eliminate the apparent logical gaps, the naive theists yearn for irreducible gaps as the savior of faith, whether in the form of irreducible complexity or as the ‘in yer face’ gaps of miracles. The naive atheists seek to minimize the gaps, whilst the naive theists do all they can to either retain or maximize the gaps. And yet both intuitively seem to agree on one point: namely, the view that theoretical explanation renders Deity, or more precisely Aseity, redundant.
The view is held by both naive atheism and naive theism that a protocol element explained within a theoretical context somehow reduces its burden of contingency. But in an absolute sense the complexity of the phenomenon remains: at least in terms of the number of existing protocol elements, which remain the same, albeit described with an elegant mathematical object. The compelling philosophical gut feeling that Leibniz's principle of sufficient reason is hiding somewhere isn’t satisfied, and on that issue success at elegant pattern explanation takes us no further forward.
It is a telling irony that someone such as myself is likely to be accused of being a deist by both theists and atheists: the reason for this accusation is that both parties have the same philosophical categories. Both parties hold the view that explanatory structures reduce the burden of contingency and hence conclude that a well explained phenomenon, like say the evolution of life, may be thought of as serving notice on any deity (or aseity) with the job of managing and sustaining it.
But to confuse our theoretical artifacts with ultimate explanation is a bit like saying that the elements of a computation are created and sustained only by their program. The program is only a means of describing the computation – there is of course the much deeper background reality of the hardware which creates and sustains the individual computation events. And so as far as I am concerned the hunt for Aseity goes on.
Tuesday, September 16, 2008
If this story (whose link I picked up from Uncommon Descent) is correctly reported, then it's a sample of what some members of the Royal Society will do to an evolutionist who makes the very mild suggestion that it might be a good idea to give at least some space to rebuffing creationist concepts in class should they be mooted by pupils. The evolutionist in question is Professor Michael Reiss, who said that creationism should be discussed in science lessons if pupils raised the issue. The subsequent controversy, partly resulting from Reiss' words being misinterpreted as a recommendation to teach creationism, led to his resignation.
It's all so horribly reminiscent of the inquisition and the fanatical ferreting out of heretics on the slightest whiff of heresy. In this context the ID community’s rubric “Expelled” doesn't seem so far from the mark. I may not (yet) agree with their main thesis, but I’m behind Uncommon Descent's criticism of the Royal Society. This isn’t science; this is politics; no, make that 'this is religion'.
The Pharisees of Science: Good and fair minded science is crucified at the Royal Society
They strain out a gnat but swallow the camel (Mat 23:23ff)
Sunday, September 07, 2008
Nebulous Notions
Here’s an interesting book: “The Cloudspotter’s Guide” by Gavin Pretor-Pinney. Well written as well as entertaining, this book is working against the twin prejudices of the commonplaceness of clouds and the bad press they get. The book has no truck with sayings like “I’m under a cloud” and does an excellent job of introducing the silver lining of the subject. Ever since reading it I’ve been looking up at the sky with renewed awareness and increasing understanding as I have honed my cloud categorization skills; if indeed something as woolly and varied as clouds lend themselves to categorization. As is often the case naming and categorization brings new recognition sensitizing one to a world often looked at but never really seen. In order to change my perspective I have tried to see cloudscapes as if I am looking down on them rather than up at them. This book not only opens up an understanding of clouds but also an appreciation of the beauty of the prosaic as it comfortably mixes art, science and adventure in one volume.
We may think clouds to be too banal a subject to lead to profundities, yet for Pretor-Pinney it occasionally leads into some very deep waters indeed:
1. Like global warming….
According to Pretor-Pinney, “Clouds are the wildcards in climate change predictions.” Clouds blanket as well as reflect heat and so have competing effects on global warming. Moreover, according to Pretor-Pinney, “..we are so ill equipped to anticipate what a rise in global temperatures would mean to the nature of cloud cover”. In the face of uncertainty Pretor-Pinney, taking a hint from Pascal's wager, adopts a ‘better safe than sorry’ policy toward CO2 emissions.
These questions over global warming reminded me of the occasional articles about the subject posted on William Dembski’s Intelligent Design blog, Uncommon Descent. Usually these posts are very critical of Global Warming sciences. See for example this post by DaveScott. In the comments section I express surprise at DaveScott’s claim that the CO2 scrubbing effects of precipitation aren’t taken into account in global warming models.
This is not to say that I’m a convert to Uncommon Descent’s jaundiced view of global warming sciences. Actually I’m not going to get into this argument as I’ve got my hands more than full with the ID/evolution issue. In any case I tend to go along with Pretor-Pinney’s better-safe-than-sorry policy*. However, that Uncommon Descent should sing an off-beat tune on global warming is perhaps not too surprising: as a result of their ID views they probably feel marginalized and beyond the pale of the larger scientific community, so perhaps the alienation and distrust they already feel toward the academic community makes it easier for them to re-evaluate global warming sciences and come to a contrary position.
One thing that my dabblings in the ID/evolution debate have shown me is just how far human factors drive the logical facades of scientific endeavour. It seems to me that it is not putting it too strongly to describe many evolutionists and ID aficionados as “converted” to their cause, and crowd factors are never far away: group identification, group protection, those you call liars and those you trust, those you hate and those you admire, those who repel you and those you follow. Above all there are strong vested interests in group worldviews, and in their defence there even arises the old ‘champion’ idea. Dissenting scientists William Dembski and Michael Behe, in their resistance against the evolutionary scientific establishment, rerun the time-honoured and archetypal battle of David and Goliath.
As an academic of conflict studies once put it: Opposing sides often have the feeling that their bastard is much worse than anyone else’s bastard, and so hatreds run deep. Nervous persecuted minorities and even majorities may fancy they see a malign and secret conspiracy behind their particular bastard. But perhaps I’m not immune from fancying I see conspiracy lurking behind the scenes: I am just a little concerned about Uncommon Descent’s connections and whether this impacts upon their view of global warming. Conversely many fear that global warming is a scare story used to excuse political control.
2. Like science versus art…
Using quotes from Thoreau and Keats Pretor-Pinney typifies how so often the poetic/artistic mentality sets its teeth against science:
Thoreau: You tell me [the colouring of the clouds] is a mass of vapour which absorbs all other rays and reflects the red, but that is nothing to the purpose, for this red vision excites me, stirs my blood, makes my thoughts flow… what sort of science is that which enriches the understanding, but robs the imagination?
Keats: Do not all charms fly at the mere touch of cold philosophy? Philosophy will clip an Angel’s wings, conquer all mysteries by rule and line, empty the haunted air, and gnomed mine – unweave a rainbow, as it erewhile made the tender-person’d lamia melt into the shade.
Pretor-Pinney observes humorously:
I can see what Thoreau and Keats mean. But they do sound a bit like the arty kids in class taunting the science nerds. Having opted for all sciences at secondary school, I have painful memories of bullying classmates goading me for emptying the haunted air, and gnomed mine. OK, they might not have quite put it in those words, but the sentiment was the same.
And so the apparently competing polarities of science versus art, analysis versus intuition, cold description versus beauty, demystification versus mystery, knowledge versus feeling, head versus heart are sustained. But why? Why is this polarization such a common theme? Why does the elementalisation that science appears to introduce grate so with the poetic mentality, a mentality that revels in unreduced experience? Why do people hark back to an imagined rustic idyll when life was more instinctual and intuitive? Why are the objects and activity of science regarded as soulless? Why is science’s analytically reduced reality considered so profane in comparison with the unreduced reality of the mystic? Is the analytical left-brain to be forever at odds with the intuitive right brain? These are questions that I think I will leave for another post!
On the last page of his book Pretor-Pinney tells of the ‘right brain’ response of an Australian glider pilot who surfed the morning glory clouds of northern Australia: “Up in the clouds you can’t help have a belief in the creator”, said the pilot, who I suspect would be unable to reduce this compelling intuition into its components. It is surely not a coincidence that the doyens of the intelligent design movement make so much of the concept of the irreducibility of organic complexity as they seek the edge of knowledge, an edge beyond which heartfelt religious intuition is mooted as the guide rather than cold analytical skills (a view I would dispute).
As for Pretor-Pinney, he favours a symbiotic rather than schismatic relation between left and right brain reactions (after all the two halves do live in the same skull and one therefore suspects a complementary relation between such close partners). So let me leave the last words to him:
Cloudspotters will float above these petty divides between science and art – float above them like our fluffy friends. For us there is no contradiction in regarding the clouds in ways that both stir our blood by exciting our imagination and enrich our understanding with ‘cold philosophy’.
I heartily agree. Perhaps having your head in the clouds is not such a bad idea after all.
The web site of the cloud appreciation society can be found at:
* I’m inclined to follow Pretor-Pinney on this, but there seem to be unknowns round every corner. Who knows the perturbing effects on economic realities of the costs of emission efficiency?
Friday, August 01, 2008
Everyone with a theistic worldview is Dawkins-bashing nowadays. I’m not a natural atheist-basher myself: given the human predicament the atheist position is at least plausible and deserves some respect and consideration, especially as much of the religious world is not only crackpot but is so horribly blighted by a mindless bullying hegemony. However, although I’m not a natural enemy of atheists, the impassioned and polarised state of the debate, especially in America, forces one to choose sides. Protagonists like Richard Dawkins seem determined to make enemies of all who don’t subscribe to their take on the subject of Primary Ontology, which to be fair is a highly speculative topic that really demands a tentative, exploratory approach rather than a violent melee of “we don’t take prisoners” crusaders. OK then Richard, have it your way; you’ve got a thoroughly alienated enemy here. Happy now? To this end I reproduce below an otherwise private article I wrote in 1993 that was a response to Richard Dawkins’ article in the New Statesman in 1992. If you can’t beat the Dawkins-bashers, join them! Makes a change from fundamentalist-bashing I suppose!
Knotes on Richard Dawkins' article “Is God a Computer Virus”, New Statesman, Dec 1992
by Tim Reeves 18/9/93
Revised January 1997
1) RELIGIOUS RAMBOS exercising their believing muscles, nervous wimps in religion, wild red-blooded Catholics, and virtuoso believers skilled in the arts of believing the unbelievable are some of the characters one meets in Richard Dawkins' article "Is God a Computer Virus” (New Statesman, Dec. 92). I found the subject matter of the article laughably caricatured and I wasn't sure how seriously the proposition was to be taken. But having heard and read Dawkins in the past I think he is serious, although he takes far from seriously those who are the subject of his thesis. His ideas do have a bearing in some quarters, especially the cults, but for myself I found it difficult to identify the faith of some of the people I meet, or my own faith, with what Dawkins describes. However, if Dawkins is right, then it is unlikely that I could make such an identification, because it is no doubt part of the survival strategy of the "faith virus" to be difficult for its victims to detect. It is, therefore, difficult for a person of faith to oppose the faith virus theory by attempting to prove that they are not a victim, because the theory probably implies that the virus is likely to induce its victim to try and do this anyway in order to enhance its survivability. Thus my opposition, as a person of faith, will hardly count as evidence. In this respect Dawkins' faith virus theory is remarkably like the faith virus itself in that one can say of both, to quote Dawkins, "Once the proposition is believed it automatically undermines opposition to itself"!! The faith virus theory is also self-referential, like the faith virus. But I am not going to be too hard on Dawkins here; self-reference, particularly of the self-supporting stable kind, as I will go on to show, is not necessarily a bad thing. But let us note the irony in Dawkins' thesis!
2) THE PENULTIMATE PARAGRAPH of the article is really the most interesting bit. Here, after considering and lampooning (harpooning?) those wallowing in the sea of faith, this solid, no-nonsense, bah-humbug biologist attempts to put his intellectual anchors down on what he thinks is the firm bedrock of science. As we know, this bedrock, in a philosophical sense, is far from firm. Who is going to tell him?
3) AS DAWKINS SMUGLY throws out an anchor in the penultimate paragraph it seems to me that his anchor simply catches on to the very raft on which he is standing. His justification of science in terms of exacting selective scrutiny of concepts, non-capricious ideas, evidential support, repeatability, progressiveness, independence of cultural milieu etc., itself appears to be a scientific view of science, presumably based on some juxtaposition of a theoretical conception of science and observations of it. Now, I don't want to be misunderstood here; some might think that this scientific self referencing is sufficient to rubbish Dawkins, but for myself, not only do I tend to agree with his views on science (although they need further scrutiny), but I find no a priori problem with self-description, self-reference, or self-justification. For the moment, however, let us be aware that it is happening, covertly, in this penultimate paragraph, where Dawkins anchors science to science, and let us, once again, make a note of the irony.
4) RUSSELL'S PARADOX, which was a famous contradiction arising from self-referencing or self-descriptive statements, was solved by doing not much more than simply disallowing such self-referencing. However, this solution, although valid, was highly artificial. Self-description and self-reference can and do exist, but if we allow them to exist in our knowledge there is a price: self-description and self-reference are forms of feedback, and therefore if we accept them we also have to accept that, in analogy with systems where feedback exists, we will have the possibility of both stable and unstable conceptual behaviour. The stability of non-contradictory thinking contrasts with the unstable world of contradiction, where we find a cognitive analogy to the oscillatory or chaotic behaviour produced by certain types of feedback. The liability of unstable cognition is a consequence of the human ability to have knowledge about knowledge, and it becomes a possibility as soon as we allow even subtle updates in what we know about our knowledge to register themselves as part and parcel with that knowledge. Of course, in mathematics, one can attempt to rule out this conceptual feedback, but in the real world of cognition it exists, and Russell's solution, although fine for set theory, is only an artificial device.
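(For reference, the paradox in its standard set-theoretic form runs: define R = {x : x ∉ x}, the set of all sets that are not members of themselves; then R ∈ R if and only if R ∉ R, a contradiction either way. It is this kind of loop, formed the moment a description is allowed to range over itself, that is being treated here as a form of conceptual feedback.)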
5) I HAVE A THEORY that at least part of the reason for the demise of 18th-century rationalism was the rather unnerving effects of conceptual feedback. With the early success of science and the consequent feeling that the race was onto a good thing, it is not surprising that eventually a rational view of rationalism would be attempted. Thus, as in various philosophical endeavours understanding sought to understand itself, what was to prove a very dangerous loop was quietly closed, and a trap was set for any who might leave the straight and narrow. A little knowledge was to prove a dangerous thing, and so in a series of philosophical debacles the painful signals of conceptual feedback started to flow like blood in a previously constricted limb. Extreme empiricism in the form of positivism discovered unstable feedback as soon as people started asking whether the verifiability principle was verifiable. Kant, in his search for a-priori synthetic knowledge about the world, failed to get to the other side of the cognitive interface we have with it, and was left dumbfounded by the result that mind appeared to be justified by mind, and he virtually lost contact with the "external" world. Darwin sensed the possibility of unstable feedback as he mused over how an evolutionary system, which appeared to be governed only by a survivalist ethic, had any obligation to produce minds that could understand evolution. The ulterior motives that sometimes lurk behind reflections such as these can be highly self-destructive. On many issues (but certainly not all) high standards of empirical verification and/or testing are possible, and this is capable of supporting a healthy level of scepticism because providence allows its satiation with sufficient empirical demonstration. However, this scepticism has a tendency to become more extreme, thoroughgoing and demanding, perhaps as a result of a proud desire for intellectual self-sufficiency founded on "absolute" knowledge. And so, as if in a judgment on the abuse of providence, this scepticism is permitted to start to doubt and therefore undermine the a-priori methods, assumptions, and mental toolkit that providence supplies in order for scepticism to go about satiating itself in the first place. Thus, without taking the utility of these gifts of providence for granted, human scepticism remains deeply unsatisfied. It is as if the stomach, in its craving for food, were to start digesting itself.
6) THE PHYSICAL SCIENCES are served by a meritocratic elite, holders of strange and deep secrets who express themselves in obscure technical and mathematical language that not many can fathom. This knowledge, some may say, and many appear to accept, is the key to the mystery of life, the universe, and everything; every academic subject is just a footnote to physics; the physicist Richard Feynman called the social disciplines pseudo-science, and the theoretical physicist Stephen Hawking writes the wave function for the universe! Whether right or wrong, the attitudes underlying this sort of thing are asking for trouble; some of the gut feelings at the bottom of both religious and secular humanism are offended here.
At the heart of humanistic endeavours there seems to be a necessity to have a basic a-priori optimism about the possibilities open to human achievement and its ability to eventually attain peace, justice and fairness. If this optimism, which can sometimes appear crass, didn't exist, it is unlikely that humanistic projects like Marxism (and Fascism), for example, would ever get off the ground. Hence, it may be felt that a situation where an intellectual elite hold the keys of all knowledge just can't be right; it cuts across humanistic optimism; it isn't fair, it isn't accountable, it isn't egalitarian; surely the universe is amenable to a more socialist approach? Worst of all, I suspect, is that it also cuts across some people's own taste for intellectual hegemony. So, it is with great glee that some of these people pounce on the discomfiture of science found in the great feedback debacles, where the physical sciences appeared to lose something of their absoluteness. Moreover, Kant had shown that the human element must be highly active in the pursuit of knowledge. So, in the battle for academic and intellectual hegemony there have been those who, jealous of the position and achievements of the physical sciences, have so emphasised the human element in the quest for knowledge that one can be forgiven for thinking that they are suggesting this element is all there is to it, and that human and cultural studies are therefore more fundamental. Some of the more extreme disciples of social historicism, objective idealism, dialectical materialism, existentialism, and subjective idealism appear to be consumed by 'science envy'. There may be something to be said for all these points of view, but when they become fortresses in the battle for intellectual domination, the trappings of offence and defence do not help toward an impartial consideration of them. It is as if ophthalmologists were to claim that the meaning of life, the universe and everything is to be found by studying the eye. After all, a case could be made for this in as far as much of life revolves round sight! The motives behind the intellectual anti-science culture are not only clear, but it is also clear that it is a culture with potentially worse conceptual feedback problems than the physical sciences. The latter may claim (although it can never assuage absolute scepticism) that the world of physical laws is unchanged by thought and culture, and that science therefore has the effect of anchoring knowledge. But if, in contrast, knowledge is only part and function of culture, then as it seeks to know that culture (of which it is a part) it will in turn change that culture, thus in turn altering itself. Therefore, it will find itself following a moving target, leading, perhaps, to a runaway feedback situation resembling a dog chasing its own tail. Perhaps a run-around of this kind is precisely what is happening in our society!
7) CONCEPTUAL FEEDBACK is inevitable, but an important question is: is the feedback we are interested in stable? If it were not possible to attain stable knowledge we could not know about this instability ourselves in a stable way, because if we did, it would, of course, contradict this condition of universally unstable knowledge. However, if some truly stable knowledge existed, then a stable belief in stability could be part of that stability, and therefore a-priori stable conditions admit the possibility that we could know of this stability in a stable way. (Got that?) At least a modicum of a-priori stability is required for stable knowledge; we would not be the creators of this stability - it would just have to be accepted and exploited, as probably happens in the physical sciences. Moreover, the physical sciences seem to be blessed with a high proportion of solid and reliable types who tend not to endlessly analyse their assumptions, thus closing the feedback loop, but instead are inclined to go ahead and exploit their "hard", "firm", "soft", and sometimes thoroughly "wet" brainware to the full. No wonder the social and human disciplines find progress more difficult! If some of the students of these disciplines concentrated less on making a style out of defining and redefining themselves, of constantly being insecure about making assumptions, and of tampering with a mental toolkit they don't understand, along with various other behavioural affectations, then they might find progress easier! I therefore have great sympathy with Dawkins's implicitly self-referencing, but stable, characterisation of science. However, one may ask what, apart from gut reaction, makes Dawkins object to some of the equally stable "faith viruses" he describes?
8) THE ARTICLE FAILS to elucidate the reason for this gut reaction because it gives little space to the question of just what characteristic makes a virus a virus. If all that characterises a virus is that it is a resilient self-perpetuating packet of information, then I suppose one might argue that knowledge of how to open a door is a virus. The latter is a concept that spreads from human to human, and even cats and dogs have been observed to catch this very successful cognitive virus. In this sense any useful piece of information becomes a virus. But what marks out a useful piece of information from a virus is the former's role in relation to a larger context; what distinguishes the door-opening concept from a virus is that the former takes part in a wide symbiosis: Without the door-opening concept life as a whole would become very difficult, whereas without a virus life is not difficult; on the contrary, life's identity and stability is usually enhanced by the absence of a virus, whereas the absence of useful information not only diminishes life's identity and stability, but may even make life impossible, because life is dependent on useful information. In contrast, life is not necessarily dependent on a virus, although the virus is inevitably parasitic upon life and therefore necessarily dependent on it. In a strange inversion of transactive justice the virus may even exact a cost on the host for the privilege of helping to maintain the viral identity; namely, chaos in the host. In a word, the virus gives nothing and takes all, just short of the final extinction of the host. So, is the God concept of this ilk? I would say no; it is part of a wider symbiosis, and a lot more than that!
9) THE NARROW CONFINES of extreme forms of reductionist materialism, dialectical materialism, social historicism, objective idealism, existentialism and subjective idealism may be dogmatic about what can be, but for myself I found that I could never claim to know enough to be able to say "there is no God". For all I knew, the notion of God could be both intelligible and real. Take the issue of intelligibility: Is the concept of God so diffuse that it is meaningless? Dogmatic "intelligibility atheism", like "ontological atheism", founders on the inevitable finiteness of experience and knowledge; my concepts of complex things such as "personality", "human beings", social interactions and even complex computer behaviour may also be rather diffuse. But clarity comes with experience and learning, and, moreover, there seems to be a rough rule that the less trivial and more significant something gets, the less amenable it is to the immediate senses and lower cognitive functions. If God existed, then like many social entities, there was going to be a problem for me in grasping both the meaning and reality of God. No "proof of God" was ever likely to be found that was big and complex enough, and I could no more expect to "see" God than I could expect to "see", other than metaphorically, a personality or a society. What should one do when faced with these uncertainties? Should one commit oneself to atheism, God, or nothing at all? To me there seemed to be no middle way. Like a man in an aircraft going down in flames, I had only two options: stay with the aircraft or bale out. But I had an advantage over the man in the burning aircraft: he could make a wrong decision; the plane may or may not crash badly, or he may or may not muff the parachute drop. Christian living appeared to do good things to people, things that a thoroughgoing secular philosophy failed to do. So even if this God business was rubbish I had little to lose by giving it a try. Born out of uncertainty and the need to act was the realisation that, as Pascal noted, opting for God was the better half of the bet. I couldn't go wrong. So I made my choice and it turned out to be the best thing I ever did! And so it should have been; absence of proof or disproof proves nothing; if God was logically meaningful and ontologically real, and moreover personal and relevant to my existence, then positive evidence was obliged to come along eventually. It has been said that assertions of existence are scientifically intractable because one has to look all over the place to prove or disprove them; however, the logic changes a little if that which is asserted to exist comes looking for you!
10) LACK OF BALANCE is how I would describe Dawkins's article in the New Statesman. In trying to maintain a balance myself, I would acknowledge that there might be circumstances to which Dawkins's ideas apply, but there are also religious connections to which they do not. For myself, and some other nervous wimps in religion, Dawkins's ideas are inapplicable because experience, evidence, reason and philosophy play an important role in nurturing and maintaining faith. For example, the inevitable level of givenness in the universe, the consciousness discontinuity, historical evidence regarding the life of Christ and His resurrection, personal experiences and the synchronous nature of certain events in one's life all act in the germination and maintenance of faith. Of course, in comparison with the kind of secular intelligentsia that Dawkins represents one might appear as a mistaken fool about all this, but, nevertheless, we are talking here of something far removed from the sort of "faith" described by the article. In the article we find reference to a kind of fideist and gnosto-dualist faith that is self-supporting in the sense that it loves itself more and more as it is less and less contaminated by the profanity of evidential or reasoned support of any kind.
Much more could be made of the "compost" of experience and evidence that helps nourish the seeds of faith during growth, but I would like to pick up something which is more in line with the theme of self-reference.
11) DAWKINS IS RIGHT, in a sense, that a faith like the Christian faith does have a considerable conceptual stability arising out of its self-referential nature. However, this self-referential stability exists not in the way described in the article, but as a result of the Christian belief in a loving personal God. To see this, contrast it with the opposite view, namely, that the world, apart from oneself, is primarily and fundamentally apersonal and/or disinterested. Influenced by a belief (for such it is) of this sort, one may rightly question and feel sceptical about whether one's knowledge represents anything at all; apersonal parties and/or principles, by definition, carry with them no a-priori absolute guarantee of the representational nature of any knowledge. The belief that the universe is primarily disinterested and/or apersonal is not only inclined to violate the foundation of knowledge, but it may start to undermine itself, as one may wonder whether this belief itself actually represents anything. The result is that you are either left with nothing or next to nothing, or, less honestly, you fudge the issue by becoming philosophically diffuse and won't dare admit to holding to anything resembling a belief in truth. At most there might be an acknowledgement of the utilitarian value of beliefs (presumably itself a belief). This tendency toward nihilism, or at best "minisculism", instability and confusion contrasts with the stability of a belief in a personal loving God. If I hold such a belief I am more likely to see accurate representational knowledge as an outcome of God's love for me. Needless to say, the very belief in a loving God is itself seen as the providential outcome of that love, because it is believed to be imparted by a God who desires to reveal to us not only truths, but above all Himself. This belief is self-referencing, but it is, of course, highly stable because it is self-affirming. Moreover, it encourages a proactive growth in knowledge, as this growth no longer seems a pointless exercise; in this context knowledge is believed to mean something. Therefore, if there is a God, the belief in God will further reinforce itself as further knowledge and experience of God's love and providence is sought, and inevitably gained. This growth and reinforcement will depend on that knowledge and experience being of the corroborating kind; but for it to start at all one must first start with God. My view is that to do so is to concur with an essential component of one's mental toolkit. There is no way one can from an absolutely sceptical basis "prove" this starting point without getting into unstable conceptual feedback cycles; we just have to assume it and exploit it. We are absolute dependents whose first premise is "In the beginning God ...."
© T. V. Reeves 1993
Sunday, July 13, 2008
Celebrity Death Match: Dembski vs. Bentley
The Swot
The Hulk
Here’s an unusual and novel juxtaposition of protagonists: see here and here. William Dembski, Intelligent Design guru, does a Todd Bentley meeting! It vaguely reminds me of the spate of postmodern films that bring incommensurable superheroes into collision: e.g. Miss Marple vs. The Terminator, or something like that.
I didn’t know whether to put this post on “Quantum Non-linearity” or on “Views News and Pews”, so I’ve posted it on both. It is clear, as the second link reveals, that Dembski was left with a very unfavourable impression of Bentley (I dread to think that it could have been otherwise). It’s interesting to see that Dembski made the very same observation that I made when I went to a Benny Hinn meeting: “…the exodus from the arena of people bound in wheelchairs was poignant.” But I hasten to add that I did not, like Dembski, travel many miles to get to my meeting: in order to save rental costs the financially savvy Hinn organization, conveniently for me, decided to bring their show to Norwich football ground, which is less than 20 minutes’ walk from my house. I certainly did not drag my family along; instead I went, as usual, in the capacity of an amateur researcher with camera and notepaper.
One of the commentators on Uncommon Descent takes Dembski to task for even giving Bentley’s ‘healing miracles’ the slightest credence from the outset. Fair comment, except that Dembski has a severely autistic son, and so he was understandably vulnerable to the ‘clutching at straws’ effect. Typically of this kind of Christian scene there is an exploitation of the emotions associated with the unknown, especially fear of the unknown. It’s all too easy to follow a false trail for something you really want: you hope against hope that the next corner or the next horizon will reveal the vista you are longing for. It never comes, but whilst you are in a state of ignorance the sheer hope strings you along. And when the carrot of hope fails to lead you up the garden path, there is the stick of fear, fear that an inscrutable and unknown god might just be revealing himself in the utterly unpalatable, and who knows what displeasure he will visit upon those who do not swallow it. It’s all a very Pagan view of God: it is a ministry that trades on fear, ignorance, numinous dread, submission, and above all on the notion of an unaccountable angry god whose actions are to all intents and purposes arbitrary. Pagan practices down the ages have thrived on this: I'm reminded of a burial site next to the Cursus in Dorset where a Neolithic woman and children were found buried by archaeologists who had a sneaking suspicion that they were uncovering a tragic story of human sacrifice: what satanic things humans can screw themselves up to do when they believe they are sanctioned by the Divine.
Is there a connection here with Intelligent Design theory? I hope not. I respect the efforts and faith of Dembski and his many followers who are carrying out a valuable critique of evolution and are presenting worthy challenges to people who think they believe in evolution. However, one of my niggles with ID theory is that it introduces an arbitrariness. In ID theory “Intelligence” is used as a kind of wild card or black box notion: “Intelligence is as intelligence does”. The broad sweep of paleontological change, which at least presents a prima facie case for evolution, then fails to cohere; it is like a story that seems to be a story but which, we are told, is no story after all. I have yet to find out how ID theory accounts for paleontological history, and so far ID seems to be a theory of negation, a case of showing that evolution isn’t possible. After that it’s the old shrug of the shoulders, the appeal to the inscrutable and the wild card, and sometimes there is the dark hint that anyone asking deeper questions is rushing in where angels fear to tread.
Quantum time corr: expectation value of particle motion in Schro. pic
1. Mar 9, 2013 #1
The expectation value of motion of a particle over a time interval t-to is
C(t,to) = <0|x(t)x(to)|0>
(product of position operators in Heisenberg representation for ground state harmonic oscillator)
2. Relevant equations
Schrodinger picture:
<ψ(t)|Ω|ψ(t)> = <ψ(to)|U+(t,to) Ω U(t,to)|ψ(to)>
I believe:
U(t,to) = exp[-iωt/h]
U+(t,to) = exp[iωt/h]
Heisenberg rep: time-indep states, time-dep operators
Schrodinger rep: time-dep states, time-indep operators
3. The attempt at a solution
I'm confused about how to convert x(t) in the Heisenberg representation to the Schrodinger representation and how to incorporate it into C(t,to).
Here is what I scribbled:
<x(t)> = <ψ(t)|x(to)|ψ(t)> = <ψ(to)|exp[iωt/h] x(to) exp[-iωt/h]|ψ(to)>
= <0|exp[iωt/h] x(to) exp[-iωt/h]|0>
Is the first equality even true?
The question seems like it should be simple, but I am sincerely confused. Thanks for your time and for any help!
3. Mar 10, 2013 #2
In the Heisenberg picture all time dependence is attributed to the operators, the states carry no time dependence. In the Schrödinger picture this is reversed, the operators are constants but the states change in time.
The connection between the pictures is established by requiring the physics to be unchanged. That is, expectation values must give the same numbers in both pictures.
This is a little wrong, why do you write ωt/h?
In the Schrödinger picture you can formally solve the Schrödinger equation by means of the evolution operator U(t,to) = exp[-iHt/h], where H is the Hamiltonian of the system. So the states will be given by |ψ(t)> = U(t,to)|ψ(to)>. To go to the Heisenberg picture you simply have to get rid of the time dependence, so you can write |ψ(Heisen.)> = U+(t,to) U(t,to)|ψ(to)> = |ψ(to, Schr.)> due to the unitarity of the evolution operator.
Now you can simply apply the same thing in any of your expectation value equations and find how your operators must change in order for the expectation values to be invariant.
4. Mar 10, 2013 #3
Hello and thanks for responding. The omega represents energy and the h is supposed to be h bar. I could see how that would look strange actually, so I won't use omega anymore.
Anyway, here is another attempt:
<0|x(t)x(to)|0> = <0|exp[iHt/h]exp[-iHt/h] x(to)x(to)exp [-iHt/h]exp[iHt/h]|0>
= <0|x^2|0> = h/(2mω)
This doesn't match my answer for solving it another way :/. I think I understand what to do conceptually, but not mathematically. Are the exponentials supposed to cancel? Aren't those what make the states time-dependent?
I hope I am making some amount of sense. Thanks again.
5. Mar 10, 2013 #4
How did you get exp[iHt/h]exp[-iHt/h] on either side?
The correct way should be:
In the Heisenberg picture you have <0|x(t)x(to)|0>
So using the evolution operator to switch the states to the Schrödinger picture you get:
<0|exp[iHt/h] x(t) x(to) exp[-iHt/h]|0> = <0|U+ x(t) x(to) U |0>
So in order for the expectation value to be invariant you must write
<0|U+ U x(t) U+ U x(to) U+ U |0> = <0|x(t)x(to)|0> due to unitarity of U.
So the operators must change like U x(t) U+
Hope this helps. Note the ambiguity when I write x(t) even in the Schrödinger picture.
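As a sanity check outside the thread itself, a quick numerical sketch can make the picture change concrete. The Python/NumPy snippet below (with ħ, m and ω all set to 1 as an arbitrary choice, and "h" in the thread read as ħ) builds truncated ladder-operator matrices for the harmonic oscillator, forms the Heisenberg-picture operator x(t) = U+(t) x U(t), and compares <0|x(t)x(to)|0> against the standard closed-form result (ħ/2mω)·exp[-iω(t-to)].

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0   # units chosen for convenience (an assumption, not from the thread)
N = 20                            # truncated Fock-space dimension; only the lowest levels matter here

n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)                         # annihilation operator: a|n> = sqrt(n)|n-1>
x = np.sqrt(hbar / (2 * m * omega)) * (a + a.conj().T)   # Schrodinger-picture position operator
E = hbar * omega * (n + 0.5)                             # oscillator energies (H is diagonal in this basis)

def x_heisenberg(t):
    """Heisenberg-picture x(t) = U+(t) x U(t), with U(t) = exp(-iHt/hbar)."""
    U = np.diag(np.exp(-1j * E * t / hbar))
    return U.conj().T @ x @ U

ground = np.zeros(N)
ground[0] = 1.0                                          # the ground state |0>

t, t0 = 0.7, 0.2
C = ground @ x_heisenberg(t) @ x_heisenberg(t0) @ ground
analytic = hbar / (2 * m * omega) * np.exp(-1j * omega * (t - t0))
print(C, analytic)   # the two agree to machine precision
```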
Discrete & Continuous Dynamical Systems - A
2014, Volume 34, Issue 2
Gravitational Field Equations and Theory of Dark Matter and Dark Energy
Tian Ma and Shouhong Wang
2014, 34(2): 335-366 doi: 10.3934/dcds.2014.34.335
The main objective of this article is to derive new gravitational field equations and to establish a unified theory for dark energy and dark matter. The gravitational field equations with a scalar potential $\varphi$ function are derived using the Einstein-Hilbert functional, and the scalar potential $\varphi$ is a natural outcome of the divergence-free constraint of the variational elements. Gravitation is now described by the Riemannian metric $g_{\mu\nu}$, the scalar potential $\varphi$ and their interactions, unified by the new field equations. From quantum field theoretic point of view, the vector field $\Phi_\mu=D_\mu \varphi$, the gradient of the scalar function $\varphi$, is a spin-1 massless bosonic particle field. The field equations induce a natural duality between the graviton (spin-2 massless bosonic particle) and this spin-1 massless bosonic particle. Both particles can be considered as gravitational force carriers, and as they are massless, the induced forces are long-range forces. The (nonlinear) interaction between these bosonic particle fields leads to a unified theory for dark energy and dark matter. Also, associated with the scalar potential $\varphi$ is the scalar potential energy density $\frac{c^4}{8\pi G} \Phi=\frac{c^4}{8\pi G} g^{\mu\nu}D_\mu D_\nu \varphi$, which represents a new type of energy caused by the non-uniform distribution of matter in the universe. The negative part of this potential energy density produces attraction, and the positive part produces repelling force. This potential energy density is conserved with mean zero: $\int_M \Phi dM=0$. The sum of this potential energy density $\frac{c^4}{8\pi G} \Phi$ and the coupling energy between the energy-momentum tensor $T_{\mu\nu}$ and the scalar potential field $\varphi$ gives rise to a unified theory for dark matter and dark energy: The negative part of this sum represents the dark matter, which produces attraction, and the positive part represents the dark energy, which drives the acceleration of expanding galaxies. In addition, the scalar curvature of space-time obeys $R=\frac{8\pi G}{c^4} T + \Phi$. Furthermore, the proposed field equations resolve a few difficulties encountered by the classical Einstein field equations.
Ergodicity criteria for non-expanding transformations of 2-adic spheres
Vladimir Anashin, Andrei Khrennikov and Ekaterina Yurova
2014, 34(2): 367-377 doi: 10.3934/dcds.2014.34.367
In the paper, we obtain necessary and sufficient conditions for ergodicity (with respect to the normalized Haar measure) of discrete dynamical systems $\langle f;\mathbf S_{2^{-r}}(a)\rangle$ on 2-adic spheres $\mathbf S_{2^{-r}}(a)$ of radius $2^{-r}$, $r\ge 1$, centered at some point $a$ from the ultrametric space of 2-adic integers $\mathbb Z_2$. The map $f\colon\mathbb Z_2\to\mathbb Z_2$ is assumed to be non-expanding and measure-preserving; that is, $f$ satisfies a Lipschitz condition with a constant 1 with respect to the 2-adic metric, and $f$ preserves a natural probability measure on $\mathbb Z_2$, the Haar measure $\mu_2$ on $\mathbb Z_2$ which is normalized so that $\mu_2(\mathbb Z_2)=1$.
Viscous Aubry-Mather theory and the Vlasov equation
Ugo Bessi
2014, 34(2): 379-420 doi: 10.3934/dcds.2014.34.379
The Vlasov equation models a group of particles moving under a potential $V$; moreover, each particle exerts a force, of potential $W$, on the other ones. We shall suppose that these particles move on the $p$-dimensional torus $T^p$ and that the interaction potential $W$ is smooth. We are going to perturb this equation by a Brownian motion on $T^p$; adapting to the viscous case methods of Gangbo, Nguyen, Tudorascu and Gomes, we study the existence of periodic solutions and the asymptotics of the Hopf-Lax semigroup.
On the existence and asymptotic stability of solutions for unsteady mixing-layer models
Tómas Chacón-Rebollo, Macarena Gómez-Mármol and Samuele Rubino
2014, 34(2): 421-436 doi: 10.3934/dcds.2014.34.421
We introduce in this paper some elements for the mathematical analysis of turbulence models for oceanic surface mixing layers. We consider Richardson-number based vertical eddy diffusion models. We prove the existence of unsteady solutions if the initial condition is close to an equilibrium, via the inverse function theorem in Banach spaces. We use this result to prove the non-linear asymptotic stability of equilibrium solutions.
Well-posedness and ill-posedness for the 3D generalized Navier-Stokes equations in $\dot{F}^{-\alpha,r}_{\frac{3}{\alpha-1}}$
Chao Deng and Xiaohua Yao
2014, 34(2): 437-459 doi: 10.3934/dcds.2014.34.437
In this paper, we study the Cauchy problem of the 3-dimensional (3D) generalized Navier-Stokes equations (gNS) in the Triebel-Lizorkin spaces $\dot{F}^{-\alpha,r}_{q_\alpha}$ with $(\alpha,r)\in(1,\frac{5}{4})\times[1,\infty]$ and $q_\alpha=\frac{3}{\alpha-1}$. Our work establishes a dichotomy of well-posedness and ill-posedness depending on $r$. Specifically, by combining the new endpoint bilinear estimates in $L^{\!q_\alpha}_x\!L^2_T$ and $L^\infty_T\dot{F}^{-\alpha,1}_{q_\alpha}$ and characterization of the Triebel-Lizorkin spaces via fractional semigroup, we prove well-posedness of the gNS in $\dot{F}^{-\alpha,r}_{q_\alpha}$ for $r\in[1,2]$. Meanwhile, for any $r\in(2,\infty]$, we show that the solution to the gNS can develop norm inflation in the sense that arbitrarily small initial data in $\dot{F}^{-\alpha,r}_{q_\alpha}$ can produce arbitrarily large solution after arbitrarily short time.
Infinitely many radial solutions to elliptic systems involving critical exponents
Yinbin Deng, Shuangjie Peng and Li Wang
2014, 34(2): 461-475 doi: 10.3934/dcds.2014.34.461
In this paper, by an approximating argument, we obtain infinitely many radial solutions for the following elliptic systems with critical Sobolev growth $$ \left\lbrace\begin{array}{ll} -\Delta u=|u|^{2^*-2}u + \frac{\eta \alpha}{\alpha+\beta}|u|^{\alpha-2}u |v|^{\beta} + \frac{\sigma p}{p+q} |u|^{p-2}u|v|^{q}, & x \in B, \\ -\Delta v = |v|^{2^*-2}v + \frac{\eta \beta}{\alpha+\beta} |u|^{\alpha}|v|^{\beta-2}v + \frac{\sigma q}{p+q} |u|^{p}|v|^{q-2}v, & x \in B, \\ u = v = 0, & x \in \partial B, \end{array}\right. $$ where $N > \frac{2(p + q + 1)}{p + q - 1}$, $\eta, \sigma > 0$, $\alpha,\beta > 1$, $\alpha + \beta = 2^* := \frac{2N}{N-2}$, $p,\,q\ge 1$, $2\le p+q<2^*$, and $B\subset \mathbb{R}^N$ is an open ball centered at the origin.
Variational discretization for rotating stratified fluids
Mathieu Desbrun, Evan S. Gawlik, François Gay-Balmaz and Vladimir Zeitlin
2014, 34(2): 477-509 doi: 10.3934/dcds.2014.34.477
In this paper we develop and test a structure-preserving discretization scheme for rotating and/or stratified fluid dynamics. The numerical scheme is based on a finite dimensional approximation of the group of volume preserving diffeomorphisms recently proposed in [25,9] and is derived via a discrete version of the Euler-Poincaré variational formulation of rotating stratified fluids. The resulting variational integrator allows for a discrete version of Kelvin circulation theorem, is applicable to irregular meshes and, being symplectic, exhibits excellent long term energy behavior. We then report a series of preliminary tests for rotating stratified flows in configurations that are symmetric with respect to translation along one of the spatial directions. In the benchmark processes of hydrostatic and/or geostrophic adjustments, these tests show that the slow and fast component of the flow are correctly reproduced. The harder test of inertial instability is in full agreement with the common knowledge of the process of development and saturation of this instability, while preserving energy nearly perfectly and respecting conservation laws.
The fundamental solution of linearized nonstationary Navier-Stokes equations of motion around a rotating and translating body
Reinhard Farwig, Ronald B. Guenther, Enrique A. Thomann and Šárka Nečasová
2014, 34(2): 511-529 doi: 10.3934/dcds.2014.34.511
We derive the fundamental solution of the linearized problem of the motion of a viscous fluid around a rotating body when the axis of rotation of the body is not parallel to the velocity of the fluid at infinity.
Dynamical properties of almost repetitive Delone sets
Dirk Frettlöh and Christoph Richard
2014, 34(2): 531-556 doi: 10.3934/dcds.2014.34.531
We consider the collection of uniformly discrete point sets in Euclidean space equipped with the vague topology. For a point set in this collection, we characterise minimality of an associated dynamical system by almost repetitivity of the point set. We also provide linear versions of almost repetitivity which lead to uniquely ergodic systems. Apart from linearly repetitive point sets, examples are given by periodic point sets with almost periodic modulations, and by point sets derived from primitive substitution tilings of finite local complexity with respect to the Euclidean group with dense tile orientations.
Global existence of small-norm solutions in the reduced Ostrovsky equation
Roger Grimshaw and Dmitry Pelinovsky
2014, 34(2): 557-566 doi: 10.3934/dcds.2014.34.557
We use a novel transformation of the reduced Ostrovsky equation to the integrable Tzitzéica equation and prove global existence of small-norm solutions in Sobolev space $H^3(\mathbb{R})$. This scenario is an alternative to finite-time wave breaking of large-norm solutions of the reduced Ostrovsky equation. We also discuss a sharp sufficient condition for the finite-time wave breaking.
Global weak solutions to the two-dimensional Navier-Stokes equations of compressible heat-conducting flows with symmetric data and forces
Fei Jiang, Song Jiang and Junpin Yin
2014, 34(2): 567-587 doi: 10.3934/dcds.2014.34.567
We prove the global existence of weak solutions to the Navier-Stokes equations of compressible heat-conducting fluids in two spatial dimensions with initial data and external forces which are large and spherically symmetric. The solutions will be obtained as the limit of the approximate solutions in an annular domain. We first derive a number of regularity results on the approximate physical quantities in the ``fluid region'', as well as the new uniform integrability of the velocity and temperature in the entire space-time domain by exploiting the theory of the Orlicz spaces. By virtue of these a priori estimates we then argue in a manner similar to that in [Arch. Rational Mech. Anal. 173 (2004), 297-343] to pass to the limit and show that the limiting functions are indeed a weak solution which satisfies the mass and momentum equations in the entire space-time domain in the sense of distributions, and the energy equation in any compact subset of the ``fluid region''.
Superstable periodic orbits of 1d maps under quasi-periodic forcing and reducibility loss
Àngel Jorba, Pau Rabassa and Joan Carles Tatjer
2014, 34(2): 589-597 doi: 10.3934/dcds.2014.34.589
Let $g_{\alpha}$ be a one-parameter family of one-dimensional maps with a cascade of period doubling bifurcations. Between each of these bifurcations, a superstable periodic orbit is known to exist. An example of such a family is the well-known logistic map. In this paper we deal with the effect of a quasi-periodic perturbation (with only one frequency) on this cascade. Let us call $\varepsilon$ the perturbing parameter. It is known that, if $\varepsilon$ is small enough, the superstable periodic orbits of the unperturbed map become attracting invariant curves (depending on $\alpha$ and $\varepsilon$) of the perturbed system. In this article we focus on the reducibility of these invariant curves.
The paper shows that, under generic conditions, there are both reducible and non-reducible invariant curves depending on the values of $\alpha$ and $\varepsilon$. The curves in the space $(\alpha,\varepsilon)$ separating the reducible (or the non-reducible) regions are called reducibility loss bifurcation curves. If the map satisfies an extra condition (a condition satisfied by the quasi-periodically forced logistic map) then we show that, from each superattracting point of the unperturbed map, two reducibility loss bifurcation curves are born. This means that these curves are present for all the cascade.
Dynamics of random selfmaps of surfaces with boundary
Seung Won Kim and P. Christopher Staecker
2014, 34(2): 599-611 doi: 10.3934/dcds.2014.34.599
We use Wagner's algorithm to estimate the number of periodic points of certain selfmaps on compact surfaces with boundary. When counting according to homotopy classes, we can use the asymptotic density to measure the size of sets of selfmaps. In this sense, we show that ``almost all'' such selfmaps have periodic points of every period, and that in fact the number of periodic points of period $n$ grows exponentially in $n$. We further discuss this exponential growth rate and the topological and fundamental-group entropies of these maps.
Since our approach is via the Nielsen number, which is homotopy and homotopy-type invariant, our results hold for selfmaps of any space which has the homotopy type of a compact surface with boundary.
Steady state analysis for a relaxed cross diffusion model
Thomas Lepoutre and Salomé Martínez
2014, 34(2): 613-633 doi: 10.3934/dcds.2014.34.613
In this article we study the existence of nonconstant steady state solutions for the following relaxed cross-diffusion system $$ \left\lbrace\begin{array}{l} \partial_t u-\Delta[a(\tilde v)u]=0,\;\text{ in } (0,\infty)\times\Omega,\\ \partial_t v-\Delta[b(\tilde u)v]=0,\;\text{ in } (0,\infty)\times\Omega,\\ -\delta\Delta \tilde u+\tilde u=u,\;\text{ in }\Omega,\\ -\delta\Delta \tilde v+\tilde v=v,\;\text{ in }\Omega,\\ \partial_n u=\partial_n v=\partial_n\tilde u=\partial_n\tilde v=0,\;\text{ on } (0,\infty) \times \partial\Omega, \end{array}\right. $$ with $\Omega$ a bounded smooth domain, $n$ the outer unit normal to $\partial\Omega$, and $\delta>0$ the relaxation parameter. The functions $a(\tilde v)$, $b(\tilde u)$ account for nonlinear cross-diffusion, being $a(\tilde v)=1+{\tilde v}^\gamma$, $b(\tilde u)=1+{\tilde u}^\eta$ with $\gamma, \eta >1$ a model example. We give conditions for the stability of constant steady state solutions and we prove that under suitable conditions Turing patterns arise considering $\delta$ as a bifurcation parameter.
Generic property of irregular sets in systems satisfying the specification property
Jinjun Li and Min Wu
2014, 34(2): 635-645 doi: 10.3934/dcds.2014.34.635
Let $f$ be a continuous map on a compact metric space. In this paper, under the hypothesis that $f$ satisfies the specification property, we prove that the set consisting of those points for which the Birkhoff ergodic average does not exist is either residual or empty.
Topological entropy by unit length for the Ginzburg-Landau equation on the line
N. Maaroufi
2014, 34(2): 647-662 doi: 10.3934/dcds.2014.34.647
In this paper we study the notion of topological entropy by unit length for the dynamical system given by the complex Ginzburg-Landau equation on the line (CGL). This equation has a global attractor $\mathcal{A}$ that attracts all the trajectories. We first prove the existence of the topological entropy by unit length for the topological dynamical system $(\mathcal{A},S)$ in a Hilbert space framework, where $S(t)$ is the semi-flow defined by CGL. Next we show that this topological entropy by unit length is bounded by the product of the upper fractal dimension per unit length (see [10]) with the expansion rate. Finally, we prove that this quantity is invariant for all $H^k$ metrics ($k\geq 0$).
Non-normal numbers with respect to Markov partitions
Manfred G. Madritsch
2014, 34(2): 663-676 doi: 10.3934/dcds.2014.34.663
We call a real number normal if for any block of digits the asymptotic frequency of this block in the $N$-adic expansion equals the expected one. In the present paper we consider non-normal numbers and, in particular, essentially and extremely non-normal numbers. We call a real number essentially non-normal if for each single digit there exists no asymptotic frequency of its occurrence. Furthermore we call a real number extremely non-normal if all possible probability vectors are accumulation points of the sequence of frequency vectors. Our aim now is to extend and generalize these results to Markov partitions.
Gevrey normal forms for nilpotent contact points of order two
P. De Maesschalck
2014, 34(2): 677-688 doi: 10.3934/dcds.2014.34.677
This paper deals with normal forms about contact points (`turning points') of nilpotent type that one frequently encounters in the study of planar slow-fast systems. In case the contact point of an analytic slow-fast vector field is of order two, we prove that the slow-fast vector field can locally be written as a slow-fast Liénard equation up to exponentially small error. The proof is based on the use of Gevrey asymptotics. Furthermore, for slow-fast jump points, we eliminate the exponentially small remainder.
Invariant Tori for Benjamin-Ono Equation with Unbounded quasi-periodically forced Perturbation
Lufang Mi and Kangkang Zhang
2014, 34(2): 689-707 doi: 10.3934/dcds.2014.34.689
In this paper, we consider the non-autonomous Benjamin-Ono equation $$u_t+\mathscr{H}u_{xx}- uu_x- (F(\omega t,x,u))_x=0$$ under periodic boundary conditions. Using an abstract infinite dimensional KAM theorem dealing with unbounded perturbation vector-field and partial Birkhoff normal form, we will prove that there exists a Cantorian branch of KAM tori and thus many time quasi-periodic solutions for the above equation.
On the derivative of the $\alpha$-Farey-Minkowski function
Sara Munday
2014, 34(2): 709-732 doi: 10.3934/dcds.2014.34.709
In this paper we study the family of $\alpha$-Farey-Minkowski functions $\theta_\alpha$, for an arbitrary countable partition $\alpha$ of the unit interval with atoms which accumulate only at the origin, which are the conjugating homeomorphisms between each of the $\alpha$-Farey systems and the tent map. We first show that each function $\theta_\alpha$ is singular with respect to the Lebesgue measure and then demonstrate that the unit interval can be written as the disjoint union of the following three sets: $\Theta_0 : = \{x \in [0,1] : \theta_\alpha'(x)=0\}, \Theta_{\infty} : = \{ x \in [0,1] : \theta_\alpha'(x)=\infty \} $ and $\Theta_\sim : = \{ x \in [0,1] : \theta_\alpha'(x)\ does\ not\ exist \} $. The main result is that \[ \dim_{\mathrm{H}}(\Theta_\infty)=\dim_{\mathrm{H}}(\Theta_\sim)=\sigma_\alpha(\log2)<\dim_{\mathrm{H}}(\Theta_0)=1, \] where $\sigma_\alpha(\log2)$ denotes the Hausdorff dimension of the level set $\{x\in [0,1]:\Lambda(F_\alpha, x)=\log2\}$ and $\Lambda(F_\alpha, x)$ is the Lyapunov exponent of the map $F_\alpha$ at the point $x$. The proof of the theorem employs the multifractal formalism for $\alpha$-Farey systems.
The defocusing $\dot{H}^{1/2}$-critical NLS in high dimensions
Jason Murphy
2014, 34(2): 733-748 doi: 10.3934/dcds.2014.34.733
We consider the defocusing $\dot{H}^{1/2}$-critical nonlinear Schrödinger equation in dimensions $d\geq 4.$ In the spirit of Kenig and Merle [10], we combine a concentration-compactness approach with the Lin--Strauss Morawetz inequality to prove that if a solution $u$ is bounded in $\dot{H}^{1/2}$ throughout its lifespan, then $u$ is global and scatters.
Goldstein-Wentzell boundary conditions: Recent results with Jerry and Gisèle Goldstein
Silvia Romanelli
2014, 34(2): 749-760 doi: 10.3934/dcds.2014.34.749
We present a survey of recent results concerning heat and telegraph equations, equipped with Goldstein-Wentzell boundary conditions (already known as general Wentzell boundary conditions). We focus on the generation of analytic semigroups and continuous dependence of the solutions of the associated Cauchy problems from the boundary conditions.
Semi-linear elliptic and elliptic-parabolic equations with Wentzell boundary conditions and $L^1$-data
Paul Sacks and Mahamadi Warma
2014, 34(2): 761-787 doi: 10.3934/dcds.2014.34.761
Let $Ω\subset\mathbb{R}^N$ ($N\ge 2$) be a bounded domain with a boundary $∂Ω$ of class $C^2$ and let $\alpha,\beta$ be maximal monotone graphs in $\mathbb{R}^2$ satisfying $\alpha(0)\cap\beta(0)\ni 0$. Given $f\in L^1(Ω)$ and $g\in L^1(∂Ω)$, we characterize the existence and uniqueness of weak solutions to the semi-linear elliptic equation $-\Delta u+\alpha(u)\ni f$ in $Ω$ with the nonlinear general Wentzell boundary conditions $-\Delta_{\Gamma} u+\frac{\partial u}{\partial\nu}+\beta(u)\ni g$ on $∂Ω$. We also show the well-posedness of the associated parabolic problem on the Banach space $L^1(Ω)\times L^1(∂Ω)$.
Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source
Liangchen Wang, Yuhuan Li and Chunlai Mu
2014, 34(2): 789-802 doi: 10.3934/dcds.2014.34.789
This paper deals with the global existence and boundedness of the solutions for the chemotaxis system with logistic source \begin{eqnarray*} \left\{ \begin{array}{llll} u_t=\nabla\cdot(\phi(u)\nabla u)-\nabla\cdot(\varphi(u)\nabla v)+f(u),\quad &x\in \Omega,\quad t>0,\\ v_t=\Delta v-v+u,\quad &x\in\Omega,\quad t>0,\\ \end{array} \right. \end{eqnarray*} under homogeneous Neumann boundary conditions in a convex smooth bounded domain $\Omega\subset \mathbb{R}^n (n\geq2),$ with non-negative initial data $u_0\in C^0(\overline{\Omega})$ and $v_0\in W^{1,\theta}{(\Omega)}$ (with some $\theta>n$). The nonlinearities $\phi$ and $\varphi$ are assumed to generalize the prototypes \begin{eqnarray*} \phi(u)=(u+1)^{-\alpha},\,\,\,\,\,\, \varphi(u)=u(u+1)^{\beta-1} \end{eqnarray*} with $\alpha\in \mathbb{R}$ and $\beta\in \mathbb{R}$. $f(u)$ is a smooth function generalizing the logistic function \begin{eqnarray*} f(u)=ru-bu^\gamma,\,\,\,\,\,\, u\geq0,\,\,\text{with}\,\, r\geq0,\,\,b>0\,\,\text{and}\,\,\gamma>1. \end{eqnarray*} It is proved that the corresponding initial-boundary value problem possesses a unique global classical solution that is uniformly bounded provided that some technical conditions are fulfilled.
Local Well-posedness and Persistence Property for the Generalized Novikov Equation
Yongye Zhao, Yongsheng Li and Wei Yan
2014, 34(2): 803-820 doi: 10.3934/dcds.2014.34.803
In this paper, we study the generalized Novikov equation which describes the motion of shallow water waves. By using the Littlewood-Paley decomposition and nonhomogeneous Besov spaces, we prove that the Cauchy problem for the generalized Novikov equation is locally well-posed in Besov space $B_{p,r}^{s}$ with $1\leq p, r\leq +\infty$ and $s>{\rm max}\{1+\frac{1}{p},\frac{3}{2}\}$. We also show the persistence property of the strong solutions which implies that the solution decays at infinity in the spatial variable provided that the initial function does.
A nonlinear diffusion problem arising in population genetics
Peng Zhou, Jiang Yu and Dongmei Xiao
2014, 34(2): 821-841 doi: 10.3934/dcds.2014.34.821
In this paper we investigate a nonlinear diffusion equation with the Neumann boundary condition, which was proposed by Nagylaki in [19] to describe the evolution of two types of genes in population genetics. For such a model, we obtain the existence of nontrivial solutions and the limiting profile of such solutions as the diffusion rate $d\rightarrow0$ or $d\rightarrow\infty$. Our results show that as $d\rightarrow0$, the location of nontrivial solutions relative to trivial solutions plays a very important role for the existence and shape of limiting profile. In particular, an example is given to illustrate that the limiting profile does not exist for some nontrivial solutions. Moreover, to better understand the dynamics of this model, we analyze the stability and bifurcation of solutions. These conclusions provide a different angle to understand that obtained in [17,21].
Well-posedness, blow-up phenomena and global existence for the generalized $b$-equation with higher-order nonlinearities and weak dissipation
Shouming Zhou, Chunlai Mu and Liangchen Wang
2014, 34(2): 843-867 doi: 10.3934/dcds.2014.34.843
This paper deals with the Cauchy problem for a weakly dissipative shallow water equation with high-order nonlinearities $y_{t}+u^{m+1}y_{x}+bu^{m}u_{x}y+\lambda y=0$, where $\lambda,b$ are constants and $m\in\mathbb{N}$, the notation $y:= (1-\partial_x^2) u$, which includes the famous $b$-equation and Novikov equations as special cases. The local well-posedness of solutions for the Cauchy problem in Besov space $B^s_{p,r} $ with $1\leq p,r \leq +\infty$ and $s>\max\{1+\frac{1}{p},\frac{3}{2}\}$ is obtained. Under some assumptions, the existence and uniqueness of the global solutions to the equation are shown, and conditions that lead to the development of singularities in finite time for the solutions are acquired, moreover, the propagation behaviors of compactly supported solutions are also established. Finally, the weak solution and analytic solution for the equation are considered.
Topological quasi-stability of partially hyperbolic diffeomorphisms under random perturbations
Yujun Zhu
2014, 34(2): 869-882 doi: 10.3934/dcds.2014.34.869
In this paper, $C^0$ random perturbations of a partially hyperbolic diffeomorphism are considered. It is shown that a partially hyperbolic diffeomorphism is quasi-stable under such perturbations.
In quantum mechanics, when we talk about the wave nature of particles, are we referring in fact to the wave function? Does the wave function describe the probability of finding a particle (e.g. a photon) at some location? So do the "waves" describe probabilities just the way in classical physics the electromagnetic waves describe the perturbations of the electric and magnetic fields?
No, because the wavefunctions are not waves in space. They are waves in enormous high-dimensional spaces of possibilities. If you have two particles, the wavefunction is waving in 6 dimensions (the two positions of the two particles make a six dimensional space of possibilities), if you have three particles, the wavefunction is in 9 dimensions. So it is always wrong to think of it as a wave in space, like a field.
There is a field which obeys the Schrodinger equation, but this classical field is a classical wave, like E and B, which describes many coherent bosons in the same quantum state all moving together, like a superfluid or a Bose-Einstein condensate.
I wasn't thinking of it as a wave in space. I was making an analogy. So the wave-function of a photon describes the probability of finding a particle at a certain location? If not what does it describe? – Buzai Andras Jul 2 '12 at 22:28
It describes the probability of finding several particles in several locations. It's a wave over possible universes, not a wave over one particle's position (unless the system is one particle). – Ron Maimon Jul 3 '12 at 2:23
The key point to understand about wave/particle duality is that when we describe some system (e.g. an electron) as a wave, what we mean is that it interacts like a wave. Similarly, when we describe it as a particle we mean it interacts like a particle. The electron itself is neither a wave nor a particle: it's, well, an electron.
The other point is that we can describe our system using various mathematical approaches. When you say wavefunction I'd guess you're thinking about the solutions to the Schrödinger equation. The Schrödinger equation is basically a wave equation so it works very well when describing wave-like interactions. It can be used to describe particle-like interactions, but this gets messy because you have to model your particle as the superposition of infinitely many waves.
You're quite correct that the wavefunction describes the probability of finding the particle, but the wavefunction is not simply a wave like a sine wave.
Usual quantum mechanics is roughly based on the following principles: 1. For any given ("small") physical system $S$ there is an associated set $H_S$ of physical states. 2. At any instant of time $t$ the system $S$ exists in some state $a_t\in H_S$. Time evolution of this state is governed by a first order (in time) differential equation called the Schrodinger equation. 3. The state $a_t$ carries all the information about the system that one can hope to get. This information is probabilistic and depends upon what 'observable' you want to measure. In particular, if you measure position you will see a particle; if you measure momentum you will see a wave. However, words can sometimes be misleading, so you should consult a good text, e.g. Cohen-Tannoudji, Volume 1.
Edit : "Wave" as this term is used in QM does not mean ordinary physical wave but mathematical "probability wave". So when we talk about wave nature of particle what we are referring to is the position space probability wave (or wave function ) associated with it. So you are right :-)
Does the wave nature of a particle refer to the wave function?
No, and for a very simple reason. The wave nature of a quantum particle refers to the empirical evidence - observations - that quantum particles, like the electron and the photon, exhibit classical wave-like properties such as interference and diffraction.
The wave function of a quantum particle is part of a mathematical model that, in the non-relativistic limit, accurately predicts both the wave like and particle like observations. The wave function's magnitude squared is interpreted as a probability density.
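To make that last sentence concrete, here is a minimal numerical sketch (Python/NumPy; the Gaussian wave packet and the interval [-1, 1] are arbitrary illustrative choices, not something taken from the answers): it treats |ψ|² as a probability density over position and integrates it to get the probability of finding the particle in a chosen region.

```python
import numpy as np

# A normalized 1D Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)); sigma is an arbitrary choice
sigma = 1.0
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

density = np.abs(psi) ** 2            # Born rule: |psi|^2 is a probability density over position
print((density * dx).sum())           # ~1.0: total probability of finding the particle somewhere
inside = (x >= -1.0) & (x <= 1.0)
print((density[inside] * dx).sum())   # ~0.68: probability of finding it between x = -1 and x = 1
```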
In 2012, mathematician Ian Stewart came out with an excellent and deeply researched book titled "In Pursuit of the Unknown: 17 Equations That Changed the World."
His book takes a look at the most pivotal equations of all time, and puts them in a human, rather than technical context.
"Equations definitely can be dull, and they can seem complicated, but that’s because they are often presented in a dull and complicated way," Stewart told Business Insider. "I have an advantage over school math teachers: I'm not trying to show you how to do the sums yourself."
He explained that anyone can "appreciate the beauty and importance of equations without knowing how to solve them ... The intention is to locate them in their cultural and human context, and pull back the veil on their hidden effects on history."
Stewart continued that "equations are a vital part of our culture. The stories behind them — the people who discovered or invented them and the periods in which they lived — are fascinating."
Here are 17 equations that have changed the world:
The Pythagorean Theorem
Image: Business Insider
What does it mean? The square of the hypotenuse of a right triangle is equal to the sum of the squares of its legs.
History: Though attributed to Pythagoras, it is not certain that he was the first person to prove it. The first clear proof came from Euclid, and it is possible the concept was known 1,000 years before Pythagoras by the Babylonians.
Importance: The equation is at the core of much of geometry, links it with algebra, and is the foundation of trigonometry. Without it, accurate surveying, mapmaking, and navigation would be impossible.
In terms of pure math, the Pythagorean Theorem defines normal, Euclidean plane geometry. For example, a right triangle drawn on the surface of a sphere like the Earth doesn't necessarily satisfy the theorem.
Modern use: Triangulation is used to this day to pinpoint relative location for GPS navigation.
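A small worked example in Python (a toy sketch, not from Stewart's book): treat two points on a flat map as opposite corners of a right triangle, and the straight-line distance between them is the hypotenuse the theorem gives you.

```python
import math

def distance(p, q):
    """Straight-line distance between two points on a flat map (the hypotenuse)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.sqrt(dx**2 + dy**2)   # c = sqrt(a^2 + b^2)

print(distance((0, 0), (3, 4)))  # 5.0, the classic 3-4-5 right triangle
```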
The logarithm and its identities
Image: Business Insider
What does it mean? You can multiply numbers by adding related numbers.
History: The initial concept was discovered by the Scottish Laird John Napier of Merchiston in an effort to make the multiplication of large numbers, then incredibly tedious and time consuming, easier and faster. It was later refined by Henry Briggs to make reference tables easier to calculate and more useful.
Importance: Logarithms were revolutionary, making calculation faster and more accurate for engineers and astronomers. That's less important with the advent of computers, but they're still essential to scientists.
Modern use: Logarithms, and the related exponential functions, are used to model everything from compound interest to biological growth to radioactive decay.
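The "multiply by adding" idea is easy to check for yourself. The short Python sketch below (illustrative numbers only) multiplies two numbers the way a user of Napier's or Briggs's tables would have: look up the logarithms, add them, and convert back.

```python
import math

a, b = 123456.0, 789.0
print(math.log10(a * b))                      # log of the product ...
print(math.log10(a) + math.log10(b))          # ... equals the sum of the logs
print(10 ** (math.log10(a) + math.log10(b)))  # converting back recovers a * b
```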
Calculus
Image: Business Insider
What does it mean? Allows the calculation of an instantaneous rate of change.
History: Calculus as we currently know it was described around the same time in the late 17th century by Isaac Newton and Gottfried Leibniz. There was a lengthy debate over plagiarism and priority which may never be resolved. We use the leaps of logic and parts of the notation of both men today.
Importance: According to Stewart, "More than any other mathematical technique, it has created the modern world." Calculus is essential in our understanding of how to measure solids, curves, and areas. It is the foundation of many natural laws, and the source of differential equations.
Modern use: Any mathematical problem where an optimal solution is required. Essential to medicine, economics, physics, engineering, and computer science.
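For a concrete sense of "instantaneous rate of change", here is a minimal Python sketch (not from the book) that estimates a derivative numerically with a central difference and checks it against the known answer for f(x) = x², whose derivative is 2x.

```python
def derivative(f, x, h=1e-6):
    """Central-difference estimate of the instantaneous rate of change f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(lambda x: x**2, 3.0))  # ~6.0, matching the exact derivative 2 * 3
```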
Newton's universal law of gravitation
Image: Business Insider
What does it mean? Calculates the force of gravity between two objects.
History: Isaac Newton derived his laws based on earlier astronomical and mathematical work by Johannes Kepler. He also used, and possibly plagiarized, the work of Robert Hooke.
Importance: Used techniques of calculus to describe how the world works. Even though it was later supplanted by Einstein's theory of relativity, it is still essential for a practical description of how objects in space, like stars, planets, and human-made spacecraft, interact with each other. We use it to this day to design orbits for satellites and probes.
Philosophically, Newton's law is important because it describes how gravity works everywhere, from a ball falling to the ground on Earth to the evolution of galaxies and the universe as a whole. While we take the idea of universal laws for granted today, in earlier eras the idea that the terrestrial and celestial worlds shared the same properties was revolutionary.
Modern use: Although, as mentioned above, for practical uses Newton's law has been augmented by Einstein's theories, the basic idea of Newtonian gravity is still a useful approximation for how things behave in space.
Complex numbers
Image: Business Insider
What does it mean? Mathematicians can expand our idea of what numbers are by introducing the square roots of negative numbers.
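The defining relation of the imaginary unit i, presumably what the image above showed, is:

    i^2 = -1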
History: Imaginary numbers were originally posited by famed gambler/mathematician Girolamo Cardano, then expanded by Rafael Bombelli and John Wallis. They remained a peculiar but essential problem in math until William Rowan Hamilton gave the modern definition, treating a complex number as an ordered pair of real numbers.
The imaginary and complex numbers are mathematically very elegant. Algebra works perfectly the way we want it to: any polynomial equation has a complex number solution, a situation that is not true for the real numbers. For example, x² + 4 = 0 has no real number solution, but it does have a complex solution: a square root of −4, namely 2i. Calculus can be extended to the complex numbers, and by doing so, we find some amazing symmetries and properties of these numbers.
Importance: According to Stewart, "... most modern technology, from electric lighting to digital cameras, could not have been invented without them." The extension of calculus to the complex numbers, a branch of math called "complex analysis," is essential to understanding electrical systems and a variety of modern data processing algorithms.
Modern use: Used broadly in electrical engineering and mathematical theory.
Euler's formula for polyhedra
Image: Business Insider
What does it mean? Describes a numerical relationship that is true of all solid shapes of a particular type.
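With V the number of vertices, E the edges, and F the faces, the formula reads:

    V - E + F = 2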
History: This was developed by the great 18th century mathematician Leonhard Euler. Polyhedra are the three-dimensional versions of polygons, such as a cube. The corners of a polyhedron are called its vertices, the lines connecting the vertices are its edges, and the polygons covering it are its faces.
A cube has 8 vertices, 12 edges, and 6 faces. If I add the vertices and faces together, and subtract the edges, I get 8 + 6 - 12 = 2.
Euler's formula states that, as long as your polyhedron is somewhat well behaved, if you add the vertices and faces together, and subtract the edges, you will always get 2. This will be true whether your polyhedron has 4, 8, 12, 20, or any number of faces.
Importance: Fundamental to the development of topology, which extends geometry to any continuous surface.
Modern use: Topology is used to understand the behavior and function of DNA, and it is an underlying part of the mathematical tool kit used to understand networks like social media and the internet.
The normal distribution
Image: Business Insider
What does it mean? Defines the standard normal distribution, a bell shaped curve in which the probability of observing a point is greatest near the average, and declines rapidly as one moves away.
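One standard way to write the density, with mean \mu and standard deviation \sigma (the exact parametrization in the original image may differ):

    f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}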
History: The initial work was by Blaise Pascal, but the distribution came into its own with Bernoulli. The bell curve as we currently know it comes from Belgian mathematician Adolphe Quetelet.
Importance: The equation is the foundation of modern statistics. Science and social science would not exist in their current form without it. Statistical experiment design relies on the properties of the normal curve, and how those properties relate to errors that can occur when taking a random sample.
Modern use: Used to determine whether drugs are sufficiently effective in clinical trials.
The wave equation
Image: Business Insider
What does it mean? A differential equation that describes the behavior of waves, like the behavior of a vibrating violin string.
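In one spatial dimension, for a displacement u(x, t) and wave speed c, the equation is:

    \frac{\partial^2 u}{\partial t^2} = c^2\, \frac{\partial^2 u}{\partial x^2}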
History: The mathematicians Daniel Bernoulli and Jean d'Alembert were the first to describe this relationship in the 18th century, albeit in slightly different ways.
Importance: The behavior of waves generalizes to the way sound works, how earthquakes happen, and the behavior of the ocean.
The techniques developed to solve the wave equation have been very useful in solving similar types of equations as well.
Modern use: Oil companies set off explosives, then read data from the ensuing sound waves to predict geological formations.
The Fourier transform
Image: Business Insider
What does it mean? Describes patterns in time as a function of frequency.
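One common convention for the transform, taking a signal f(x) to its frequency content \hat{f}(\xi) (other sign and scaling conventions exist):

    \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx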
History: Joseph Fourier discovered the equation, which extended from his famous solution to a differential equation describing how heat flows, and the previously described wave equation.
Importance: The equation allows for complex wave patterns, like music, speech, or images, to be broken up, cleaned up, and analyzed. This is essential in many types of signal analysis.
Modern use: Used to compress information for the JPEG image format and discover the structure of molecules.
The Navier-Stokes equations
Image: Business Insider
What does it mean? The Navier-Stokes equations are the fundamental physical equations describing how fluids move. The left side is the acceleration of a small amount of fluid; the right side indicates the forces that act upon it.
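A standard form for an incompressible fluid, with velocity \mathbf{v}, density \rho, pressure p, viscosity \mu, and body force \mathbf{f} (the version in the original image may be written slightly differently):

    \rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \mu\,\nabla^2\mathbf{v} + \mathbf{f}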
History: Leonhard Euler made the first attempt at modeling fluid movement. French engineer Claude-Louis Navier and Irish mathematician George Stokes made the leap to the model still used today.
Importance: Once computers became powerful enough to approximately solve this equation, it opened up a complex and very useful field of physics. It is particularly useful in making vehicles more aerodynamic.
While we can use modern computers to make practical approximate simulations of fluid dynamics that are useful in engineering, finding a mathematically exact solution (or even knowing whether or not an exact solution exists in all cases) is still an open question, one whose answer is attached to a million-dollar prize.
Modern use: Among other things, allowed for the development of modern passenger jets.
Maxwell's equations
Image: Business Insider
What does it mean? Maps out the relationship between electric and magnetic fields.
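In differential form for free space (no charges or currents), one common presentation of the four equations is:

    \nabla\cdot\mathbf{E} = 0, \quad \nabla\cdot\mathbf{B} = 0, \quad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \quad \nabla\times\mathbf{B} = \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}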
History: Michael Faraday did pioneering work on the connection between electricity and magnetism, and James Clerk Maxwell translated it into these equations. Maxwell's equations were for classical electromagnetism what Newton's laws of motion were for classical mechanics.
Importance: Helped understand electromagnetic waves, helping to create most modern electrical and electronic technology.
Modern use: Radar, television, and modern communications.
Second law of thermodynamics
Image: Business Insider
What does it mean? Energy and heat dissipate over time.
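Stated in terms of entropy S, the law says that the entropy of an isolated system never decreases:

    dS \ge 0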
History: Sadi Carnot first posited that nature does not have reversible processes. The physicist Ludwig Boltzmann extended the law, and William Thomson formally stated it.
Importance: Essential to our understanding of energy and the universe via the concept of entropy. Thermodynamic entropy is, roughly speaking, a measure of how disordered a system is. A system that starts out in an ordered, uneven state — say, a hot region next to a cold region — will always tend to even out, with heat flowing from the hot area to the cold area until evenly distributed.
Modern use: Thermodynamics underlies much of our understanding of chemistry and is essential in building any kind of power plant or engine.
Einstein's theory of relativity
Image: Business Insider
What does it mean? Energy and matter are two sides of the same coin.
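The mass-energy relation, for a body of mass m and the speed of light c:

    E = mc^2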
History: The genesis of Einstein's equation was an experiment by Albert Michelson and Edward Morley showing that light did not behave in a Newtonian manner with respect to changing frames of reference. Einstein followed up on this insight with his famous papers on special relativity (1905) and general relativity (1915).
Importance: Probably the most famous equation in history. Completely changed our view of matter and reality.
Modern use: Helped lead to nuclear weapons, and if GPS didn't account for it, your directions would be off by thousands of yards.
The Schrödinger equation
Image: Business Insider
What does it mean? This is the main equation in quantum physics. Models matter as a wave, rather than a particle.
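In its time-dependent form, for a wavefunction \Psi and Hamiltonian operator \hat{H}:

    i\hbar\, \frac{\partial \Psi}{\partial t} = \hat{H}\,\Psi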
History: Louis-Victor de Broglie pinpointed the dual nature of matter in 1924. The equation you see was derived by Erwin Schrödinger in 1926, building on the work of physicists like Werner Heisenberg. It describes the way subatomic particles and atoms evolve over time.
Importance: Revolutionized the view of physics at small scales. The insight that particles at that level exist at a range of probable states was revolutionary.
Modern use: Quantum mechanics is necessary for most modern technology — nuclear power, semiconductor-based computers, and lasers are all built around quantum phenomena.
Shannon's information theory
Image: Business Insider
What does it mean? Estimates the amount of data in a piece of code by the probabilities of its component symbols.
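Shannon's entropy of a source whose symbols occur with probabilities p(x), measured in bits when the logarithm is base 2:

    H = -\sum_{x} p(x)\, \log_2 p(x)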
History: Developed by Bell Labs engineer Claude Shannon in the years after World War II.
Modern use: Shannon's entropy measure launched the mathematical study of information, and his results are central to how we communicate over networks today.
The logistic model for population growth
Image: Business Insider
What does it mean? Estimates the change in a population of creatures across generations with limited resources. Importantly, this equation can lead to chaotic behavior.
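The logistic map, with x_t the population in generation t (as a fraction of the maximum) and k the growth rate:

    x_{t+1} = k\, x_t\, (1 - x_t)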
History: In 1975, Robert May was the first to point out that this model of population growth could produce chaos. Important work by mathematicians Vladimir Arnold and Stephen Smale helped with the realization that chaos is a consequence of differential equations.
Importance: Helped in the development of chaos theory, which has completely changed our understanding of the way that natural systems work.
Modern use: Used to model earthquakes and forecast the weather.
The Black–Scholes model
Image: Business Insider
What does it mean? Prices a financial derivative under the assumption that it can be hedged risklessly and that there is no arbitrage opportunity when it is priced correctly.
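The Black-Scholes partial differential equation for the price V(S, t) of a derivative on an underlying asset S, with volatility \sigma and risk-free rate r:

    \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\,\frac{\partial V}{\partial S} - rV = 0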
History: Developed by Fischer Black and Myron Scholes, then expanded by Robert Merton. The latter two won the 1997 Nobel Prize in Economics for the discovery.
Importance: Helped create the now multi-trillion dollar derivatives market. It is argued that improper use of the formula (and its descendants) contributed to the financial crisis. In particular, the equation maintains several assumptions that do not hold true in real financial markets.
Modern use: Variants are still used to price most derivatives, even after the financial crisis.
History of mathematical notation
The history of mathematical notation[1] includes the commencement, progress, and cultural diffusion of mathematical symbols and the conflict between methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation[2] comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and operators.[3] The history includes Hindu-Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries.
The development of mathematical notation can be divided into stages.[4][5] The "rhetorical" stage is where calculations are performed by words and no symbols are used.[6] The "syncopated" stage is where frequently used operations and quantities are represented by symbolic syntactical abbreviations. From ancient times through the post-classical age,[note 1] bursts of mathematical creativity were often followed by centuries of stagnation. As the early modern age opened and the worldwide spread of knowledge began, written examples of mathematical developments came to light. The "symbolic" stage is where comprehensive systems of notation supersede rhetoric. Beginning in Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This symbolic system was in use by medieval Indian mathematicians and in Europe since the middle of the 17th century,[7] and has continued to develop in the contemporary era.
The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and, the focus here, the investigation into the mathematical methods and notation of the past.
Rhetorical stage
Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to it were largely indebted to the previous investigations of the Ancient Egyptians and Ancient Phoenicians. The distinctive feature of numerical notation, i.e. symbols having local as well as intrinsic values (arithmetic), implies a state of civilization at the period of its invention. Our knowledge of the mathematical attainments of these early peoples, to which this section is devoted, is imperfect, and the following brief notes should be regarded as a summary of the conclusions which seem most probable; the history of mathematical notation proper begins with the symbolic stage.
There can be no doubt that most early peoples which have left records knew something of numeration and mechanics, and that a few were also acquainted with the elements of land-surveying. In particular, the Egyptians paid attention to geometry and numbers, and the Phoenicians to practical arithmetic, book-keeping, navigation, and land-surveying. The results attained by these people seem to have been accessible, under certain conditions, to travelers. It is probable that the knowledge of the Egyptians and Phoenicians was largely the result of observation and measurement, and represented the accumulated experience of many ages.
Beginning of notation
Written mathematics began with numbers expressed as tally marks, with each tally representing a single unit. The numerical symbols consisted probably of strokes or notches cut in wood or stone, and intelligible alike to all nations.[note 2] For example, one notch in a bone represented one animal, or person, or anything else. The peoples with whom the Greeks of Asia Minor (amongst whom notation in western history begins) were likely to have come into frequent contact were those inhabiting the eastern littoral of the Mediterranean: and Greek tradition uniformly assigned the special development of geometry to the Egyptians, and that of the science of numbers[note 3] either to the Egyptians or to the Phoenicians.
The Ancient Egyptians had a symbolic notation which was the numeration by Hieroglyphics.[8][9] Egyptian mathematics had a symbol for one, ten, one-hundred, one-thousand, ten-thousand, one-hundred-thousand, and one-million. Smaller digits were placed on the left of the number, as they are in Hindu-Arabic numerals. Later, the Egyptians used hieratic instead of hieroglyphic script to show numbers. Hieratic was more like cursive and replaced several groups of symbols with individual ones. For example, the four vertical lines used to represent four were replaced by a single horizontal line. This is found in the Rhind Mathematical Papyrus (c. 2000–1800 BC) and the Moscow Mathematical Papyrus (c. 1890 BC). The system the Egyptians used was discovered and modified by many other civilizations in the Mediterranean. The Egyptians also had symbols for basic operations: legs going forward represented addition, and legs walking backward represented subtraction.
The Mesopotamians had symbols for each power of ten.[10] Later, they wrote their numbers in almost exactly the same way as is done in modern times. Instead of having symbols for each power of ten, they would just put the coefficient of each power. Each digit was separated by only a space, but by the time of Alexander the Great, they had created a symbol that represented zero and served as a placeholder. The Mesopotamians also used a sexagesimal system, that is base sixty. It is this system that is used in modern times when measuring time and angles. Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s.[11] Written in Cuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians and the system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period.[12]
The majority of Mesopotamian clay tablets date from 1800 to 1600 BC, and cover topics which include fractions, algebra, quadratic and cubic equations, and the calculation of regular reciprocal pairs.[13] The tablets also include multiplication tables and methods for solving linear and quadratic equations. The Babylonian tablet YBC 7289 gives an approximation of √2 accurate to five decimal places. Babylonian mathematics was written using a sexagesimal (base-60) numeral system. From this derives the modern day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of minutes and seconds of arc to denote fractions of a degree. Babylonian advances in mathematics were facilitated by the fact that 60 has many divisors: the reciprocal of any integer whose prime factors all divide 60 has a finite expansion in base 60. (In decimal arithmetic, only reciprocals of integers whose prime factors are 2 and 5 have finite decimal expansions.) Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values, much as in the decimal system. They lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context.
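A minimal Python sketch, not from the source, of why such reciprocals terminate: repeatedly multiply the fractional part by 60 and read off base-60 digits.

    from fractions import Fraction

    def sexagesimal(numerator, denominator, max_places=8):
        """Base-60 'digits' of numerator/denominator after the point, and whether they terminate."""
        digits = []
        rem = Fraction(numerator, denominator) % 1
        for _ in range(max_places):
            if rem == 0:
                return digits, True
            rem *= 60
            digits.append(int(rem))  # integer part is the next base-60 digit
            rem %= 1
        return digits, rem == 0

    print(sexagesimal(1, 8))   # ([7, 30], True): 8 = 2^3 divides a power of 60, so 1/8 terminates
    print(sexagesimal(1, 7))   # ([8, 34, 17, ...], False): 7 does not divide any power of 60, so 1/7 repeats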
Syncopated stage
Archimedes Thoughtful
by Fetti (1620)
The last words attributed to Archimedes are "Do not disturb my circles",[note 4] a reference to the circles in the mathematical drawing that he was studying when disturbed by the Roman soldier.
The history of mathematics cannot with certainty be traced back to any school or period before that of the Ionian Greeks, but the subsequent history may be divided into periods, the distinctions between which are tolerably well marked. Greek mathematics, which originated with the study of geometry, tended from its commencement to be deductive and scientific. Since the fourth century AD, Pythagoras has commonly been given credit for discovering the Pythagorean theorem, a theorem in geometry that states that in a right-angled triangle the area of the square on the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares of the other two sides.[note 5] Ancient mathematical texts are available in the previously mentioned Ancient Egyptian notation and in Plimpton 322 (Babylonian mathematics, c. 1900 BC). The study of mathematics as a subject in its own right begins in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek μάθημα (mathema), meaning "subject of instruction".[14]
Plato's influence has been especially strong in mathematics and the sciences. He helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic. Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics.[15] Aristotle is credited with what later would be called the law of excluded middle.
Abstract mathematics[16] is what treats of magnitude[note 6] or quantity, absolutely and generally considered, without regard to any species of particular magnitude, such as arithmetic and geometry. In this sense, abstract mathematics is opposed to mixed mathematics, wherein simple and abstract properties, and the relations of quantities primitively considered in mathematics, are applied to sensible objects, and by that means become intermixed with physical considerations; such are hydrostatics, optics, navigation, etc.[16]
Archimedes is generally considered to be the greatest mathematician of antiquity and one of the greatest of all time.[17][18] He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi.[19] He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution and an ingenious system for expressing very large numbers.
Euclid's Elements
Propositions 31, 32 and 33 of Book XI of Euclid's Elements, located in vol. 2 of the manuscript, sheets 207 to 208 recto.
In the historical development of geometry, the steps in the abstraction of geometry were made by the ancient Greeks. Euclid's Elements is the earliest extant documentation of the axioms of plane geometry, though Proclus tells of an earlier axiomatisation by Hippocrates of Chios.[20] Euclid's Elements (c. 300 BC) is one of the oldest extant Greek mathematical treatises[note 7] and consisted of 13 books written in Alexandria, collecting theorems proven by other mathematicians, supplemented by some original work.[note 8] The document is a successful collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. Euclid's first theorem is a lemma that possesses properties of prime numbers. The influential thirteen books cover Euclidean geometry, geometric algebra, and the ancient Greek version of algebraic systems and elementary number theory. It was ubiquitous in the Quadrivium and is instrumental in the development of logic, mathematics, and science.
Diophantus of Alexandria was author of a series of books called Arithmetica, many of which are now lost. These texts deal with solving algebraic equations. Boethius provided a place for mathematics in the curriculum in the 6th century when he coined the term quadrivium to describe the study of arithmetic, geometry, astronomy, and music. He wrote De institutione arithmetica, a free translation from the Greek of Nicomachus's Introduction to Arithmetic; De institutione musica, also derived from Greek sources; and a series of excerpts from Euclid's Elements. His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works.[21][22]
Acrophonic and Milesian numeration
The Greeks employed Attic numeration,[23] which was based on the system of the Egyptians and was later adapted and used by the Romans. Greek numerals one through four were vertical lines, as in the hieroglyphics. The symbol for five was the Greek letter Π (pi), which is the first letter of the Greek word for five, pente. Numbers six through nine were pente with vertical lines next to it. Ten was represented by the letter Δ, the first letter of the word for ten, deka; one hundred by the first letter of the word for hundred; etc.
The Ionian numeration used their entire alphabet including three archaic letters. The numeral notation of the Greeks, though far less convenient than that now in use, was formed on a perfectly regular and scientific plan,[24] and could be used with tolerable effect as an instrument of calculation, to which purpose the Roman system was totally inapplicable. The Greeks divided the twenty-four letters of their alphabet into three classes, and, by adding another symbol to each class, they had characters to represent the units, tens, and hundreds. (Jean Baptiste Joseph Delambre's Astronomie Ancienne, t. ii.)
Units: Α (α) = 1, Β (β) = 2, Γ (γ) = 3, Δ (δ) = 4, Ε (ε) = 5, Ϝ (ϝ) = 6, Ζ (ζ) = 7, Η (η) = 8, Θ (θ) = 9
Tens: Ι (ι) = 10, Κ (κ) = 20, Λ (λ) = 30, Μ (μ) = 40, Ν (ν) = 50, Ξ (ξ) = 60, Ο (ο) = 70, Π (π) = 80, Ϟ (ϟ) = 90
Hundreds: Ρ (ρ) = 100, Σ (σ) = 200, Τ (τ) = 300, Υ (υ) = 400, Φ (φ) = 500, Χ (χ) = 600, Ψ (ψ) = 700, Ω (ω) = 800, Ϡ (ϡ) = 900
This system appeared in the third century BC, before the letters digamma (Ϝ), koppa (Ϟ), and sampi (Ϡ) became obsolete. When lowercase letters became differentiated from upper case letters, the lower case letters were used as the symbols for notation. Multiples of one thousand were written as the nine numbers with a stroke in front of them: thus one thousand was ",α", two-thousand was ",β", etc. M (for μύριοι, as in "myriad") was used to multiply numbers by ten thousand. For example, the number 88,888,888 (8,888 myriads plus 8,888) would be written as M,ηωπη ,ηωπη.[25]
Greek mathematical reasoning was almost entirely geometric (albeit often used to reason about non-geometric subjects such as number theory), and hence the Greeks had no interest in algebraic symbols. The great exception was Diophantus of Alexandria, the great algebraist.[26] His Arithmetica was one of the texts to use symbols in equations. It was not completely symbolic, but was much more so than previous books. An unknown number was called s.[27] The square of s was Δ^υ; the cube was Κ^υ; the fourth power was Δ^υΔ; and the fifth power was ΔΚ^υ.[28][note 9]
Chinese mathematical notation
Main article: Suzhou numerals
The numbers 0–9 in Chinese huāmǎ (花碼) numerals
The Chinese used numerals that look much like the tally system.[29] Numbers one through four were horizontal lines. Five was an X between two horizontal lines; it looked almost exactly the same as the Roman numeral for ten. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
In the history of the Chinese, there were those who were familiar with the sciences of arithmetic, geometry, mechanics, optics, navigation, and astronomy. Mathematics in China emerged independently by the 11th century BC.[30] It is indeed almost certain that the Chinese were acquainted with several geometrical or rather architectural implements;[note 10] with mechanical machines;[note 11] that they knew of the characteristic property of the magnetic needle; and were aware that astronomical events occurred in cycles. Chinese of that time had made attempts to classify or extend the rules of arithmetic or geometry which they knew, and to explain the causes of the phenomena with which they were acquainted beforehand. The Chinese independently developed very large and negative numbers, decimals, a place value decimal system, a binary system, algebra, geometry, and trigonometry.
Counting rod numerals
Chinese mathematics made early contributions, including a place value system.[31][32] The geometrical theorems with which the ancient Chinese were acquainted were applicable in certain cases (namely the ratio of sides).[note 12] It is likely that geometrical theorems which can be demonstrated in the quasi-experimental way of superposition were also known to them. In arithmetic their knowledge seems to have been confined to the art of calculation by means of the swan-pan, and the power of expressing the results in writing. Our knowledge of the early attainments of the Chinese, slight though it is, is more complete than in the case of most of their contemporaries. It is thus instructive, and serves to illustrate the fact, that a nation may possess considerable skill in the applied arts while our knowledge of the mathematics on which those arts are founded remains scarce. Knowledge of Chinese mathematics before 254 BC is somewhat fragmentary, and even after this date the manuscript traditions are obscure. Dates centuries before the classical period are generally considered conjectural by Chinese scholars unless accompanied by verified archaeological evidence.
As in other early societies, the focus was on astronomy in order to perfect the agricultural calendar and other practical tasks, and not on establishing formal systems. The Chinese Board of Mathematics' duties were confined to the annual preparation of an almanac, whose dates and predictions it regulated. Ancient Chinese mathematicians did not develop an axiomatic approach, but made advances in algorithm development and algebra. The achievement of Chinese algebra reached its zenith in the 13th century, when Zhu Shijie invented the method of four unknowns.
As a result of obvious linguistic and geographic barriers, as well as content, Chinese mathematics and the mathematics of the ancient Mediterranean world are presumed to have developed more or less independently up to the time when The Nine Chapters on the Mathematical Art reached its final form, while the Writings on Reckoning and Huainanzi are roughly contemporary with classical Greek mathematics. Some exchange of ideas across Asia through known cultural exchanges from at least Roman times is likely. Frequently, elements of the mathematics of early societies correspond to rudimentary results found later in branches of modern mathematics such as geometry or number theory. The Pythagorean theorem, for example, has been attested to as early as the time of the Duke of Zhou. Knowledge of Pascal's triangle has also been shown to have existed in China centuries before Pascal,[33] such as by Shen Kuo.
Modern artist's impression of Shen Kuo.
The state of trigonometry in China slowly began to change and advance during the Song Dynasty (960–1279), when Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendrical science and astronomical calculations.[34] The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs.[34] Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316).[35] As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy.[34][36] The mathematical science of the Chinese would incorporate the work and teaching of Arab missionaries with knowledge of spherical trigonometry who had come to China in the course of the thirteenth century.
Indian mathematical notation
Although the origin of our present system of numerical notation is ancient, there is no doubt that it was in use among the Hindus over two thousand years ago. The algebraic notation of the Indian mathematician Brahmagupta was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend (the number to be subtracted), and division by placing the divisor below the dividend, similar to our notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms.[37] The Hindu-Arabic numeral system and the rules for the use of its operations, in use throughout the world today, likely evolved over the course of the first millennium AD in India and were transmitted to the west via Islamic mathematics.[38][39]
Hindu-Arabic numerals and notations
A page from al-Khwārizmī's Algebra
Despite their name, Arabic numerals actually started in India. The reason for this misnomer is that Europeans first saw the numerals used in an Arabic book, Concerning the Hindu Art of Reckoning, by Mohammed ibn-Musa al-Khwarizmi. Al-Khwārizmī wrote several important books on the Hindu-Arabic numerals and on methods for solving equations. His book On the Calculation with Hindu Numerals, written about 825, along with the work of Al-Kindi,[note 13] was instrumental in spreading Indian mathematics and Indian numerals to the West. Al-Khwarizmi did not claim the numerals as Arabic, but over several Latin translations, the fact that the numerals were Indian in origin was lost. The word algorithm is derived from the Latinization of Al-Khwārizmī's name, Algoritmi, and the word algebra from the title of one of his works, Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa’l-muqābala (The Compendious Book on Calculation by Completion and Balancing).
Islamic mathematics developed and expanded the mathematics known to Central Asian civilizations.[40] Al-Khwārizmī gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots,[41] and Al-Khwārizmī was the first to teach algebra in an elementary form and for its own sake.[42] Al-Khwārizmī also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described as al-jabr.[43] His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." Al-Khwārizmī also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."[44]
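A standard textbook illustration, not taken from this article, of the two operations acting on an equation:

    x^2 = 40x - 4x^2  \longrightarrow  5x^2 = 40x   (al-jabr: restore the subtracted 4x^2 to the other side)
    50 + x^2 = 29 + 10x  \longrightarrow  21 + x^2 = 10x   (al-muqābala: cancel the like amounts on both sides)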
Al-Karaji, in his treatise al-Fakhri, extends the methodology to incorporate integer powers and integer roots of unknown quantities.[note 14][45] The historian of mathematics, F. Woepcke,[46] praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Also in the 10th century, Abul Wafa translated the works of Diophantus into Arabic. Ibn al-Haytham would develop analytic geometry. Al-Haytham derived the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. Al-Haytham performed an integration in order to find the volume of a paraboloid, and was able to generalize his result for the integrals of polynomials up to the fourth degree.[note 15][47] In the late 11th century, Omar Khayyam would develop algebraic geometry, write Discussions of the Difficulties in Euclid,[note 16] and write on the general geometric solution to cubic equations. Nasir al-Din Tusi (Nasireddin) made advances in spherical trigonometry. Muslim mathematicians during this period also introduced the decimal point notation to the Arabic numerals.
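In modern notation, the result attributed to al-Haytham is equivalent to the identity:

    \sum_{k=1}^{n} k^4 = \frac{n(n+1)(2n+1)(3n^2 + 3n - 1)}{30}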
Many Greek and Arabic texts on mathematics were then translated into Latin, which led to further development of mathematics in medieval Europe. In the 12th century, scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's[note 17] and the complete text of Euclid's Elements.[note 18][48][49] One of the European books that advocated using the numerals was Liber Abaci, by Leonardo of Pisa, better known as Fibonacci. Liber Abaci is better known for the mathematical problem Fibonacci wrote in it about a population of rabbits. The growth of the population ended up being a Fibonacci sequence, where a term is the sum of the two preceding terms.
Abū al-Hasan ibn Alī al-Qalasādī (1412–1482) was the last major medieval Arab algebraist, who improved on the algebraic notation earlier used by Ibn al-Yāsamīn in the 12th century and, in the Maghreb, by Ibn al-Banna in the 13th century.[50] In contrast to the syncopated notations of their predecessors, Diophantus and Brahmagupta, which lacked symbols for mathematical operations,[51] al-Qalasadi's algebraic notation was the first to have symbols for these operations and was thus "the first steps toward the introduction of algebraic symbolism." He represented mathematical symbols using characters from the Arabic alphabet.[50]
Symbolic stage
Symbols by popular introduction date
[Timeline graphic: mathematical symbols in roughly reverse order of popular introduction, from the plus and minus signs, equals sign, parentheses, radical and multiplication signs through the division, infinity, percent, integral and differential signs, summation and product signs, set-theoretic symbols (element, union, intersection, aleph), quantifiers, and matrix notation.]
Early arithmetic and multiplication
The 1489 use of the plus and minus signs in print.
The 14th century saw the development of new mathematical concepts to investigate a wide range of problems.[52] The two widely used arithmetic symbols are addition and subtraction, + and −. The plus sign was used by 1360 by Nicole Oresme[53][note 19] in his work Algorismus proportionum.[54] It is thought to be an abbreviation for "et", meaning "and" in Latin, in much the same way the ampersand sign also began as "et". Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of the distance covered by a body undergoing uniformly accelerated motion, asserting that the area under the line depicting the constant acceleration represented the total distance traveled.[55] The minus sign was used in 1489 by Johannes Widmann in Mercantile Arithmetic or Behende und hüpsche Rechenung auff allen Kauffmanschafft.[56] Widmann used the minus symbol with the plus symbol, to indicate deficit and surplus, respectively.[57] In Summa de arithmetica, geometria, proportioni e proportionalità,[note 20][58] Luca Pacioli used the plus and minus symbols, and the book contained algebra.[note 21]
In the 15th century, Ghiyath al-Kashi computed the value of π to the 16th decimal place. Kashi also had an algorithm for calculating nth roots.[note 22] In 1533, Regiomontanus's table of sines and cosines was published.[59] Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for quartic equations, discovered by his student Lodovico Ferrari. The radical symbol[note 23] for square root was introduced by Christoph Rudolff.[note 24] Michael Stifel's important work Arithmetica integra[60] contained important innovations in mathematical notation. In 1556, Nicolo Tartaglia used parentheses for precedence grouping. In 1557 Robert Recorde published The Whetstone of Witte which used the equal sign (=) as well as plus and minus signs for the English reader. In 1564, Gerolamo Cardano analyzed games of chance, beginning the early stages of probability theory. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin's book De Thiende ('the art of tenths'), published in Dutch in 1585, contained a systematic treatment of decimal notation, which influenced all later work on the real number system. The New algebra (1591) of François Viète introduced the modern notational manipulation of algebraic expressions. For navigation and accurate maps of large areas, trigonometry grew to be a major branch of mathematics. Bartholomaeus Pitiscus coined the word "trigonometry", publishing his Trigonometria in 1595.
John Napier is best known as the inventor of logarithms[note 25][61] and made common the use of the decimal point in arithmetic and mathematics.[62][63] After Napier, Edmund Gunter created the logarithmic scales (lines, or rules) upon which slide rules are based; it was William Oughtred who used two such scales sliding by one another to perform direct multiplication and division, and he is credited as the inventor of the slide rule in 1622. In 1631 Oughtred introduced the multiplication sign (×), his proportionality sign,[note 26] and the abbreviations sin and cos for the sine and cosine functions.[64] Albert Girard also used the abbreviations 'sin', 'cos' and 'tan' for the trigonometric functions in his treatise.
Johannes Kepler was one of the pioneers of the mathematical applications of infinitesimals.[note 27] René Descartes is credited as the father of analytical geometry, the bridge between algebra and geometry,[note 28] crucial to the discovery of infinitesimal calculus and analysis. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry.[note 29] Blaise Pascal influenced mathematics throughout his life. His Traité du triangle arithmétique ("Treatise on the Arithmetical Triangle") of 1653 described a convenient tabular presentation for binomial coefficients.[note 30] Pierre de Fermat and Blaise Pascal would investigate probability.[note 31] John Wallis introduced the infinity symbol.[note 32] He similarly used this notation for infinitesimals.[note 33] In 1657, Christiaan Huygens published the treatise on probability, On Reasoning in Games of Chance.[note 34][65]
Johann Rahn introduced the division symbol (obelus) and the therefore sign in 1659. William Jones used π in Synopsis palmariorum mathesios[66] in 1706 because it is the first letter of the Greek word perimetron (περιμετρον), which means perimeter in Greek. This usage was popularized in 1737 by Euler. In 1734, Pierre Bouguer used a double horizontal bar below the inequality sign.[67]
Derivatives notation: Leibniz and Newton
Derivative notations
The study of linear algebra emerged from the study of determinants, which were used to solve systems of linear equations. Calculus had two main systems of notation, each created by one of its creators: that developed by Isaac Newton and that developed by Gottfried Leibniz. Leibniz's is the notation used most often today. Newton's was simply a dot or dash placed above the function.[note 35] In modern usage, this notation generally denotes derivatives of physical quantities with respect to time, and is used frequently in the science of mechanics. Leibniz, on the other hand, used the letter d as a prefix to indicate differentiation, and introduced the notation representing derivatives as if they were a special type of fraction.[note 36] This notation makes explicit the variable with respect to which the derivative of the function is taken. Leibniz also created the integral symbol.[note 37] The symbol is an elongated S, representing the Latin word summa, meaning "sum". When finding areas under curves, integration is often illustrated by dividing the area into infinitely many tall, thin rectangles, whose areas are added. Thus, the integral symbol is an elongated s, for sum.
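For a quantity y depending on time t, the two derivative notations (and Leibniz's integral sign) look like this in modern form:

    \dot{y} = \frac{dy}{dt}, \qquad \ddot{y} = \frac{d^2 y}{dt^2} \quad \text{(Newton / Leibniz)}, \qquad \int y\, dt \quad \text{(Leibniz's integral)}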
High division operators and functions
See also: modern age
Letters of the alphabet in this time were to be used as symbols of quantity; and although much diversity existed with respect to the choice of letters, there were to be several universally recognized rules in the following history.[24] Here thus in the history of equations the first letters of the alphabet were indicatively known as coefficients, the last letters the unknown terms (an incerti ordinis). In algebraic geometry, again, a similar rule was to be observed, the last letters of the alphabet there denoting the variable or current coordinates. Certain letters, such as π, e, etc., were by universal consent appropriated as symbols of the frequently occurring numbers 3.14159... and 2.7182818...,[note 38] etc., and their use in any other acceptation was to be avoided as much as possible.[24] Letters, too, were to be employed as symbols of operation, and with them other previously mentioned arbitrary operation characters. The letter d and the elongated S were to be appropriated as operative symbols in the differential calculus and integral calculus, and Δ and Σ in the calculus of differences.[24] In functional notation, a letter, as a symbol of operation, is combined with another which is regarded as a symbol of quantity.[24][note 39]
Beginning in 1718, Thomas Twinin used the division slash (solidus), deriving it from the earlier Arabic horizontal fraction bar. Pierre-Simon, marquis de Laplace, developed the widely used Laplacian differential operator.[note 40] In 1750, Gabriel Cramer developed "Cramer's Rule" for solving linear systems. The "international mile" of 1760 international yards is exactly 1609.344 metres.[68] The kilometre,[69] a unit of length, first appeared in English in 1810.[70] By 1866, the "kilometers per hour" compound unit of speed was in use in the US.[note 41][71]
Euler and prime notations
Leonhard Euler's signature
Leonhard Euler was one of the most prolific mathematicians in history, and also a prolific inventor of canonical notation. His contributions include his use of e to represent the base of natural logarithms. It is not known exactly why e was chosen, but it was probably because the first four letters of the alphabet were already commonly used to represent variables and other constants. Euler used π to represent pi consistently. The use of π was suggested by William Jones, who used it as shorthand for perimeter. Euler used i to represent the square root of negative one,[note 42] although he earlier used it as an infinite number.[note 43][note 44] For summation, Euler used sigma, Σ.[note 45] For functions, Euler used the notation f(x) to represent a function of x. In 1730, Euler wrote the gamma function.[note 46] In 1736, Euler produced his paper on the Seven Bridges of Königsberg[72] regarding topology.
The mathematician William Emerson[73] would develop the proportionality sign.[note 47][note 48][74][75] Much later, in the abstract expression of the value of various proportional phenomena, the parts-per notation would become useful as a set of pseudo units to describe small values of miscellaneous dimensionless quantities. Marquis de Condorcet, in 1768, advanced the partial differential sign.[note 49] In 1771, Alexandre-Théophile Vandermonde deduced the importance of topological features when discussing the properties of knots related to the geometry of position. Between 1772 and 1788, Joseph-Louis Lagrange re-formulated the formulas and calculations of Classical "Newtonian" mechanics, called Lagrangian mechanics. The prime symbol for derivatives was also introduced by Lagrange.
Gauss, Hamilton, and Matrix notations
At the turn of the 19th century, Carl Friedrich Gauss developed the identity sign for the congruence relation and, in his work on quadratic reciprocity, the integral part symbol. Gauss contributed to the theory of functions of complex variables, to geometry, and to the convergence of series. He gave the first satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law. Gauss developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.[76] He would also develop the product sign. Around this time, Niels Henrik Abel and Évariste Galois[note 51] conducted their work on the solvability of equations, linking group theory and field theory.
In the early 1800s, Christian Kramp would promote factorial notation during his research into the generalized factorial function, which applied to non-integers.[77] Joseph Diaz Gergonne introduced the set inclusion signs.[note 52] Peter Gustav Lejeune Dirichlet developed Dirichlet L-functions to give the proof of Dirichlet's theorem on arithmetic progressions and began analytic number theory.[note 53] In 1828, Gauss proved his Theorema Egregium (remarkable theorem in Latin), establishing an important property of surfaces. In the 1830s, George Green developed Green's function. In 1829, Carl Gustav Jacob Jacobi published Fundamenta nova theoriae functionum ellipticarum with his elliptic theta functions. By 1841, Karl Weierstrass, the "father of modern analysis", elaborated on the concept of absolute value and the determinant of a matrix.
Matrix notation would be more fully developed by Arthur Cayley in his three papers, on subjects which had been suggested by reading the Mécanique analytique[78] of Lagrange and some of the works of Laplace. Cayley defined matrix multiplication and matrix inverses. Cayley used a single letter to denote a matrix,[79] thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants,[80] and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[81]
William Rowan Hamilton would introduce the nabla symbol[note 55] for vector differentials.[82][83] This was previously used by Hamilton as a general-purpose operator sign.[84] Hamilton reformulated Newtonian mechanics, now called Hamiltonian mechanics. This work has proven central to the modern study of classical field theories such as electromagnetism. This was also important to the development of quantum mechanics.[note 56] In mathematics, he is perhaps best known as the inventor of quaternion notation[note 57] and biquaternions. Hamilton also introduced the word "tensor" in 1846.[85][note 58] James Cockle would develop the tessarines[note 59] and, in 1849, coquaternions. In 1848, James Joseph Sylvester introduced into matrix algebra the term matrix.[note 60]
Maxwell, Clifford, and Ricci notations
James Clerk Maxwell
Maxwell's most prominent achievement was to formulate a set of equations that united previously unrelated observations, experiments, and equations of electricity, magnetism, and optics into a consistent theory.[86]
In 1864 James Clerk Maxwell reduced all of the then current knowledge of electromagnetism into a linked set of differential equations with 20 equations in 20 variables, contained in "A Dynamical Theory of the Electromagnetic Field".[87] The method of calculation which it is necessary to employ was given by Lagrange, and afterwards developed, with some modifications, by Hamilton's equations. It is usually referred to as Hamilton's principle; when the equations in the original form are used they are known as Lagrange's equations. In 1871, he presented the Remarks on the mathematical classification of physical quantities.[88] Also in 1871, Richard Dedekind called a set of real or complex numbers which is closed under the four arithmetic operations a "field".
In 1878, William Kingdon Clifford published his Elements of Dynamic.[89] Clifford would develop split-biquaternions,[note 61] which he called algebraic motors. Clifford obviated the need for quaternion study by separating the dot product and cross product of two vectors from the complete quaternion notation.[note 62] This approach made vector calculus available to engineers and others working in three dimensions and skeptical of the lead–lag effect[note 63] in the fourth dimension.[note 64] Between 1880 and 1887, Oliver Heaviside developed the operational calculus[90] (involving the D notation for the differential operator, which he is credited with creating), a method of solving differential equations by transforming them into ordinary algebraic equations, which caused a great deal of controversy when introduced, owing to the lack of rigour in his derivation of it.[note 65] The common vector notations are used when working with vectors, which are spatial or more abstract members of vector spaces. The angle notation (or phasor notation) is a notation used in electronics.
In 1881, Leopold Kronecker defined what he called a "domain of rationality", which is a field extension of the field of rational numbers in modern terms.[91] In 1882, Hüseyin Tevfik Paşa (tr) wrote the book titled "Linear Algebra".[92][93] Lord Kelvin's aetheric atom theory (1860s) led Peter Guthrie Tait, in 1885, to publish a topological table of knots with up to ten crossings known as the Tait conjectures. In 1893, Heinrich M. Weber gave the clear definition of an abstract field.[note 66] Tensor calculus was developed by Gregorio Ricci-Curbastro between 1887–96, presented in 1892 under the title absolute differential calculus,[94] and the contemporary usage of "tensor" was stated by Woldemar Voigt in 1898.[95] In 1895, Henri Poincaré published Analysis Situs.[96] In 1897, Charles Proteus Steinmetz would publish Theory and Calculation of Alternating Current Phenomena, with the assistance of Ernst J. Berg.[97]
From formula mathematics to tensors
In 1895 Giuseppe Peano issued his Formulario mathematico,[98] an effort to digest mathematics into terse text based on special symbols. He would provide a definition of a vector space and linear map. He would also introduce the intersection sign, the union sign, the membership sign (is an element of), and the existential quantifier[note 68] (there exists). Peano would pass to Bertrand Russell his work in 1900 at a Paris conference; it so impressed Russell that Russell too was taken with the drive to render mathematics more concisely. The result was Principia Mathematica written with Alfred North Whitehead. This treatise marks a watershed in modern literature where symbolism became dominant.[note 69] Ricci-Curbastro and Tullio Levi-Civita popularized the tensor index notation around 1900.[99]
Mathematical logic and abstraction
At the beginning of this period, Felix Klein's "Erlangen program" identified the underlying theme of various geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra. Georg Cantor[note 70] would introduce the aleph symbol for cardinal numbers of transfinite sets.[note 71] His notation for the cardinal numbers was the Hebrew letter ℵ (aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω (omega). This notation is still in use today in ordinal notation of a finite sequence of symbols from a finite alphabet which names an ordinal number according to some scheme which gives meaning to the language. His theory created a great deal of controversy. Cantor would, in his study of Fourier series, consider point sets in Euclidean space.
After the turn of the 20th century, Josiah Willard Gibbs would, in physical chemistry, introduce the middle dot for the dot product and the multiplication sign for cross products. He would also supply notation for the scalar and vector products, which was introduced in Vector Analysis. In 1904, Ernst Zermelo promoted the axiom of choice and gave his proof of the well-ordering theorem.[100] Bertrand Russell would shortly afterward introduce logical disjunction (OR) in 1906. Also in 1906, Poincaré would publish On the Dynamics of the Electron[101] and Maurice Fréchet introduced the concept of a metric space.[102] Later, Gerhard Kowalewski and Cuthbert Edmund Cullis[103][104][105] would successively introduce matrix notation: parenthetical matrices and box matrices, respectively. After 1907, mathematicians[note 72] studied knots from the point of view of the knot group and invariants from homology theory.[note 73] In 1908, Joseph Wedderburn's structure theorems were formulated for finite-dimensional algebras over a field. Also in 1908, Ernst Zermelo proposed the "definite" property and the first axiomatic set theory, Zermelo set theory. In 1910 Ernst Steinitz published the influential paper Algebraic Theory of Fields.[note 74][note 75] In 1911, Steinmetz would publish Theory and Calculation of Transient Electric Phenomena and Oscillations.
Albert Einstein in 1921
Albert Einstein, in 1916, introduced the Einstein notation,[note 76] which sums over a set of indexed terms in a formula, thus achieving notational brevity. Arnold Sommerfeld would create the contour integral sign in 1917. Also in 1917, Dimitry Mirimanoff proposed the axiom of regularity. In 1919, Theodor Kaluza would solve the general relativity equations using five dimensions, with the electromagnetic equations emerging as a result.[106] This would be published in 1921 in "Zum Unitätsproblem der Physik".[107] In 1922, Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Also in 1922, Zermelo–Fraenkel set theory was developed. In 1923, Steinmetz would publish Four Lectures on Relativity and Space. Around 1924, Jan Arnoldus Schouten would develop the modern notation and formalism for the Ricci calculus framework during the absolute differential calculus applications to general relativity and differential geometry in the early twentieth century.[note 77][108][109][110] In 1925, Enrico Fermi would describe a system comprising many identical particles that obey the Pauli exclusion principle, afterwards developing a diffusion equation (Fermi age equation). In 1926, Oskar Klein would develop the Kaluza–Klein theory. In 1928, Emil Artin abstracted ring theory with Artinian rings. In 1933, Andrey Kolmogorov introduced the Kolmogorov axioms. In 1937, Bruno de Finetti deduced the "operational subjective" concept.
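A minimal example of the convention: a repeated index implies summation over it, so

    a_\mu x^\mu \equiv \sum_{\mu} a_\mu x^\mu

which removes explicit summation signs from tensor formulas.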
Mathematical symbolism
Mathematical abstraction began as a process of extracting the underlying essence of a mathematical concept,[111][112] removing any dependence on real world objects with which it might originally have been connected,[113] and generalizing it so that it has wider applications or matches other abstract descriptions of equivalent phenomena. Two abstract areas of modern mathematics are category theory and model theory. Bertrand Russell[114] said, "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say". One can, however, substitute mathematics for real world objects, wander off through equation after equation, and build a conceptual structure which has no relation to reality.[115]
Symbolic logic studies the purely formal properties of strings of symbols. The interest in this area springs from two sources. First, the notation used in symbolic logic can be seen as representing the words used in philosophical logic. Second, the rules for manipulating symbols found in symbolic logic can be implemented on a computing machine. Symbolic logic is usually divided into two subfields, propositional logic and predicate logic. Other logics of interest include temporal logic, modal logic and fuzzy logic. The area of symbolic logic called propositional logic, also called propositional calculus, studies the properties of sentences formed from constants[note 78] and logical operators. The corresponding logical operations are known, respectively, as conjunction, disjunction, material conditional, biconditional, and negation. These operators are denoted as keywords[note 79] and by symbolic notation.
Some of the mathematical logic notation introduced during this time included the set of symbols used in Boolean algebra. This was created by George Boole in 1854. Boole himself did not see logic as a branch of mathematics, but it has come to be encompassed anyway. Symbols found in Boolean algebra include ∧ (AND), ∨ (OR), and ¬ (NOT). With these symbols, and letters to represent different truth values, one can make logical statements such as a ∨ ¬a = 1, that is "(a is true OR a is NOT true) is true", meaning it is true that a is either true or not true (i.e. false). Boolean algebra has many practical uses as it is, but it also was the start of what would be a large set of symbols to be used in logic.[note 80] Predicate logic, originally called predicate calculus, expands on propositional logic by the introduction of variables[note 81] and by sentences containing variables, called predicates.[note 82] In addition, predicate logic allows quantifiers.[note 83] With these logic symbols and additional quantifiers from predicate logic,[note 84] valid proofs can be made that are highly artificial,[note 85] but syntactically valid.[note 86]
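As a minimal sketch (illustrative only, not from the source), such a statement can be checked mechanically by enumerating every truth assignment; the identity a ∨ ¬a = 1 holds under both assignments of a:

```python
# Verify the Boolean tautology a OR (NOT a) by enumerating both truth values of a.
tautology = all((a or not a) for a in (False, True))
print(tautology)   # True: the statement holds under every assignment
```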
Gödel incompleteness notation
While proving his incompleteness theorems,[note 87] Kurt Gödel created an alternative to the symbols normally used in logic. He used Gödel numbers: numbers that represented operations with set numbers, and variables with the prime numbers greater than 10. With Gödel numbers, logic statements can be broken down into a number sequence. Gödel then took this one step farther, taking the first n prime numbers and putting them to the power of the numbers in the sequence. These numbers were then multiplied together to get the final product, giving every logic statement its own number.[117][note 88]
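The scheme can be sketched in a few lines (an illustrative sketch, not from the source; sympy.prime is used here only as a convenient way to obtain the i-th prime):

```python
from sympy import prime  # prime(1) = 2, prime(2) = 3, prime(3) = 5, ...

def godel_number(codes):
    """Encode a sequence of symbol codes as the product of prime(i) ** codes[i]."""
    n = 1
    for i, code in enumerate(codes, start=1):
        n *= prime(i) ** code
    return n

# The worked sequence from note 88, encoding the statement (∃x)(x = ¬y):
print(godel_number([8, 4, 11, 9, 8, 11, 5, 1, 13, 9]))   # ≈ 3.096 × 10^78
```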
Contemporary notation and topics
Early 20th-century notation
Abstraction of notation is an ongoing process, and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. Various set notations would be developed for fundamental object sets. Around 1924, David Hilbert and Richard Courant published "Methods of mathematical physics. Partial differential equations".[118] In 1926, Oskar Klein and Walter Gordon proposed the Klein–Gordon equation to describe relativistic particles.[note 89] The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Adrien Maurice Dirac, who, during the 1920s, was first able to compute the coefficient of spontaneous emission of an atom.[119] In 1928, the relativistic Dirac equation was formulated by Dirac to explain the behavior of the relativistically moving electron.[note 90] Dirac described the quantification of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, and Werner Heisenberg, and an elegant formulation of quantum electrodynamics due to Enrico Fermi,[120] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles.
In 1931, Alexandru Proca developed the Proca equation (an Euler–Lagrange equation)[note 91] for the vector meson theory of nuclear forces and the relativistic quantum field equations. In 1937, John Archibald Wheeler developed the S-matrix. Studies by Felix Bloch with Arnold Nordsieck,[121] and Victor Weisskopf,[122] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[123] At higher orders in the series, infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.
In the 1930s, the double-struck capital Z for integer number sets was created by Edmund Landau. Nicolas Bourbaki created the double-struck capital Q for rational number sets. In 1935, Gerhard Gentzen introduced the universal quantifier. In 1936, Tarski's undefinability theorem was stated and proved by Alfred Tarski.[note 92] In 1938, Gödel proposed the constructible universe in the paper "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". André Weil and Nicolas Bourbaki would develop the empty set sign in 1939. That same year, Nathan Jacobson would coin the double-struck capital C for complex number sets.
Around the 1930s, Voigt notation[note 93] would be developed for multilinear algebra as a way to represent a symmetric tensor by reducing its order. Schönflies notation[note 94] became one of two conventions used to describe point groups (the other being Hermann–Mauguin notation). Also in this time, van der Waerden notation[124][125] became popular for the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. Arend Heyting would introduce Heyting algebra and Heyting arithmetic.
The arrow, e.g., →, was developed for function notation in 1936 by Øystein Ore to denote images of specific elements.[note 95][note 96] Later, in 1940, it took its present form, e.g., f: X → Y, through the work of Witold Hurewicz. Werner Heisenberg, in 1941, proposed the S-matrix theory of particle interactions.
Paul Dirac, pictured here, made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics.
Bra–ket notation (Dirac notation) is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals. It is so called because the inner product (or dot product on a complex vector space) of two states is denoted by a bra|ket[note 97] consisting of a left part, ⟨φ|, and a right part, |ψ⟩. The notation was introduced in 1939 by Paul Dirac,[126] though the notation has precursors in Grassmann's use of the notation [φ|ψ] for his inner products nearly 100 years previously.[127]
Bra–ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics (including a large portion of modern physics) is usually explained with the help of bra–ket notation. The notation encodes an abstract representation-independence, producing a specific representation (e.g., the x, p, or eigenfunction basis) without much ado, or excessive reliance on, the nature of the linear spaces involved. The overlap expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. The Feynman slash notation (Dirac slash notation[128]) was developed by Richard Feynman for the study of Dirac fields in quantum field theory.
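As a minimal numerical sketch (not from the source; the two states below are made-up vectors in C²), the overlap ⟨φ|ψ⟩ is the conjugate of φ dotted with ψ, and its squared modulus gives the associated probability:

```python
import numpy as np

phi = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])   # |phi>, a made-up normalized state
psi = np.array([1.0 + 0j, 0.0 + 0j])                # |psi>, another made-up state

overlap = np.vdot(phi, psi)        # np.vdot conjugates its first argument: <phi|psi>
probability = abs(overlap) ** 2    # |<phi|psi>|^2
print(overlap, probability)        # (0.7071...+0j) 0.4999...
```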
In 1948, Valentine Bargmann and Eugene Wigner proposed the relativistic Bargmann–Wigner equations to describe free particles and the equations are in the form of multi-component spinor field wavefunctions. In 1950, William Vallance Douglas Hodge presented "The topological invariants of algebraic varieties" at the Proceedings of the International Congress of Mathematicians. Between 1954 and 1957, Eugenio Calabi worked on the Calabi conjecture for Kähler metrics and the development of Calabi–Yau manifolds. In 1957, Tullio Regge formulated the mathematical property of potential scattering in the Schrödinger equation.[note 98] Stanley Mandelstam, along with Regge, did the initial development of the Regge theory of strong interaction phenomenology. In 1958, Murray Gell-Mann and Richard Feynman, along with George Sudarshan and Robert Marshak, deduced the chiral structures of the weak interaction in physics. Geoffrey Chew, along with others, would promote matrix notation for the strong interaction, and the associated bootstrap principle, in 1960. In the 1960s, set-builder notation was developed for describing a set by stating the properties that its members must satisfy. Also in the 1960s, tensors are abstracted within category theory by means of the concept of monoidal category. Later, multi-index notation eliminates conventional notions used in multivariable calculus, partial differential equations, and the theory of distributions, by abstracting the concept of an integer index to an ordered tuple of indices.
Modern mathematical notation
In the modern mathematics of special relativity, electromagnetism and wave theory, the d'Alembert operator[note 99][note 100] is the Laplace operator of Minkowski space. The Levi-Civita symbol[note 101] is used in tensor calculus.
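As an illustrative sketch (not from the source), the Levi-Civita symbol ε_ijk can be stored as a rank-3 array and used, for example, to express the cross product (a × b)_i = ε_ijk a_j b_k:

```python
import numpy as np

# Build eps[i, j, k]: +1 for even permutations of (0, 1, 2), -1 for odd, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[i, k, j] = -1.0   # odd permutations

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
cross = np.einsum('ijk,j,k->i', eps, a, b)   # contract the repeated indices j and k
print(cross, np.cross(a, b))                 # both give [0. 0. 1.]
```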
After the full Lorentz covariance formulations that were finite at any order in a perturbation series of quantum electrodynamics, Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965.[129] Their contributions, and those of Freeman Dyson, were about covariant and gauge invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Quantum electrodynamics has served as the model and template for subsequent quantum field theories. Building on work by Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. In the late 1960s, the particle zoo was composed of the then known elementary particles, before the discovery of quarks.
Standard model of elementary particles.
The fundamental fermions and the fundamental bosons. (c.2008)[note 102] Based on the proprietary publication, Review of Particle Physics.[note 103]
A step towards the Standard Model was Sheldon Glashow's discovery, in 1960, of a way to combine the electromagnetic and weak interactions.[130] In 1967, Steven Weinberg[131] and Abdus Salam[132] incorporated the Higgs mechanism[133][134][135] into Glashow's electroweak theory, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons. Also in 1967, Bryce DeWitt published his equation under the name "Einstein–Schrödinger equation" (later renamed the "Wheeler–DeWitt equation").[136] In 1969, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind described space and time in terms of strings. In 1970, Pierre Ramond developed two-dimensional supersymmetries. Michio Kaku and Keiji Kikkawa would afterwards formulate string variations. In 1972, Michael Artin, Alexandre Grothendieck, and Jean-Louis Verdier proposed the Grothendieck universe.[137]
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973,[138][139][140][141] the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74. The establishment of quantum chromodynamics finalized the set of fundamental and exchange particles, which allowed for the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravity, and which remains generally accepted within the domain to which it is designed to be applied. In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. The orbifold notation system, invented by Thurston, has been developed for representing types of symmetry groups in two-dimensional spaces of constant curvature. In 1978, Shing-Tung Yau proved the Calabi conjecture, showing that the relevant Kähler manifolds admit Ricci-flat metrics. In 1979, Daniel Friedan showed that the equations of motion of string theory are abstractions of the Einstein equations of general relativity.
The first superstring revolution is composed of mathematical equations developed between 1984 and 1986. In 1984, Vaughan Jones deduced the Jones polynomial, and subsequent contributions from Edward Witten, Maxim Kontsevich, and others revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. According to string theory, all particles in the "particle zoo" have a common ancestor, namely a vibrating string. In 1985, Philip Candelas, Gary Horowitz,[142] Andrew Strominger, and Edward Witten would publish "Vacuum configurations for superstrings".[143] Later, the tetrad formalism (tetrad index notation) would be introduced as an approach to general relativity that replaces the choice of a coordinate basis by the less restrictive choice of a local basis for the tangent bundle.[note 104][144]
In the 1990s, Roger Penrose would propose Penrose graphical notation (tensor diagram notation) as a, usually handwritten, visual depiction of multilinear functions or tensors.[145] Penrose would also introduce abstract index notation.[note 105] In 1995, Edward Witten suggested M-theory and subsequently used it to explain some observed dualities, initiating the second superstring revolution.[note 106]
John H. Conway, a prolific mathematician of notation.
John Conway would further various notations, including the Conway chained arrow notation, the Conway notation of knot theory, and the Conway polyhedron notation. The Coxeter notation system classifies symmetry groups, describing the angles between the fundamental reflections of a Coxeter group. It uses a bracketed notation, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and Norman Johnson defined it more comprehensively.
Combinatorial LCF notation[note 107] has been developed for the representation of cubic graphs that are Hamiltonian.[146][147] Cycle notation is the convention for writing down a permutation in terms of its constituent cycles.[148] This is also called circular notation, and the permutation is called a cyclic or circular permutation.[149]
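A short sketch of the convention (illustrative only, not from the source): a permutation given as a list of images can be decomposed into its constituent cycles by following each element until its orbit closes.

```python
def cycles(perm):
    """Return the cycle decomposition of a permutation of {0, ..., n-1}."""
    seen, result = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        if len(cycle) > 1:            # fixed points are conventionally omitted
            result.append(tuple(cycle))
    return result

# The permutation 0->1, 1->2, 2->0, 3->4, 4->3 written in cycle notation:
print(cycles([1, 2, 0, 4, 3]))        # [(0, 1, 2), (3, 4)]
```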
Computers and markup notation
In 1931, IBM produced the IBM 601 Multiplying Punch, an electromechanical machine that could read two numbers, up to 8 digits long, from a card and punch their product onto the same card.[150] In 1934, Wallace Eckert used a rigged IBM 601 Multiplying Punch to automate the integration of differential equations.[151] In 1936, Alan Turing published "On Computable Numbers, With an Application to the Entscheidungsproblem".[152][note 108] John von Neumann, pioneer of the digital computer and of computer science,[note 109] in 1945 wrote the incomplete First Draft of a Report on the EDVAC. In 1962, Kenneth E. Iverson developed an integral part notation, which became known as Iverson notation, for manipulating arrays; he taught it to his students and described it in his book A Programming Language. In 1970, E. F. Codd proposed relational algebra as a relational model of data for database query languages. In 1971, Stephen Cook published "The complexity of theorem proving procedures".[153] In the 1970s, within computer architecture, quote notation was developed as a representation of the rational numbers. Also in this decade, the Z notation (just like the APL language, long before it) uses many non-ASCII symbols; the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are presently various C mathematical functions (math.h) and numerical libraries, which are used in software development for performing numerical calculations. These calculations can be handled by symbolic execution: analyzing a program to determine what inputs cause each part of it to execute. Mathematica and SymPy are examples of computational software programs based on symbolic mathematics.
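For instance (a minimal sketch, not from the source), a few standard SymPy calls illustrate the symbolic style of computation mentioned above:

```python
import sympy as sp

x = sp.Symbol('x')
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))   # 1
print(sp.diff(x * sp.exp(x), x))                  # x*exp(x) + exp(x)
print(sp.integrate(1 / x, x))                     # log(x)
```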
Future of mathematical notation
Main article: Future of mathematics
A section of a quintic Calabi–Yau three-fold (3D projection); recalling atomic vortex theory.
In the history of mathematical notation, ideographic symbol notation has come full circle with the rise of computer visualization systems. The notations can be applied to abstract visualizations, such as for rendering some projections of a Calabi-Yau manifold. Examples of abstract visualization which properly belong to the mathematical imagination can be found in computer graphics. The need for such models abounds, for example, when the measures for the subject of study are actually random variables and not really ordinary mathematical functions.
See also
Main relevance
Abuse of notation, Well-formed formula, Big O notation (L-notation), Dowker notation, Hungarian notation, Infix notation, Positional notation, Polish notation (Reverse Polish notation), Sign-value notation, Subtractive notation, History of writing numbers
Numbers and quantities
List of numbers, Irrational and suspected irrational numbers, γ, ζ(3), √2, √3, √5, φ, ρ, δS, α, e, π, δ, Physical constants, c, ε0, h, G, Greek letters used in mathematics, science, and engineering
General relevance
Order of operations, Scientific notation (Engineering notation), Actuarial notation
Dot notation
Chemical notation (Lewis dot notation (Electron dot notation)), Dot-decimal notation
Arrow notation
Knuth's up-arrow notation, infinitary combinatorics (Arrow notation (Ramsey theory))
Projective geometry, Affine geometry, Finite geometry
Lists and outlines
Outline of mathematics (Mathematics history topics and Mathematics topics (Mathematics categories)), Mathematical theories ( First-order theories, Theorems and Disproved mathematical ideas), Mathematical proofs (Incomplete proofs), Mathematical identities, Mathematical series, Mathematics reference tables, Mathematical logic topics, Mathematics-based methods, Mathematical functions, Transforms and Operators, Points in mathematics, Mathematical shapes, Knots (Prime knots and Mathematical knots and links), Inequalities, Mathematical concepts named after places, Mathematical topics in classical mechanics, Mathematical topics in quantum theory, Mathematical topics in relativity, String theory topics, Unsolved problems in mathematics, Mathematical jargon, Mathematical examples, Mathematical abbreviations, List of mathematical symbols
Hilbert's problems, Mathematical coincidence, Chess notation, Line notation, Musical notation (Dotted note), Whyte notation, Dice notation, recursive categorical syntax
Mathematicians (Amateur mathematicians and Female mathematicians), Thomas Bradwardine, Thomas Harriot, Felix Hausdorff, Gaston Julia, Helge von Koch, Paul Lévy, Aleksandr Lyapunov, Benoit Mandelbrot, Lewis Fry Richardson, Wacław Sierpiński, Saunders Mac Lane, Paul Cohen, Gottlob Frege, G. S. Carr, Robert Recorde, Bartel Leendert van der Waerden, G. H. Hardy, E. M. Wright, James R. Newman, Carl Gustav Jacob Jacobi, Roger Joseph Boscovich, Eric W. Weisstein, Mathematical probabilists, Statisticians
Notes
1. ^ Or the Middle Ages.
2. ^ Such characters, in fact, are preserved with little alteration in the Roman notation, an account of which may be found in John Leslie's Philosophy of Arithmetic.
3. ^ Number theory is branch of pure mathematics devoted primarily to the study of the integers. Number theorists study prime numbers as well as the properties of objects made out of integers (e.g., rational numbers) or defined as generalizations of the integers (e.g., algebraic integers).
4. ^ Greek: μή μου τοὺς κύκλους τάραττε
5. ^ That is, a^2 + b^2 = c^2.
6. ^ Magnitude (mathematics), the relative size of an object; Magnitude (vector), a term for the size or length of a vector; Scalar (mathematics), a quantity defined only by its magnitude; Euclidean vector, a quantity defined by both its magnitude and its direction; Order of magnitude, the class of scale having a fixed value ratio to the preceding class.
7. ^ Autolycus' On the Moving Sphere is another ancient mathematical manuscript of the time.
9. ^ The expression:
would be written as:
SS2 C3 x5 M S4 u6
.[citation needed]
10. ^ such as the rule, square, compasses, water level (reed level), and plumb-bob.
11. ^ such as the wheel and axle
12. ^ The area of the square described on the hypotenuse of a right-angled triangle is equal to the sum of the areas of the squares described on the sides
13. ^ Al-Kindi also introduced cryptanalysis and frequency analysis.
14. ^ Something close to a proof by mathematical induction appears in a book written by Al-Karaji around 1000 AD, who used it to prove the binomial theorem, Pascal's triangle, and the sum of integral cubes.
15. ^ He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned with any polynomials higher than the fourth degree.
16. ^ a book about what he perceived as flaws in Euclid's Elements, especially the parallel postulate
17. ^ translated into Latin by Robert of Chester
18. ^ translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona
19. ^ His own personal use started around 1351.
20. ^ Summa de Arithmetica: Geometria Proportioni et Proportionalita. Tr. Sum of Arithmetic: Geometry in proportions and proportionality.
21. ^ Much of the work originated from Piero Della Francesca, whose work he appropriated and purloined.
22. ^ This was a special case of the methods given many centuries later by Ruffini and Horner.
23. ^ That is, \sqrt{~}.
24. ^ Because, it is thought, it resembled a lowercase "r" (for "radix").
25. ^ Published in Description of the Marvelous Canon of Logarithms
26. ^ That is,
27. ^ see Law of Continuity.
28. ^ Using Cartesian coordinates on the plane, the distance between two points (x1, y1) and (x2, y2) is defined by the formula:
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2},\!
which can be viewed as a version of the Pythagorean theorem.
30. ^ Now called Pascal's triangle.
31. ^ For example, the "problem of points".
32. ^ That is, {\infty}.
33. ^ For example, \frac{1}{\infty}.
34. ^ Original title, "De ratiociniis in ludo aleae"
35. ^ For example, the derivative of the function x would be written as \dot{x}. The second derivative of x would be written as \ddot{x}, etc.
36. ^ For example, the derivative of the function x with respect to the variable t in Leibniz's notation would be written as { dx \over dt }.
37. ^ That is, \int_{-N}^{N} f(x)\, dx.
38. ^ See also: List of representations of e
39. ^ Thus f(x) denotes the mathematical result of the performance of the operation f upon the subject x. If upon this result the same operation were repeated, the new result would be expressed by f[f(x)], or more concisely by f^2(x), and so on. The quantity x itself is regarded as the result of the same operation f upon some other function; the proper symbol for which is, by analogy, f^{-1} (x). Thus f and f^{-1} are symbols of inverse operations, the former cancelling the effect of the latter on the subject x. f(x) and f^{-1} (x) in a similar manner are termed inverse functions.
40. ^ That is, \Delta f(p)
41. ^ See also: Non-SI units mentioned in the SI
42. ^ That is, \sqrt{-1}
43. ^ Today, the symbol created by John Wallis, \infty, is used for infinity.
44. ^ As in, \sum_{n=1}^\infty\frac{1}{n^2}
45. ^ Capital-sigma notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ∑, an enlarged form of the upright capital Greek letter Sigma. This is defined as:
\sum_{i=m}^n a_i = a_m + a_{m+1} + a_{m+2} +\cdots+ a_{n-1} + a_n.
Where, i represents the index of summation; ai is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by 1 for each successive term, stopping when i = n.
46. ^ That is, n!=\int_{0}^{1}(-\ln s)^{n}\,{\rm d}s\,,.
valid for n > 0.
47. ^ That is,
48. ^ Proportionality is the ratio of one quantity to another, especially the ratio of a part compared to a whole. In a mathematical context, a proportion is the statement of equality between two ratios; See Proportionality (mathematics), the relationship of two variables whose ratio is constant. See also aspect ratio, geometric proportions.
49. ^ The curly d or Jacobi's delta.
50. ^ About the proof of Wilson's theorem. Disquisitiones Arithmeticae (1801) Article 76
51. ^ Galois theory and Galois geometry is named after him.
52. ^ That is, "subset of" and "superset of"; This would later be redeveloped by Ernst Schröder.
53. ^ A science of numbers that uses methods from mathematical analysis to solve problems about the integers.
54. ^ quoted in Robert Percival Graves' "Life of Sir William Rowan Hamilton" (3 volumes, 1882, 1885, 1889)
55. ^ That is, \nabla (or, later called del, ∇)
56. ^ See Hamiltonian (quantum mechanics).
57. ^ That is, i^2=j^2=k^2=ijk=-1
58. ^ Though his use describes something different from what is now meant by a tensor. Namely, the norm operation in a certain type of algebraic system (now known as a Clifford algebra).
59. ^ That is,
t = w + x i + y j + z k, \quad w, x, y, z \in \mathbb{R}
i j = j i = k, \quad i^2 = -1, \quad j^2 = +1 .
60. ^ This is Latin for "womb".
61. ^ That is, q = w + xi + yj + zk \!
62. ^ Clifford intersected algebra with Hamilton's quaternions by replacing Hermann Grassmann's rule e_p e_p = 0 by the rule e_p e_p = 1. For more details, see exterior algebra.
63. ^ See: Phasor, Group (mathematics), Signal velocity, Polyphase system, Harmonic oscillator, and RLC series circuit
64. ^ Or the concept of a fourth spatial dimension. See also: Spacetime, the unification of time and space as a four-dimensional continuum; and, Minkowski space, the mathematical setting for special relativity.
66. ^ See also: Mathematic fields and Field extension
67. ^ Comment after the proof that 1+1=2, completed in Principia mathematica, by Alfred North Whitehead ... and Bertrand Russell. Volume II, 1st edition (1912)
68. ^ This raises questions of the pure existence theorems.
69. ^ Peano's Formulario Mathematico, though less popular than Russell's work, continued through five editions. The fifth appeared in 1908 and included 4200 formulas and theorems.
70. ^ Inventor of set theory
71. ^ Transfinite arithmetic is the generalization of elementary arithmetic to infinite quantities like infinite sets; See Transfinite numbers, Transfinite induction, and Transfinite interpolation. See also Ordinal arithmetic.
72. ^ Such as Max Dehn, J. W. Alexander, and others.
73. ^ Such as the Alexander polynomial.
74. ^ (German: Algebraische Theorie der Körper)
75. ^ In this paper Steinitz axiomatically studied the properties of fields and defined many important field theoretic concepts like prime field, perfect field and the transcendence degree of a field extension.
76. ^ The indices range over set {1, 2, 3},
y = \sum_{i=1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3
is reduced by the convention to:
y = c_i x^i \,.
Upper indices are not exponents but are indices of coordinates, coefficients or basis vectors.
See also: Ricci calculus
77. ^ Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. See also: Synge J.L., Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 6–108.
78. ^ Here a logical constant is a symbol in symbolic logic that has the same meaning in all models, such as the symbol "=" for "equals".
A constant, in a mathematical context, is a number that arises naturally in mathematics, such as π or e; the value of such a mathematical constant does not change. It can also mean the constant term of a polynomial (the term of degree 0) or the constant of integration, a free parameter arising in integration.
Relatedly, a physical constant is a physical quantity generally believed to be universal and unchanging. A programming constant is a value that, unlike a variable, cannot be reassociated with a different value.
79. ^ Though not an index term, keywords are terms that represent information. A keyword is a word with special meaning (this is a semantic definition), while syntactically these are terminal symbols in the phrase grammar. See reserved word for the related concept.
80. ^ Most of these symbols can be found in propositional calculus, a formal system described as \mathcal{L} = \mathcal{L}\ (\Alpha,\ \Omega,\ \Zeta,\ \Iota). \Alpha is the set of elements, such as the a in the example with Boolean algebra above. \Omega is the set that contains the subsets that contain operations, such as \lor or \land. \Zeta contains the inference rules, which are the rules dictating how inferences may be logically made, and \Iota contains the axioms. See also: Basic and Derived Argument Forms.
81. ^ Usually denoted by x, y, z, or other lowercase letters
Here, a symbol that represents a quantity in a mathematical expression; a mathematical variable as used in many sciences.
A variable can be a symbolic name associated with a value, where the associated value may be changed, known in computer science as a variable reference. A variable can also be the operationalized way in which an attribute is represented for further data processing (e.g., a logical set of attributes). See also: Dependent and independent variables in statistics.
82. ^ Usually denoted by an uppercase letter followed by a list of variables, such as P(x) or Q(y,z)
Here a mathematical logic predicate, a fundamental concept in first-order logic. Grammatical predicates are grammatical components of a sentence.
Related is the syntactic predicate in parser technology which are guidelines for the parser process. In computer programming, a branch predication allows a choice to execute or not to execute a given instruction based on the content of a machine register.
83. ^ Representing ALL and EXISTS
84. ^ e.g. ∃ for "there exists" and ∀ for "for all"
85. ^ See also: Dialetheism, Contradiction, and Paradox
86. ^ Related, facetious abstract nonsense describes certain kinds of arguments and methods related to category theory which resembles comical literary non sequitur devices (not illogical non sequiturs).
87. ^ Gödel's incompleteness theorems show that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a contested negative answer to Hilbert's second problem.
88. ^ For example, take the statement "There exists a number x such that it is not y". Using the symbols of propositional calculus, this would become: (\exists x)(x=\lnot y).
If the Gödel numbers replace the symbols, it becomes:\{8, 4, 11, 9, 8, 11, 5, 1, 13, 9\}.
There are ten numbers, so the ten prime numbers are found and these are: \{2, 3, 5, 7, 11, 13, 17, 19, 23, 29\}.
Then, the Gödel numbers are made the powers of the respective primes and multiplied, giving: 2^8\times3^4\times5^{11}\times7^9\times11^8\times13^{11}\times17^5\times19^1\times23^{13}\times29^9.
The resulting number is approximately 3.096262735\times10^{78}.
89. ^ The Klein–Gordon equation is:
\frac{1}{c^2}\frac{\partial^2\psi}{\partial t^2} - \nabla^2\psi + \frac{m^2 c^2}{\hbar^2}\psi = 0
90. ^ The Dirac equation in the form originally proposed by Dirac is:
\left(\beta mc^2 + \sum_{k = 1}^3 \alpha_k p_k \, c\right) \psi (\mathbf{x},t) = i \hbar \frac{\partial\psi(\mathbf{x},t) }{\partial t}
where, ψ = ψ(x, t) is the wave function for the electron, x and t are the space and time coordinates, m is the rest mass of the electron, p is the momentum, understood to be the momentum operator in the Schrödinger theory, c is the speed of light, and ħ = h/2π is the reduced Planck constant.
91. ^ That is,
\partial_\mu(\partial^\mu A^\nu - \partial^\nu A^\mu)+\left(\frac{mc}{\hbar}\right)^2 A^\nu=0
93. ^ Named to honor Voigt's 1898 work.
94. ^ Named after Arthur Moritz Schoenflies
95. ^ See Galois connections.
96. ^ Oystein Ore would also write "Number Theory and Its History".
97. ^ \langle\phi|\psi\rangle
98. ^ That the scattering amplitude can be thought of as an analytic function of the angular momentum, and that the positions of the poles determine power-law growth rates of the amplitude in the purely mathematical region of large values of the cosine of the scattering angle.
99. ^ That is, \scriptstyle\Box
100. ^ Also known as the d'Alembertian or wave operator.
101. ^ Also known as, "permutation symbol" (see: permutation), "antisymmetric symbol" (see: antisymmetric), or "alternating symbol"
102. ^ Note that the "masses" of particles are periodically reevaluated by the scientific community. The values may have been adjusted; adjustment here means operations carried out on an instrument so that it provides indications corresponding to given values of the measurand. In engineering, mathematics, and geodesy, this corresponds to the optimal estimation of the parameters of a mathematical model so as to best fit a data set.
103. ^ For the consensus, see Particle Data Group.
104. ^ A locally defined set of four linearly independent vector fields called a tetrad
105. ^ His usage of the Einstein summation was in order to offset the inconvenience in describing contractions and covariant differentiation in modern abstract tensor notation, while maintaining explicit covariance of the expressions involved.
106. ^ See also: String theory landscape and Swampland
107. ^ Devised by Joshua Lederberg and extended by Coxeter and Frucht
108. ^ And, in 1938, "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction" (Proceedings of the London Mathematical Society, 2 (1937) 43 (6): 544–6, doi:10.1112/plms/s2-43.6.544).
109. ^ Among von Neumann's other contributions include the application of operator theory to quantum mechanics, in the development of functional analysis, and on various forms of operator theory.
References and citations
1. ^ Florian Cajori. A History of Mathematical Notations: Two Volumes in One. Cosimo, Inc., Dec 1, 2011
2. ^ A Dictionary of Science, Literature, & Art, Volume 2. Edited by William Thomas Brande, George William Cox. Pg 683
3. ^ "Notation - from Wolfram MathWorld". Mathworld.wolfram.com. Retrieved 2014-06-24.
4. ^ Diophantos of Alexandria: A Study in the History of Greek Algebra. By Sir Thomas Little Heath. Pg 77.
5. ^ Mathematics: Its Power and Utility. By Karl J. Smith. Pg 86.
6. ^ The Commercial Revolution and the Beginnings of Western Mathematics in Renaissance Florence, 1300-1500. Warren Van Egmond. 1976. Page 233.
7. ^ Solomon Gandz. "The Sources of al-Khowarizmi's Algebra"
8. ^ Encyclopædia Americana. By Thomas Gamaliel Bradford. Pg 314
9. ^ Mathematical Excursion, Enhanced Edition: Enhanced Webassign Edition By Richard N. Aufmann, Joanne Lockwood, Richard D. Nation, Daniel K. Cleg. Pg 186
10. ^ Mathematics in Egypt and Mesopotamia[dead link]
11. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "Mesopotamia" p. 25.
13. ^ Aaboe, Asger (1998). Episodes from the Early History of Mathematics. New York: Random House. pp. 30–31.
14. ^ Heath. A Manual of Greek Mathematics. p. 5.
15. ^ Sir Thomas L. Heath, A Manual of Greek Mathematics, Dover, 1963, p. 1: "In the case of mathematics, it is the Greek contribution which it is most essential to know, for it was the Greeks who made mathematics a science."
16. ^ a b The new encyclopædia; or, Universal dictionary of arts and sciences. By Encyclopaedia Perthensi. Pg 49
20. ^ "Proclus' Summary". Gap.dcs.st-and.ac.uk. Retrieved 2014-06-24.
21. ^ Caldwell, John (1981) "The De Institutione Arithmetica and the De Institutione Musica", pp. 135–54 in Margaret Gibson, ed., Boethius: His Life, Thought, and Influence, (Oxford: Basil Blackwell).
22. ^ Folkerts, Menso, "Boethius" Geometrie II, (Wiesbaden: Franz Steiner Verlag, 1970).
23. ^ Mathematics and Measurement By Oswald Ashton Wentworth Dilk. Pg 14
24. ^ a b c d e A dictionary of science, literature and art, ed. by W.T. Brande. Pg 683
25. ^ Boyer, Carl B. A History of Mathematics, 2nd edition, John Wiley & Sons, Inc., 1991.
26. ^ Diophantine Equations. Submitted by: Aaron Zerhusen, Chris Rakes, & Shasta Meece. MA 330-002. Dr. Carl Eberhart. February 16, 1999.
27. ^ A History of Greek Mathematics: From Aristarchus to Diophantus. By Sir Thomas Little Heath. Pg 456
28. ^ A History of Greek Mathematics: From Aristarchus to Diophantus. By Sir Thomas Little Heath. Pg 458
29. ^ The American Mathematical Monthly, Volume 16. Pg 131
30. ^ "Overview of Chinese mathematics". Groups.dcs.st-and.ac.uk. Retrieved 2014-06-24.
31. ^ George Gheverghese Joseph, The Crest of the Peacock: Non-European Roots of Mathematics,Penguin Books, London, 1991, pp.140—148
32. ^ Georges Ifrah, Universalgeschichte der Zahlen, Campus, Frankfurt/New York, 1986, pp.428—437
33. ^ "Frank J. Swetz and T. I. Kao: Was Pythagoras Chinese?". Psupress.psu.edu. Retrieved 2014-06-24.
35. ^ Sal Restivo
36. ^ Marcel Gauchet, 151.
37. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "China and India" p. 221. (cf., "he was the first one to give a general solution of the linear Diophantine equation ax + by = c, where a, b, and c are integers. [...] It is greatly to the credit of Brahmagupta that he gave all integral solutions of the linear Diophantine equation, whereas Diophantus himself had been satisfied to give one particular solution of an indeterminate equation. Inasmuch as Brahmagupta used some of the same examples as Diophantus, we see again the likelihood of Greek influence in India – or the possibility that they both made use of a common source, possibly from Babylonia. It is interesting to note also that the algebra of Brahmagupta, like that of Diophantus, was syncopated. Addition was indicated by juxtaposition, subtraction by placing a dot over the subtrahend, and division by placing the divisor below the dividend, as in our fractional notation but without the bar. The operations of multiplication and evolution (the taking of roots), as well as unknown quantities, were represented by abbreviations of appropriate words.")
38. ^ Robert Kaplan, "The Nothing That Is: A Natural History of Zero", Allen Lane/The Penguin Press, London, 1999
39. ^ ""The ingenious method of expressing every possible number using a set of ten symbols (each symbol having a place value and an absolute value) emerged in India. The idea seems so simple nowadays that its significance and profound importance is no longer appreciated. Its simplicity lies in the way it facilitated calculation and placed arithmetic foremost amongst useful inventions. the importance of this invention is more readily appreciated when one considers that it was beyond the two greatest men of Antiquity, Archimedes and Apollonius." - Pierre-Simon Laplace". History.mcs.st-and.ac.uk. Retrieved 2014-06-24.
40. ^ A.P. Juschkewitsch, "Geschichte der Mathematik im Mittelalter", Teubner, Leipzig, 1964
41. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "The Arabic Hegemony" p. 230. (cf., "The six cases of equations given above exhaust all possibilities for linear and quadratic equations having positive root. So systematic and exhaustive was al-Khwārizmī's exposition that his readers must have had little difficulty in mastering the solutions.")
42. ^ Gandz and Saloman (1936), The sources of Khwarizmi's algebra, Osiris i, pp. 263–77: "In a sense, Khwarizmi is more entitled to be called "the father of algebra" than Diophantus because Khwarizmi is the first to teach algebra in an elementary form and for its own sake, Diophantus is primarily concerned with the theory of numbers".
43. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "The Arabic Hegemony" p. 229. (cf., "It is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the translation above. The word al-jabr presumably meant something like "restoration" or "completion" and seems to refer to the transposition of subtracted terms to the other side of an equation; the word muqabalah is said to refer to "reduction" or "balancing" - that is, the cancellation of like terms on opposite sides of the equation.")
44. ^ Rashed, R.; Armstrong, Angela (1994). The Development of Arabic Mathematics. Springer. pp. 11–12. ISBN 0-7923-2565-6. OCLC 29181926.
45. ^ Victor J. Katz (1998). History of Mathematics: An Introduction, pp. 255–59. Addison-Wesley. ISBN 0-321-01618-1.
46. ^ F. Woepcke (1853). Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi. Paris.
47. ^ Victor J. Katz (1995), "Ideas of Calculus in Islam and India", Mathematics Magazine 68 (3): 163–74.
48. ^ Marie-Thérèse d'Alverny, "Translations and Translators", pp. 421–62 in Robert L. Benson and Giles Constable, Renaissance and Renewal in the Twelfth Century, (Cambridge: Harvard University Press, 1982).
49. ^ Guy Beaujouan, "The Transformation of the Quadrivium", pp. 463–87 in Robert L. Benson and Giles Constable, Renaissance and Renewal in the Twelfth Century, (Cambridge: Harvard University Press, 1982).
50. ^ a b O'Connor, John J.; Robertson, Edmund F., "Abu'l Hasan ibn Ali al Qalasadi", MacTutor History of Mathematics archive, University of St Andrews .
51. ^ Boyer, C. B. A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, 1989 ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). "Revival and Decline of Greek Mathematics" p. 178 (cf., "The chief difference between Diophantine syncopation and the modern algebraic notation is the lack of special symbols for operations and relations, as well as of the exponential notation.")
52. ^ Grant, Edward and John E. Murdoch (1987), eds., Mathematics and Its Applications to Science and Natural Philosophy in the Middle Ages, (Cambridge: Cambridge University Press) ISBN 0-521-32260-X.
53. ^ Mathematical Magazine, Volume 1. Artemas Martin, 1887. Pg 124
54. ^ Der Algorismus proportionum des Nicolaus Oresme: Zum ersten Male nach der Lesart der Handschrift R.40.2. der Königlichen Gymnasial-bibliothek zu Thorn. Nicole Oresme. S. Calvary & Company, 1868.
55. ^ Clagett, Marshall (1961) The Science of Mechanics in the Middle Ages, (Madison: University of Wisconsin Press), pp. 332–45, 382–91.
56. ^ Later early modern version: A New System of Mercantile Arithmetic: Adapted to the Commerce of the United States, in Its Domestic and Foreign Relations with Forms of Accounts and Other Writings Usually Occurring in Trade. By Michael Walsh. Edmund M. Blunt (proprietor.), 1801.
57. ^ Miller, Jeff (4 June 2006). "Earliest Uses of Symbols of Operation". Gulf High School. Retrieved 24 September 2006.
58. ^ Arithmetical Books from the Invention of Printing to the Present Time. By Augustus De Morgan. p 2.
59. ^ Grattan-Guinness, Ivor (1997). The Rainbow of Mathematics: A History of the Mathematical Sciences. W.W. Norton. ISBN 0-393-32030-8.
60. ^ Arithmetica integra. By Michael Stifel, Philipp Melanchton. Norimbergæ: Apud Iohan Petreium, 1544.
61. ^ The History of Mathematics By Anne Roone. Pg 40
62. ^ Memoirs of John Napier of Merchiston. By Mark Napier
63. ^ An Account of the Life, Writings, and Inventions of John Napier, of Merchiston. By David Stewart Erskine Earl of Buchan, Walter Minto
64. ^ Florian Cajori (1919). A History of Mathematics. Macmillan.
65. ^ Jan Gullberg, Mathematics from the birth of numbers, W. W. Norton & Company; ISBN 978-0-393-04002-9 . pg 963-965,
66. ^ Synopsis Palmariorum Matheseos. By William Jones. 1706. (Alt: Synopsis Palmariorum Matheseos: or, a New Introduction to the Mathematics. archive.org.)
67. ^ When Less is More: Visualizing Basic Inequalities.By Claudi Alsina, Roger B. Nelse. Pg 18.
69. ^ The Compact Edition of the Oxford English Dictionary. Oxford University Press. 1971. p. 695.
70. ^ "The Oxford English Dictionary". Retrieved July 13, 2012.
72. ^ Euler, Leonhard, Solutio problematis ad geometriam situs pertinentis
73. ^ The elements of geometry. By William Emerson
74. ^ The Doctrine of Proportion, Arithmetical and Geometrical. Together with a General Method of Arening by Proportional Quantities. By William Emerson.
75. ^ The Mathematical Correspondent. By George Baron. 83
76. ^ Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Retrieved 2012-01-24.
77. ^ "Kramp biography". History.mcs.st-and.ac.uk. Retrieved 2014-06-24.
78. ^ Mécanique analytique: Volume 1, Volume 2. By Joseph Louis Lagrange. Ms. Ve Courcier, 1811.
79. ^ The collected mathematical papers of Arthur Cayley. Volume 11. Page 243.
80. ^ Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. By Ari Ben-Menahem. Pg 2070.
81. ^ Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Originally at: darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html
82. ^ The Words of Mathematics. By Steven Schwartzman. 6.
83. ^ Electro-Magnetism: Theory and Applications. By A. Pramanik. 38
84. ^ History of Nabla and Other Math Symbols. homepages.math.uic.edu/~hanson.
85. ^ Hamilton, William Rowan (1854–1855). Wilkins, David R., ed. "On some Extensions of Quaternions" (PDF). Philosophical Magazine (7–9): 492–499, 125–137, 261–269, 46–51, 280–290. ISSN 0302-7597.
86. ^ "James Clerk Maxwell". IEEE Global History Network. Retrieved 25 March 2013.
87. ^ Maxwell, James Clerk (1865). "A dynamical theory of the electromagnetic field" (PDF). Philosophical Transactions of the Royal Society of London 155: 459–512. doi:10.1098/rstl.1865.0008. (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
88. ^ Proceedings of the London Mathematical Society, Volume 3. London Mathematical Society, 1871. Pg. 224
89. ^ Books I, II, III (1878) on Internet Archive; Book IV (1887) on Internet Archive
90. ^ The Heaviside Operational Calculus www.quadritek.com/bstj/vol01-1922/articles/bstj1-2-43.pdf
91. ^ Cox, David A. (2012). Galois Theory. Pure and Applied Mathematics 106 (2nd ed.). John Wiley & Sons. p. 348. ISBN 1118218426.
92. ^ "TÜBİTAK ULAKBİM DergiPark". Journals.istanbul.edu.tr. Retrieved 2014-06-24.
93. ^ "Linear Algebra : Hussein Tevfik : Free Download & Streaming : Internet Archive". Archive.org. Retrieved 2014-06-24.
94. ^ Ricci Curbastro, G. (1892). "Résumé de quelques travaux sur les systèmes variables de fonctions associés à une forme différentielle quadratique". Bulletin des Sciences Mathématiques 2 (16): 167–189.
95. ^ Voigt, Woldemar (1898). Die fundamentalen physikalischen Eigenschaften der Krystalle in elementarer Darstellung. Leipzig: Von Veit.
96. ^ Poincaré, Henri, "Analysis situs", Journal de l'École Polytechnique ser 2, 1 (1895) pp. 1–123
97. ^ Whitehead, John B., Jr. (1901). "Review: Alternating Current Phenomena, by C. P. Steinmetz" (PDF). Bull. Amer. Math. Soc. (3rd ed.) 7 (9): 399–408. doi:10.1090/s0002-9904-1901-00825-7.
98. ^ There are many editions. Here are two:
99. ^ Ricci, Gregorio; Levi-Civita, Tullio (March 1900), "Méthodes de calcul différentiel absolu et leurs applications" (PDF), Mathematische Annalen (Springer) 54 (1–2): 125–201, doi:10.1007/BF01454201
100. ^ Zermelo, Ernst (1904). "Beweis, dass jede Menge wohlgeordnet werden kann" (REPRINT). Mathematische Annalen 59 (4): 514–16. doi:10.1007/BF01445300.
101. ^ Wikisource link to On the Dynamics of the Electron (July). Wikisource.
102. ^ Fréchet, Maurice, "Sur quelques points du calcul fonctionnel", PhD dissertation, 1906
103. ^ Cuthbert Edmund Cullis (Author) (2011-06-05). "Matrices and determinoids Volume 2: Cuthbert Edmund Cullis: Amazon.com: Books". Amazon.com. Retrieved 2014-06-24.
104. ^ Can be assigned a given matrix: About a class of matrices. (Gr. Ueber eine Klasse von Matrizen: die sich einer gegebenen Matrix zuordnen lassen.) by Isay Schur
105. ^ An Introduction To The Modern Theory Of Equations. By Florian Cajori.
106. ^ Proceedings of the Prussian Academy of Sciences (1918). Pg 966.
107. ^ Sitzungsberichte der Preussischen Akademie der Wissenschaften (1918) (Tr. Proceedings of the Prussian Academy of Sciences (1918)). archive.org; See also: Kaluza–Klein theory .
110. ^ Schouten, Jan A. (1924). R. Courant, ed. Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction in the latest methods and problems in multi-dimmensional differential geometry). Grundlehren der mathematischen Wissenschaften (in German) 10. Berlin: Springer Verlag.
113. ^ The Mathematical Principles of Natural Philosophy, Volume 1. By Sir Isaac Newton, John Machin. Pg 12.
114. ^ In The Scientific Outlook (1931)
115. ^ Mathematics simplified and made attractive: or, The laws of motion explained. By Thomas Fisher. Pg 15. (cf. But an abstraction not founded upon, and not consonant with Nature and (Logical) Truth, would be a falsity, an insanity.)
116. ^ Proposition VI, On Formally Undecidable Propositions in Principia Mathematica and Related Systems I (1931)
117. ^ Casti, John L. 5 Golden Rules. New York: MJF Books, 1996.
118. ^ Gr. Methoden Der Mathematischen Physik
119. ^ P.A.M. Dirac (1927). "The Quantum Theory of the Emission and Absorption of Radiation". Proceedings of the Royal Society of London A 114: 243–265. Bibcode:1927RSPSA.114..243D. doi:10.1098/rspa.1927.0039.
121. ^ F. Bloch; A. Nordsieck (1937). "Note on the Radiation Field of the Electron". Physical Review 52: 54–59. Bibcode:1937PhRv...52...54B. doi:10.1103/PhysRev.52.54.
123. ^ R. Oppenheimer (1930). "Note on the Theory of the Interaction of Field and Matter". Physical Review 35: 461–477. Bibcode:1930PhRv...35..461O. doi:10.1103/PhysRev.35.461.
124. ^ Van der Waerden B.L. (1929). "Spinoranalyse". Nachr. Ges. Wiss. Göttingen Math.-Phys. 1929: 100–109.
125. ^ Veblen O. (1933). "Geometry of two-component Spinors". Proc. Natl. Acad. Sci. USA 19: 462–474. doi:10.1073/pnas.19.4.462.
126. ^ PAM Dirac (1939). "A new notation for quantum mechanics". Mathematical Proceedings of the Cambridge Philosophical Society 35 (3). pp. 416–418. doi:10.1017/S0305004100021162.
127. ^ H. Grassmann (1862). Extension Theory. History of Mathematics Sources. American Mathematical Society, London Mathematical Society, 2000 translation by Lloyd C. Kannenberg.
128. ^ Steven Weinberg (1964), The quantum theory of fields, Volume 2, Cambridge University Press, 1995, p. 358, ISBN 0-521-55001-7
130. ^ S.L. Glashow (1961). "Partial-symmetries of weak interactions". Nuclear Physics 22: 579–588. Bibcode:1961NucPh..22..579G. doi:10.1016/0029-5582(61)90469-2.
131. ^ S. Weinberg (1967). "A Model of Leptons". Physical Review Letters 19: 1264–1266. Bibcode:1967PhRvL..19.1264W. doi:10.1103/PhysRevLett.19.1264.
133. ^ F. Englert, R. Brout (1964). "Broken Symmetry and the Mass of Gauge Vector Mesons". Physical Review Letters 13: 321–323. Bibcode:1964PhRvL..13..321E. doi:10.1103/PhysRevLett.13.321.
134. ^ P.W. Higgs (1964). "Broken Symmetries and the Masses of Gauge Bosons". Physical Review Letters 13: 508–509. Bibcode:1964PhRvL..13..508H. doi:10.1103/PhysRevLett.13.508.
135. ^ G.S. Guralnik, C.R. Hagen, T.W.B. Kibble (1964). "Global Conservation Laws and Massless Particles". Physical Review Letters 13: 585–587. Bibcode:1964PhRvL..13..585G. doi:10.1103/PhysRevLett.13.585.
136. ^ http://www.physics.drexel.edu/~vkasli/phys676/Notes%20for%20a%20brief%20history%20of%20quantum%20gravity%20-%20Carlo%20Rovelli.pdf
137. ^ Bourbaki, Nicolas (1972). "Univers". In Michael Artin, Alexandre Grothendieck, Jean-Louis Verdier, eds. Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 1 (Lecture notes in mathematics 269) (in French). Berlin; New York: Springer-Verlag. pp. 185–217.
138. ^ F.J. Hasert et al. (1973). "Search for elastic muon-neutrino electron scattering". Physics Letters B 46: 121. Bibcode:1973PhLB...46..121H. doi:10.1016/0370-2693(73)90494-2.
139. ^ F.J. Hasert et al. (1973). "Observation of neutrino-like interactions without muon or electron in the gargamelle neutrino experiment". Physics Letters B 46: 138. Bibcode:1973PhLB...46..138H. doi:10.1016/0370-2693(73)90499-1.
140. ^ F.J. Hasert et al. (1974). "Observation of neutrino-like interactions without muon or electron in the Gargamelle neutrino experiment". Nuclear Physics B 73: 1. Bibcode:1974NuPhB..73....1H. doi:10.1016/0550-3213(74)90038-8.
141. ^ D. Haidt (4 October 2004). "The discovery of the weak neutral currents". CERN Courier. Retrieved 2008-05-08.
142. ^ http://web.physics.ucsb.edu/~gary/
143. ^ Nuclear Physics B 258: 46–74, Bibcode:1985NuPhB.258...46C, doi:10.1016/0550-3213(85)90602-9
144. ^ De Felice, F.; Clarke, C.J.S. (1990), Relativity on Curved Manifolds, p. 133
145. ^ "Quantum invariants of knots and 3-manifolds" by V. G. Turaev (1994), page 71
146. ^ Pisanski, Tomaž; Servatius, Brigitte (2013), "2.3.2 Cubic graphs and LCF notation", Configurations from a Graphical Viewpoint, Springer, p. 32, ISBN 9780817683641
147. ^ Frucht, R. (1976), "A canonical representation of trivalent Hamiltonian graphs", Journal of Graph Theory 1 (1): 45–60, doi:10.1002/jgt.3190010111
148. ^ Fraleigh 2002:89; Hungerford 1997:230
149. ^ Dehn, Edgar. Algebraic Equations, Dover. 1930:19
150. ^ "The IBM 601 Multiplying Punch". Columbia.edu. Retrieved 2014-06-24.
151. ^ "Interconnected Punched Card Equipment". Columbia.edu. 1935-10-24. Retrieved 2014-06-24.
152. ^ Proceedings of the London Mathematical Society 42 (2)
External links
b2e1560459d1ef76 | ATD 219-242
Page 219
they would have little clue . . . their more or less ambushed keesters
One of half a dozen Pynchonian circumlocutions for "wouldn't know [blank] if it bit them in the ass."
The Tetractys
True Worshippers of the Ineffable Tetractys
The Tetractys is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row. As a mystical symbol, it was very important to the followers of the secret worship of the Pythagoreans, Kabbalists, and nutbars of other affiliations since. It has all kinds of symbological meaning, including the four elements, the organization of space, the Tarot, etc. Wikipedia entry;
In the Pythagorean tetractys — the supreme symbol of universal forces and processes — are set forth the theories of the Greeks concerning color and music. The first three dots represent the threefold White Light, which is the Godhead containing potentially all sound and color. The remaining seven dots are the colors of the spectrum and the notes of the musical scale. The colors and tones are the active creative powers which, emanating from the First Cause, establish the universe. The seven are divided into two groups, one containing three powers and the other four a relationship also shown in the tetractys. The higher group — that of three — becomes the spiritual nature of the created universe; the lower group — that of four — manifests as the irrational sphere, or inferior world. [1]
This division (three/four) has to be related to the "trivium" (grammar, rhetoric, logic) and "quadrivium" (arithmetic, geometry, music, astronomy) of the Medieval liberal arts.
More effably, if you flip the Tetractys left to right, it gives the positions of the pins in ten-pin bowling.
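For the arrangement itself, here is a minimal sketch (purely illustrative) that prints the four rows and counts the ten points:

-- Print the tetractys: rows of 1, 2, 3 and 4 points, ten points in all.
tetractysRows :: [String]
tetractysRows = [replicate (4 - n) ' ' ++ unwords (replicate n "*") | n <- [1 .. 4]]

main :: IO ()
main = do
  mapM_ putStrLn tetractysRows
  print (sum [1 .. 4])  -- the tetractys' ten points: 1 + 2 + 3 + 4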
The acronym T.W.I.T is most appropriate: a twit is an ineffectual buffoon. Neville and Nigel are certainly twits.
I believe the above really misses the Big Symbol, i.e., Pynchon's linking of T.W.I.T. with the vagina, i.e., the female sex organ. "T.W.I.T." sounds like — no, is — a cross between "clit" and "twat." And, natch, it's headed up by Nookshaft. And, let's face it, that tetractys is surely an inverted beaver, yes? (See "Beavers of the Brain"). Its male counterpart is Candlebrow U., to be encountered down the road apiece (and that ain't no spoiler!).
"The Tetractys" is also the name of a poem by the Quaternionist prophet Hamilton. I can't imagine Pynchon didn't find it fairly interesting reading. Read it for yourself here
Chunxton Crescent
Invented by Pynchon. "Crescent" is a female symbol in many mythologies and cultures, and it reinforces T.W.I.T.'s association with the female sex. But "Chunxton"?
The moon is seen as a female symbol, and was worshipped in ancient times as a powerful force. It is believed to be linked to the unconscious and our feminine side. The sacredness of the moon has been connected with the basic cyclic rhythms of life. The changing phases of the moon were linked to the death and rebirth seen in crops and the seasons, and also to the female monthly cycle that controls human fertility. The moon calendar is still important and many festivals exist around the lunar phases. [1]
Eliphas Levi's Baphomet
The crescent is also said "to represent silver (the metal associated with the moon) in alchemy, where, by inference, it can also be used to represent qualities that silver possesses." (Alchemy and Symbols, By M. E. Glidewell, Epsilon.)
Additionally, the crescent was an important symbol for Eliphas Levi, occultist, magician, and spiritual antecedent to the Hermetic Order of the Golden Dawn, and, in turn, the T.W.I.T.
Chunxton may be derived from "chunk stone" or "chunk(s) town." I'm inclined to favor the first. "Chunk stone" has two main meanings: (1) stone that's quarried in chunks instead of blocks, slabs or crystals; (2) a magical stone that figures in some American Indian stories. Turquoise and amethyst chunk stones are often made into jewelry as-is, or larger chunks of (say) marble can be used as decoration. Here are links to two Indian stories in which people use chunk stones in finding or tracking: first, second. Of course it's also possible that "chunk" is the verb meaning "throw," in which case there ought to be a "glass houses" connection somewhere; I can't find it.
Pure speculation here, but our own moon is a giant "chunk" of "stone". And how did that "chunk" get there? Well, this being Thomas Pynchon's universe, sometime early in the solar system's history, this proto-planet called Orpheus comes along and smacks into the Earth so violently that it not only creates the moon, but at the same time expels enough water and gas to make "it possible for life on Earth to evolve as we currently know it." Seems to me like something worthy of Occultist reverence
In CoL49, TRP states at least twice that the Pacific Ocean is "the hole left by the moon's tearing-free and the monument to her exile." (The Crying of Lot 49, p.41)
"Tyburnia occupies the ground on the north side of Hyde-park and Kensington-gardens, and stretches from Edgware-road on the east to about Inverness-terrace on the west. This is not, strictly speaking, a fashionable quarter; but it is not absolutely unfashionable, and is a very favourite part with those — lawyers, merchants, and others—who have to reside in town the greater part of the year." Charles Dickens (Jr.), Dickens's Dictionary of London, 1879.
Sir John Soane
(1753 – 1837) was an English architect who specialised in the Neo-Classical style. Wikipedia entry
Madame Blavatsky
Helena Petrovna Blavatsky (1831-1891), Russian-born founder of the Theosophical Society. Madame Blavatsky claimed that all religions were both true in their inner teachings and false or imperfect in their external conventional manifestations. Wikipedia
Theosophical Society Seal
Theosophical Society
The Theosophical Society was founded in New York City, USA, in 1875 by H.P. Blavatsky, Henry Steel Olcott, William Quan Judge and others. Its initial objective was the investigation, study and explanation of mediumistic phenomena. After a few years Olcott and Blavatsky moved to India and established the International Headquarters at Adyar, Madras (Chennai). There, they also became interested in studying Eastern religions, and these were included in the Society's agenda. Wikipedia entry "Its post-blavatskian fragments" refers to the schism that occurred between some of the founding members after the passing of H.P. Blavatsky in 1891.
Society for Psychical Research
The Society for Psychical Research (SPR) is a non-profit organization which started in the United Kingdom and was later imitated in other countries. Its stated purpose is to understand "events and abilities commonly described as psychic or paranormal by promoting and supporting important research in this area" and to "examine allegedly paranormal phenomena in a scientific and unbiased way."[1] It was founded in 1882 by a group of eminent thinkers including Edmund Gurney, Frederic William Henry Myers, William Fletcher Barrett, Henry Sidgwick, and Edmund Dawson Rogers. The Society's headquarters are in Marloes Road, London. Wikipedia entry
Rosy Cross of the Golden Dawn
Order of the Golden Dawn
The Hermetic Order of the Golden Dawn (or, more commonly, the Golden Dawn) was a magical order of the late 19th and early 20th centuries, practicing a form of theurgy and spiritual development. William Wynn Westcott, also a member of the Theosophical Society, appears to have been the initial driving force behind the establishment of the Golden Dawn. See also the aforementioned schism within the Theosophical Society. Wikipedia entry
of whom there seemed an ever-increasing supply
Supply of seekers, not of "arrangements." (Well, this contributor read it wrong . . . twice.)
century had rushed . . . out the other side
An instant of zero, not a whole year, because they aren't yet "out the other side" of 1900. ??? A century is 100 years. The one referred to here lasted from 1800-1899 and, since it's 1900, it has "rushed to its end."
Missing the point. The image focuses on the zero. And please, let's not have that sterile argument about when a century begins!
Don't know if this is of any significance, but in the Tarot the Fool (or Jester), says Wikipedia, is "often numbered 0." [2]
Page 220
not even if that tartan were authentic
It's a solecism in England, but is (or was—at least until well up in the 19th century) a prosecutable offense in Scotland, to wear the tartan of a clan one doesn't belong to. At the time of the action, Lew's offense against taste is not to wear tartan (see below in this entry) but to wear a tartan he isn't entitled to wear.
The previous statement doesn't quite jibe. For a time in the 18th century (under the Dress Act of 1746, repealed in 1782) it was prosecutable for any Scot (read Highlander) to wear a tartan at all. Those tartans we see ascribed to clans were largely creations made to please Queen Victoria. Tartans and the kilt belong to Scottish and Irish clans; to the oppressed. Thus the fun in the line comes from the fact that an "authentic" tartan was false to begin with, but that doesn't keep Nigel from lording it over Lew that his argyle socks are not up to snuff.
Kilts came from an earlier garment which covered more of the body than today's piece, and those in plaid were called Breacan, meaning partially colored or speckled. The plaids also came in trews (trousers), and ruanas (shawls). Many had uniformity in design, but probably because those were the colors available and thus recognized as part of a family, clan or sept.
Caen stone
A cream-colored limestone for building, found near Caen, France.
syrinx
A primitive wind instrument consisting of several parallel pipes bound together; panpipes.
lyre
An ancient form of harp, so syrinx and lyre are like flute and harp. A famous concerto for flute and harp is Mozart's (K. 299); Handel, for his part, wrote a well-known harp concerto as well as the Messiah.
Ten sideshow acts for one admission. Wikipedia
Also, a description of the Tetractys.
masses of shadow . . . bright presences
We've had suggestions, at least, that shadow is more hospitable than brightness.
humans reincarnated as cats, dogs, and mice
Do the T.W.I.T. members just take the word of the creatures, or do they have some way to be sure?
Nicholas Nookshaft
Grand Cohen Nicholas Nookshaft's name reinforces the linking of T.W.I.T. to the female sex organ, "Nooky shaft" being a vulgarism for the vagina. Interestingly, "shaft" is both a rod or pole (or penis) and a vertical passageway, so its connotations are bisexual.
Anyone familiar with Ceremonial Magick is aware of Aleister Crowley. Crowley was famously bisexual, responsible for one of the most famous Tarot Decks — the "Thoth" deck — and was involved in spycraft for British Intelligence and, it is rumored, was a double agent for the Germans as well. Nicholas Nookshaft is a parody of Crowley.
Actually, given the chronology and the alliterative name, this is much more likely a parody of MacGregor Mathers. Mathers was the head of the Golden Dawn from 1896 or so until 1900--Crowley never was. Furthermore, Tarot references in AtD do not follow the names from Crowley's Thoth deck; Crowley renamed certain cards, and those names are not the ones used in AtD (i.e. in the Thoth deck, the "Temperance" card is renamed "Art").
Grand Cohen
'Cohen' is Hebrew for 'priest'.
Page 221
Couldn't have been the same world as the one you're in now
We can infer that Lew got blown up in one world and shifted to another. A review of the explosion episode, particularly with the annotations to p. 188, will be worthwhile.
"Lateral world-sets, other parts of the Creation, lie all around us, each with its crossover points or gates of transfer from one to another, and they can be anywhere, really."
Could this be the explanation for some of the most inexplicable scenes from the book thus far: Lew Basnight's mysterious offense, causing him to lose his wife, and his first encounter with the Drave group (around page 39); and Hunter Penhallow's escape from the mysterious creature (around page 154)? Parallel worlds?
Yashmeen Halfcourt
Her initials YH are the first half of the Tetragrammaton -- YHVH or YHWH in English.
seventeenth degree Adept
Masonic and other esoteric mystery schools have differing numbers of degrees. Attaining a degree shows that one has sufficiently mastered the material, undergone the tests and passed through any initiations involved with that degree.
The Masonic system has three degrees. These are extended to 32 in the Scottish Rite and a 33rd degree is the ultimate akin to a Distinguished Service award. By comparison, the Golden Dawn has 11 degrees divided in three orders; and the Order of the Temple of the East (Order Templi Orientis, O.T.O) has 12. In TWIT, the 17th appears to be the final degree where one becomes a Master TWIT or a Grand TWIT, I suppose.
Why 17 degrees? Other than 17 being prime, there seems to be no symbolic or geometric significance to 17. Since the Crowley-associated systems do not reach 17, whereas the Masonic system does, looking to the Masonic A & A Scottish Rite 17th degree we find it is the "Knight of the East and West" which teaches that loyalty to God is man's primary allegiance, and the temporal governments not founded upon God and His righteousness will inevitably fall. Compare this to the Bogomils later in AtD.
On the other hand, T.W.I.T. is centered on Tarot cards, so the relationship between number and any correspondences to the Tarot would be very much to the point. In this case, the Major Arcana assigned to the number 17 is the Star. The Crowley-associated system for Tarot consists of the Thoth Tarot deck, along with Crowley's "explanatory" 'Book of Thoth':
As has been explained elsewhere. It refers to the Zodiacal sign of Aquarius, the water-bearer. The picture represents Nuith, our Lady of the Stars. For the full meaning of this sentence it is necessary to understand the first chapter of the Book of the Law. . . .
The full text can be found at a site devoted to the Thoth "Star" card, albeit with the wrong card illustrated, in this case Atu 18, "The Moon".
Symbolic and Cultural Meanings of 17:
Because 17 has no symbolic significance, it does! In The Illuminatus! Trilogy, the symbol for Discordianism includes a pyramid with 17 steps because 17 has "virtually no interesting geometric, arithmetic, or mystical qualities."
In the Harry Potter universe, 17 is the coming of age for wizards.
Described at MIT as 'the most random number', according to hackers' lore. This is supposedly because in a study where respondents were asked to choose a random number from 1 to 20, 17 was the most common choice.
The number of syllables in a haiku (5+7+5).
The number of special significance to Yellow Pig's Day and Hampshire College Summer Studies in Mathematics.
and on and on.....
Tzaddik
A righteous Jew. Wikipedia "One whose merit surpasses his iniquity." The Talmud says that at least 36 anonymous tzadikim are living among us at all times; it is for their sake alone that the world is not destroyed.
The common theme between the Masonic 17th degree and Tzaddik seems to be righteousness.
Page 222
The Tetractys isn't the only thing round here that's ineffable
Schoolyard joke. "F" a euphemism for fuck, so "ineffable" = unfuckable also describes Yashmeen.
squadron commander
A squadron of hussars would number 100-200 troopers commanded by a major. (The linked page concerns Baden-Powell's regiment—the 13th, not the 18th—in the South African War.)
Auberon Halfcourt
Auberon means royal or noble bear.
Punning, "Au" is the chemical symbol for gold, thus, "Golden Bear", mascotte of UC Berkeley.
Eighteenth Hussars
Prestigious British cavalry regiment. Stationed in India 1864-76 and 1890-98; Halfcourt's secondment must have taken place at one of these times.
Simla
Summer capital of the British Raj in India, in the Himalayas. Wikipedia.
A terminus of the Kalka-Simla railway line (built 1906) aka the "British Jewel of the Orient."
Named for the goddess Shyamala Devi, an incarnation of the Hindu Goddess Kali.
Smartly taken at silly point
A cricketing reference. Silly point is a fielding position very close to the batsman. examples
With the infield pulled in for a bunt, the cleanup man swings away and pulls a clothesline drive between third and short, which the shortstop snares for the out. (Teach you lot to come over here and talk cricket.)
To know, to dare, to will, to keep silent
Mystical formula. examples The four precepts of Western Magick, extensively discussed in the writings of Aleister Crowley.
In the States, "detective" doesn't mean—
. . . An agent who solves criminal cases. The major "detective" bureaus hired personnel out as bodyguards and muscle.
"There is but one 'case' which occupies us"
This echoes the famous quote from Wittgenstein's Tractatus Logico-Philosophicus: "The world is all that is the case." (See the full text of the Tractatus here.) This quote also figures heavily in V. (Specifically, in two places: there's the P's and Q's love song, and also in Captain Weissman's repeating, encoded, hallucinated message over the telegraph in Africa.)
The Number 22
I found it interesting that the significance of the number 22 was first brought up on page 222. Might be nothing, really. 22 is the number of cards in the Major Arcana of the Tarot deck, the section of the deck that has been removed from the modern playing deck, which retains only the suits (elements) and the Court cards. The 22 Major Arcana are numbered 0 to 21 and run from The Fool to the Universe. Purportedly and symbolically, the progression of cards tells the tale of the evolutionary path of the Soul. The 22 cards also, in some systems, map onto the 22 paths that connect the spheres of the Kabbalistic Tree of Life (which is also mentioned in this chapter). An understanding of the Tarot cards cannot be achieved without an understanding of how they relate to the Tree of Life. The paths are the relationships between the Sephiroth, which are ten in number, just like the points of the Tetractys, and they portray the energies that flow from the highest monad of Divinity (Kether) down into the manifested world (Malkuth). Pynchon makes use of both the Tarot and the Kabbalah in Against the Day as well as Gravity's Rainbow.
See also the novel The Greater Trumps by Charles Williams for a similar intrusion of the characters of the Major Arcana into everyday English life.
22 is also a pair of 2s, and two times two makes four, so a quaternion...
Page 223
"And the crime... just what would be the nature of that?"
Might Lew himself be one of the 22 suspects? Perhaps the ineffable crime is what made people treat him like a pariah earlier in the book.
Page 224
"'walking out'"
A walking date.
the veil of maya
In Hinduism, maya is the phenomenal world of separate objects and people, which creates for some the illusion that it is the only reality. In Hindu philosophy, maya is believed to be an illusion, a veiling of the true, unitary Self. Many philosophies or religions seek to "pierce the veil" in order to glimpse the transcendent truth. Arthur Schopenhauer used the term "Veil of Maya" to describe his view of The World as Will and Representation. Wikipedia entry
the ancient London landscape . . . known to the Druids
Peter Ackroyd's recent London, the Biography devotes many pages to sacred and magical features of the city. "Druid".
Trumper's
London's royal barbers since 1875. site
And what other barber would you mention in a passage about the Greater Trumps . . . .
On this island [...] all English, spoken or written, is looked down on as no more than strings of text cleverly encrypted
A sentiment echoed in the first sentence of Pynchon's December 2006 letter written in defense of novelist Ian McEwan: "Given the British genius for coded utterance..." Image of Letter
crosswords in newspapers
The first crossword to appear in a newspaper was in 1913. Cryptic crosswords in British newspapers certainly match Pynchon's description. See, for example, the Listener crossword.
Page 225
Girton College
Of Cambridge University, for women, founded 1869. history
Next they'll be letting you folks vote.
Women over the age of 30 were, subject to certain qualifications, granted the right to vote in the UK by the Representation of the People Act 1918. The Representation of the People (Equal Franchise) Act 1928 granted women the vote on the same basis as men (i.e. from the age of 21).
"the vast jangling thronged somehow monumental London evening"
This kind of eschewing of punctuation might be expected in Joyce but it's not typical of Pynchon and seems to serve no special purpose here. A typo?
Purposive or no, that ain't no typo. First, numerous compound adjectives reminiscent of Faulknerian portmanteau words are sprinkled throughout the book. Second, this particular deployment of zero-degree punctuation and massing of modifiers jibes with TRP's obvious delight in tripping us readers up and sending us back into sentences for another looksee. Finally, the musicality of this phrase sounds properly Pynchonlike t'me.
Pamela Colman Smith
Illustrator of the Rider-Waite-Smith Tarot deck. Wikipedia entry.
Arthur Edward Waite
Occultist and co-creator of the Rider-Waite Tarot deck. Wikipedia
four stone
56 pounds.
Ucken-fay is "pig latin" for 'fucken'.
gaver du visage
A literal translation of "stuff one's face", though this is not how it is said in French (it would be se gaver or se baffrer). cite
cigar divan
A smoking salon (divan) for cigar smokers.
Interestingly, a work by Robert Louis Stevenson, from 1903, entitled The Dynamiter begins with a "Prologue of the Cigar Divan".
Page 226
Seven Dials
bad area in London, see Wikipedia entry
The Devil by Colman-Smith
Four-wheeled carriage drawn by four horses. Supplanted by the Hansom cab.
Renfrew at Cambridge and Werfner at Göttingen
Note that each Professor's name is the other's spelled backward.
Also notice the theme of dual natures or forces. The two professors are "bound and ... could not separate even if they wanted to." They become rivals within the broader conflict of the 'Great Game' -- the political rivalry over Central Asia being played out by the various European powers, but especially by Great Britain and the Russian Empire.
Pynchon toys with the idea that World War I was really just the extension of an academic rivalry. This secret scholastic conspiracy also references the role supposedly played in the US policy establishment by neoconservatives [3] (or "neocons") in the run-up to the US invasion of Iraq in 2003. Just as Pynchon's professors held great influence over a number of their students, "[s]ome of whom found employment with the Foreign Services", etc., neoconservative professors such as Leo Strauss [4] had a number of disciples who came to occupy key positions in government and business (for example, Deputy Secretary of Defense (2001-2005) Paul Wolfowitz [5]). This interpretation is further bolstered by the geographic positioning of the "Bagdad" (sic) railway, and the Ottoman territories as the region "where Renfrew and Werfner have often found their best opportunities to make mischief".
Cambridge University is one of the oldest and best universities in the world; in 2009 it will be celebrating its 800th anniversary. In its early days, Cambridge was a center of the new learning of the Renaissance and of the theology of the Reformation; in modern times it has excelled in science. It is now a confederation of 31 colleges (such as King's, Girton, St John's, Trinity and others mentioned in ATD) together with over 100 departments, faculties and other institutions. Since 1904, 81 affiliates of Cambridge have won Nobel Prizes across every category: 29 in Physics, 22 in Medicine, 19 in Chemistry, 7 in Economics, 2 in Literature and 2 in Peace.
Göttingen University, one of the most famous universities in Europe, was founded in Göttingen, Germany, in 1737 by King George II of England in his capacity as Elector of Hanover. At the end of the 19th century it became world famous for its departments of mathematics and physics and rivaled Cambridge for eminence. The reputation of the university was founded by many eminent professors, who are commemorated by statues and plaques all over the campus, and it claimed 44 Nobel laureates. But it suffered from the Nazi purge of "Jewish physics" that began in 1933 and never recovered its original fame. David Hilbert, one of the greatest mathematicians of the 20th century and a professor at Göttingen, was later asked about the state of mathematics there now that the Jewish influence had been removed; he replied that there was no mathematics left at Göttingen.
Berlin Conference of 1878
Divided Balkans after Russo-Turkish War. Wikipedia
bickering-at-a-distance
A play on the idea of "action at a distance" theories in physics, a topic that came under much scrutiny at the time of AtD owing to its pertinence in the theories of electromagnetism and gravitation. See wikipedia for a further discussion and its relevance in quantum mechanics.
English, . . . , Japanese—not to mention indigenous—components
Not to mention them was exactly the point as the Great Powers sorted out the Ottoman possessions.
Page 227
"The Great Game"
The Great Game was a term used to describe the rivalry and strategic conflict between the British Empire and the Tsarist Russian Empire for supremacy in Central Asia. The term was later popularized by Rudyard Kipling in his novel, Kim. The classic Great Game period is generally regarded as running from approximately 1813 to the Anglo-Russian Convention of 1907. Wikipedia entry Also the name of Padzhitnoff's airship.
I believe the great game stands for Espionage in the Age of Gentlemen, the substance of Pynchon's Under the Rose.
mamluk lamps
A mosque lamp from the mamluk era.
...the Kabbalist Tree of Life, with the names of the Sephiroth spelled out in Hebrew, which had brought her more than enough of that uniquely snot-nosed British anti-Semitism...
Kabbalah is the ancient study of Jewish mysticism, long shrouded in mystery and kept from all but a devout few of the most dedicated Talmudic scholars. The Tree of Life is one of the central symbols of Kabbalah, supposedly a physical representation of the path of enlightenment from the most base knowledge of the physical world (at the bottom) to the highest spiritual planes of understanding (at the top). The Sephiroth are the nodes of the Tree, representing the various "stages" of understanding. Of course, this is all a very gross oversimplification and hardly does justice to the term itself.
The "Quabbalah" or "Cabalah" being studied by Madonna and others in Hollywood is a secularized and co-opted form of the original Kabbalah, which is deeply connected to the Torah and Jewish life.
In Medieval Europe, Kabbalist scholars wore amulets and other symbols on their clothing, and were often misunderstood to be magicians or wizards (think Merlin). The common magician's expression "abra cadabra" has Kabbalistic origins.
"Eskimoff . . . I say what sort of name is that?"
Tiptoeing around the real question, "Is she Jewish?"
English Rose
The phrase "English Rose" or "Bonnie English Rose" when applied to a woman means her skin is unblemished, her coloring subtle, her temper sweet. Madame Eskimoff, in short, is a beauty in a traditional English style.
(Incidentally, an officially unrecognized designation of roses.)
Page 228
Oliver Lodge
English physicist, inventor and writer (1851-1940) involved in the development of wireless telegraphy and radio. After the death of his son in 1915, Lodge became interested in spiritualism and life after death and wrote several books on the subject. Lodge conducted research on lightning, electricity, electromagnetism and wrote about the aether, themes that are repeated throughout ATD. Wikipedia entry.
William Crookes
English chemist and physicist (1832-1919) who worked in spectroscopy and whose work pioneered the construction and use of vacuum tubes. Like Oliver Lodge, Crookes was also a spiritualist, which appears to be Pynchon's reason for grouping him with others in this passage, although his experiments in electricity and light also tie in with these themes in ATD. Wikipedia entry.
Mrs. Piper
Probably Leonora Piper, 1857-1950. Wikipedia entry.
Eusapia Palladino
(1854-1918) Famous Italian spiritualist medium. Wikipedia entry. It's fair to say she was often caught cheating.
W.T. Stead
William T. Stead (1849-1912), British writer, poet, social crusader, and spiritualist. He went down with the Titanic. Wikipedia entry.
Mrs. Burchell
The Yorkshire Seeress, investigated by WT Stead. cite
Trouble with the time here. Lew's timeline points pretty strongly to autumn 1900. A séance that's "about to" go on Mme. Eskimoff's résumé, however, leads the murder of the Serbian king and queen by three months, and the murder itself occurred in June 1903, which seems to imply March of that year.
This seems as good an instance as any to question the insistence of some here to pin down the exact date (and season?). Pynchon doesn't knock it to the wall, doesn't find cause to bother and I think the reason for that is obvious... the ambiguity lends a freer hand with which to paint. So don't fuck with the butterfly on the wheel.
Alexander and Draga Obrenovich, the King and Queen of Serbia
According to Wikipedia the assassination occurred on 11 June 1903, so the séance at which Mrs. Burchell "witnessed" it should have taken place in March 1903.
Parsons-Short Auxetophone
pic and info. The Auxetophone appears to have been a sound amplification device, not a recorder. Parsons did not enter the picture till 1903, so the apparatus would not have this name in 1900, but Short demonstrated it as early as 1898.
electros of the original wax impressions
A thin film of metal was electroplated onto the wax, then peeled off and wrapped around a new cylinder.
"Bagdad" railway
Page 229
syntonic
A term used in both engineering and psychology. Psychology: "Characterized by a high degree of emotional responsiveness to the environment." Electricity: "Of or relating to two oscillating circuits having the same resonant frequency."
The syntonic comma, a small interval in the frequency ratio of 81:80, is a problem in musical temperament.
the Russo-Turkish War
The Russo-Turkish War (1877-1878) was the latest of many Russo-Turkish wars fought between these two countries since the 16th century, the result of Russian attempts to find an outlet on the Black Sea, conquer the Caucasus, dominate the Balkan Peninsula, gain control of the Dardanelles and Bosporus straits, and retain access to world trade routes. This last Russo-Turkish War came as a result of the anti-Ottoman uprising (1875) in Bosnia and Herzegovina and Bulgaria. At Russian instigation, Serbia and Montenegro joined the rebels; after securing Austrian neutrality, Russia openly entered the war in 1877. The war ended in 1878 with the Treaty of San Stefano, which so thoroughly revised the map in favor of Russia and her client, Bulgaria, that the European powers called a conference (the Congress of Berlin) to revise its terms by the Treaty of Berlin.
kilometric guarantee
Money offered by the government to railway-building companies per kilometre of track laid. Apparently the railroad companies fooled the Ottoman Empire by building lines that were much longer than needed. Google books citation
Page 230
King's... Girton
King's College is one of the most famous and historic colleges at Cambridge, founded in 1441. Girton College, Cambridge, was established in 1869 as the first residential college for women in England.
Michaelmas term
The fall term, starting early October (1900 here). Wikipedia
tweeny
A between-maid.
Edward Oxford
attempted to shoot Queen Victoria and her husband, Prince Albert, at the time of her first pregnancy (1840).Wikipedia
had the young Queen died then without issue
Nookshaft posits two scenarios: (1) The implicit, unmentioned, and not as "interesting" possibility that everything is actual, as it "appears" to be in the "real" world, surrounding Queen Victoria; that she is simply an old, vain regent. (2) "the 'real' Vic is elsewhere," and the current, aged Victoria is a ghostly stand-in. Nookshaft implies that this figure is a proxy or puppet of Ernst-August. If this were "the case," then the question shifts to the following: (a) Is the ruler of the underworld, who holds the "real," eternally young Victoria captive in cahoots with Ernst-August in the "real" world? or: (b) Is the ruler of the underworld, who holds the "real," eternally young Victoria captive NOT in cahoots with Ernst-August, who nevertheless ascends to the throne with real-Vic out of the way, and imposes the stand-in? In which case: What would be the motivation of the underworld-entity third-party? And who, or what, specifically, is it?
sixty years ago
One event of 1840, the attempt on Victoria's life, is referred to as sixty years ago; another, the issue of the first adhesive stamps, as more than sixty years ago.
If it weren't for these nagging problems in Lew's timeline, we could peg the date as 1900.
Salic law
originated in the Late Roman Empire as Germanic tribes invaded and their law codes were translated into Latin and written down. Salic Law was that of the Franks who settled in present-day northern France and the law code of Charlemagne. Over the course of the Middle Ages it was largely replaced by Roman Law. For examples, see [6].
However, Salic Law continued to be used in a number of European areas to decide matters of noble inheritance. Specifically, Salic Law stated that no female could inherit rulership; indeed, a royal or noble title could be inherited only through the male line. When King William IV, ruler of both the United Kingdom and Hanover, died, the Crowns separated: Hanover practiced Salic law, while Britain did not. King William's niece Victoria ascended to the throne of Great Britain and Ireland, but the throne of Hanover went to William's brother Ernest Augustus, Duke of Cumberland. Wikipedia entry
Tory despotism
Not necessarily-- it describes Ernest himself. "The Duke of Cumberland had a reputation as one of the least pleasant of the sons of George III. Politically an arch-reactionary, he opposed the 1828 Catholic Emancipation Bill proposed by the government of the Prime Minister, the Duke of Wellington." Wikipedia entry
It can describe Ernst August and still be an allegory of Thatcher. The description of Ireland fits that of some world-views during her time.
All parallels between past and present are worth considering. They don't have to be direct references. The present-day Ernst August - famous for pissing on the Turkish Pavilion at EXPO 2000 - carries on the family tradition.
Someone famously cited James Joyce as proof that Catholics shouldn't get university educations.
Page 231
Orange Lodges
Lodges of the Orange Order, a Protestant fraternal organisation based predominantly in Northern Ireland and Scotland. Wikipedia entry. The Orange Order was founded to subvert Wolfe Tone's United Irishmen by agitating against cooperation between the Protestant and Catholic communities. It was hostile to the idea of Irish Home Rule or independence. In the 1880s it developed the Ulster Unionist Party to politically parry Parliamentary attempts at Home Rule for Ireland.
"from the first to the twelfth of July, anniversaries of the Boyne and Aughrim."
i.e. anniversaries of the Battle of the Boyne and the Battle of Aughrim of the Williamite War in Ireland.
This was and still is known as "Marching Season" in Northern Ireland; the time when 'parades' are traditionally a source of fear and violence. Nearly all the parades are organized by the Orange Lodges and hence anti-Catholic.
The first adhesive stamp, 1840
"the first adhesive stamps of 1840"
This stamp has come to be called the Penny Black. Wikipedia entry
Penny Black is also the name of a character (p.18)
"immune to Time, [...] neither of them aging"
Cf Oscar Wilde's only novel The Picture of Dorian Gray, in which Dorian Gray remains young while his portrait ages.
Cf Stray's pregnancy, a "dreamy thing" (page 201). The definition of springtide is springtime.
Page 232
Éliphaz Lévi
A/K/A Eliphas Levi, nom de plume of Alphonse Louis Constant (1810-1875), French occultist and writer who pioneered a revival of Magick in the 19th Century, and was an influence on A.E. Waite, the Order of the Golden Dawn, and Aleister Crowley. An acquaintance of novelist Edward ("It was a dark and stormy night") Bulwer-Lytton. Wikipedia entry.
Punter is being used in the sense of someone who bets, someone who is taking a chance. Or more probably in the common extended sense meaning merely "customer"
Greek: things heard. Good information under "A" in the alpha index.
number twenty-four
Or 25? etext (According to a Greek version, number 4 in the etext above is not included in Iamblichus' list. If my source is correct, Pynchon is right.)
Iamblichus
Iamblichus (ca. 245 - ca. 325, Greek) was a Neoplatonist philosopher who determined the direction taken by later Neoplatonic philosophy, and perhaps western Paganism itself. He is perhaps best known for his compendium on Pythagorean philosophy. Wikipedia
maquillage
Make-up, cosmetics; the application of make-up (especially in heavy or theatrical fashion).[2]
Page 233
catarrh
Inflammation of a mucous membrane; usually restricted to that of the nose, throat, and bronchial tubes, causing increased flow of mucus, and often attended with sneezing, cough, and fever; constituting a common 'cold'.[3]
Collis Brown's Mixture
Contained morphine, chloroform, and caramel, among other things. Full ingredients (Previous link not working. For info try here.)
Xylene abuse is similar to "glue sniffing": xylene is a strong solvent that can cause serious damage to health, especially to the brain. wikipedia
a thousand pounds a year
Over $100,000 today. cite
Condy's fluid is pink to purple. Methylated spirits is a kind of denatured alcohol: 95% ethyl alcohol, 5% methyl alcohol. "Pinky" would have a variety of effects, very possibly including blindness.
Page 234
Condy's fluid
A disinfectant used to treat and prevent Scarlet Fever, among other things. Wikipedia
tonight's the night
Considering the content here, probable reference to Neil Young's drug-addled album and its title song, "Tonight's The Night" from 1975. Wiki
an important market street in the City of London.
mews
A street originally for stabling, but in modern times often converted into houses/apartments.
Coombs de Bottle
"comes the bottle" ?
Russian duck
Duck is strong, untwilled linen or cotton, lighter and finer than canvas. Russian duck is coarse, heavy and unbleached but softer than English duck.
Page 235
sensitive flames
Cf GR p.29-32, 715.
extractors . . . distillation columns
Separatory apparatus. An extractor works on differences in solubility, a distillation column differences in volatility.
tremblers and timers
A trembler is a kind of motion detector used in both bombs and alarms; one kind has a flexible stem with a heavy contact on the free end so that disturbing the package it contains causes a trigger circuit to close. A timer uses a clocklike mechanism to bring two contacts together.
proper solvent procedures
Famous 1960s "Anarchist Cookbook" was infamously inaccurate. Amazon w/author's note
Page 236
Breathless hush in the close tonight
Dr. De Bottle quoting from Henry Newbolt's poem "Vitaï Lampada," which makes school games a metaphor and model for martial bravery.
The Gentleman Bomber of Headingly
Cf Hornung's 'Gentleman Thief' and cricket player, Raffles. info
Reminds me of the Krikkit Robots in Douglas Adams' Life, The Universe, and Everything, where a bomb is put in place of a Cricket Ball at a match between Britain and Australia.
Also, acronymically, the GBH=Grievous Bodily Harm, the British term for felonious assault.
Here and elsewhere the spelling of the cricket ground should be 'Headingley'.
The Ashes
An international cricket series between England and Australia dating back to 1882. dates A number of references in this chapter relate to this rivalry. For example, on this page the English cricket ball is compared to the Australian "kookaburra". Kookaburra is the brand name of the balls used in Australia; in England it's Duke. The properties of the English ball were one of the keys to England's success in the summer of 2005. Was Pynchon's writing here influenced by the hype in the UK at the time?
phosgene
A poison gas used in World War I. Wikipedia
Source of red dye. Wikipedia
A helper, assistant. [4]
Misspelling of exhilaration.
Page 237
beige substance
Presumably Cyclomite.
Happy Birthday! . . . Gemini
Ordinarily you would think this tagged the date as 21 May to 20 June Wikipedia. But other evidence in the text points to deepening autumn.
One of two possible explanations:
1. The T.W.I.T. is perhaps using an ascendent or lunar based astrological system rather than the solar-based system commonly used in the West. This resolves the apparent contradiction of a Gemini in autumn since the ascendent travels through all signs every 24 hours and the moon travels through the entire zodiac once a month. For example, Vedic astrology looks primarily to the ascendent, then the moon, and lastly the sun to study respectively the body, the mind and the spirit of the native. Basnight does have a mind that operates on two planes -- hence a moon in Gemini reading.
2. The explosion carried Lew to a place on the other side of the Sun. Deep autumn would then be November 23 to December 21st, our sign of Sagittarius.
get the Ashes back . . . next year
On page 236 the Ashes (Test Matches, cricket competitions between England and Australia) are "in progress." At some time previous to this conversation Mme. Eskimoff said England will regain the trophy "next year" provided they use the young bowler Bosanquet (next entry). Test Matches took place in (a) December 1901 to March 1902, Australia victorious; (b) May to August 1902, Australia again; (c) December 1903 to March 1904, England bringing back the Ashes and Bosanquet figuring as a key bowler.
If Mme. Eskimoff has foreseen aright, "next year" is 1904 and the time of the action is 1903. The conflict in dates is troubling: In a matter of weeks and a few pages, Lew just misses the 1900 Hurricane and gets information that definitely points to 1903. (And he proves to be a Gemini with an autumn birthday!) I don't think there is anything accidental—or negligible—about the discrepancies.
Another Ashes reference. Bernard Bosanquet invented the bosie (or googly), as described here, around 1900. A major factor in England's 2005 Ashes success was reverse swing, another type of delivery whose physical dynamics are poorly understood.
Check out the "Cricket in Against the Day article by Peter Vernon, which is an in-depth look at, well, cricket in Against the Day.
Pom
A somewhat derogatory term for a British person, commonly used in Australian English. Also Pommy or Pommie.
Hebrew letter Shin
Obviously a nod to the Vulcan greeting in Star Trek, with the distinctive hand sign and the phrase, "Live long and prosper." Perhaps also to the Jewish faith of Leonard Nimoy, who played Spock. See The Jewish origin of the Vulcan Salute
Pynchon placed one of these in Mason & Dixon, as well:
Dixon discovers "The Rabbi of Prague, headquarters of a Kabbalistick Faith, in Correspondence with the Elect Cohens of Paris, whose private Salute they now greet Dixon with, the Fingers spread two and two, and the Thumb held away from them likewise, said to represent the Hebrew letter Shin and to signify, 'Live long and prosper.'(M&D p.485)
Might there be a further connection between The Cohen of T.W.I.T., the "Cohens of Paris" and these backwoods Kabbalists?
Also, note the hand on the devil tarot card above.
Shin "also stands for the word Shaddai, a name for God. Because of this, a kohen (priest) forms the letter Shin with his hands as he recites the Priestly Blessing. In the mid 1960s, actor Leonard Nimoy used a single-handed version of this gesture to create the Vulcan Hand Salute for his character, Mr. Spock, on Star Trek."[7]
...and if we look back to the Devil tarot card we see the shin hand sign and the inverted pentagram. Thus through Eliphas Levi and then Coleman-Smith/Waite a connection is created between shin and the inverted pentagram. And then we can make connections with the Jeshimonians and the TWITsters.
This might be why "the cure grows right next to the cause" in Jeshimon. They are under the winged protection of God-the-Destroyer.
bog-standard
British term indicating complete ordinariness. Possible etymology (Dog's Bollocks, British Or German Standard): Wiktionary
Page 238
Second Law of Thermodynamics
The law of entropy... "The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium." (Rudolf Clausius) [9]
There's no such thing as a perfectly efficient engine, i.e., a box that does work by taking in heat from where there is lots of heat (e.g., combustion chamber) and throwing off heat where there is not much (exhaust pipe). Something always gets lost. Similarly, the transfer of money from where there is plenty (bank) to where there isn't much (Europe) is never perfectly efficient.
"He began then, bewilderingly, to talk about something called entropy. The word bothered him... But it was too technical for her. She did gather that there were two distinct kinds of this entropy. One having to do with heat engines, the other to do with communication... The two fields were entirely unconnected, except at one point: Maxwell's Demon. As the Demon sat and sorted his molecules into hot and cold, the system was said to lose entropy. But somehow the loss was offset by the information the Demon gained about what molecules were where... Entropy is a figure of speech, then, a metaphor. It connects the world of thermodynamics to the world of information flow." The Crying of Lot 49 (Pages 84 - 85)
morsus fundamento
Latin: A bite on the ass?
The meaning is that he wouldn't know metaphysics if it bit him in the ass. Like "octogenarihexation" ("86"-ing) in Vineland--the vulgar faux fancied up.
three-percent consols
British "consolidated" bonds, for many years the conservative investment par excellence. wikipedia
Page 239
Not mental as in "of the mind" but mental as in "mad". "You're mental, you are" is a common British playground taunt.
Colney Hatch
London lunatic asylum. Wikipedia
Out of the dust . . . beam of morning sunlight
I.e., sometimes your horse wins.
An encyclical is a letter circulated by the pope or other figure of high authority in a body of believers. A comprehensive Wikipedia article explains and adds a list of papal encyclicals. An encyclical usually takes its first 2 or 3 words as its title (Multi et Unus in this case).
Of course, the Vatican would strongly protest that McTaggart, an atheist, should send out an encyclical!
Seems to refer to a historical logician joke. explanation Professor McTaggart was, perhaps, the most famous philosopher who argued that Time does not exist as we seem to experience it. G. H. Hardy was a very famous Cambridge mathematician who knew all the famous philosophers in England.
John McTaggart Ellis McTaggart (J. M. E. McTaggart, 1866-1925), British philosopher. He was born in London and educated at Clifton College, Bristol, and Trinity College, Cambridge. He lectured in philosophy at Trinity College from 1897 to 1923. His brilliant commentaries and studies on Hegel's dialectic (1896), cosmology (1901) and logic (1910) were preliminaries to his own constructive system-building in The Nature of Existence (3 vols., 1921-1927). In his 1908 essay "The Unreality of Time" he argued that our perception of time is an illusion (Cf page 412: dismissing . . . the existence of Time).
Godfrey Harold Hardy (1877-1947), English mathematician. He was a lecturer at Cambridge (1906-1919) and a professor at Oxford (1919-31) and Cambridge (1931-47). Concurrently with Wilhelm Weinberg he formulated the Hardy-Weinberg law (1908) describing genetic distribution and equilibrium in large populations. He was also known for contributions to complex analysis, Diophantine analysis, Fourier series, the distribution of prime numbers, etc.
Multi et Unus
Many and One.
Is the graffiti in Cambridge another cricketing reference? Dukes are the balls used in England (cf. p236). Chucking (or bending the arm when bowling) is an emotive topic in cricket that arises from time to time. It first arose around 1900 [10]. In 2005 it caused administrators to change the rules of the game [11].
"Create More Dukes" has a second meaning, suggested by the odd choice of verb. Duke in Britain refers to the highest rank of nobility, and fittingly there are not many of them. At present only about a dozen people hold the title. Since sometime in the 1870s new dukes have been created (by decree of the monarch) only in the royal family. Most recently at the time of the action, Queen Victoria had promoted a run-of-the-mill marquess to the dukedom of Fife to set the stage for his marriage to one of her granddaughters. If some group of activists thought the nation needed to beef up its peerage, they might adopt the slogan found here as a graffito.
Here is a colorful summary of UK dukes today and through history, although it is unsound on coats of arms and such. This site has more names and fewer pictures, listing all the titles (from dukes to lowly barons) created since the year Dot.
the Laplacian, a relatively remote mathematicians' pub
A little Pynchonian joke? The Laplacian operator is a component of the Schrödinger equation, the basis of quantum mechanics. Quantum mechanics was famously rejected by Albert Einstein (many references on the net but see Stephen Hawking), known for his theories of relativity. Moreover, quantum mechanics deals with the very small and relativity with the very large (this is a simplification of course), so the Laplacian is indeed remote from relativity!
No such pub during my stay in Cambridge (1998-2000). Also not today, according to this list.
Obviously more than a little joke. Refers to Pierre-Simon, marquis de Laplace (1749-1827)Wikipedia Entry, aka "the French Newton", probably the greatest mathematician and astronomer of his time. Most of the scientific principles derived from his findings are explored in AtD (from lumineferous ether to the existence of black holes). Laplace was also instrumental in the advancement of the science of probabilities.
The ever quotable Laplace, much loved by atheists worldwide, famously replied to Napoléon, when he asked why there was no mention of God in his treatise on astronomy: "Sir, there was no need for that hypothesis". Also responsible for what is known as the Laplacian principle: "The weight of evidence for an extraordinary claim must be proportioned to its strangeness."
The literal translation of Laplace is "The Place".
The connection of the Laplace operator to the Schrödinger equation and quantum mechanics is a bit of a stretch -- the Laplace operator is ubiquitous, appearing in the heat equation, the wave equation and the Navier-Stokes equations.
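For reference, the operator under discussion and the equations just named can be written as follows (standard notation, added here only for orientation):

\[
\nabla^2 f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2},
\qquad
\frac{\partial u}{\partial t} = \alpha \nabla^2 u \ \text{(heat)},
\qquad
\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u \ \text{(wave)},
\qquad
i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi \ \text{(Schr\"odinger)}.
\]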
Page 240
Worse than Gordon at Khartoum
Refers to Charles George Gordon, British Major-General, whose attempted defense of Khartoum against the Mahdist rebels in 1884-85 ended with his beheading. Wikipedia cf. Basil Dearden's 1966 film Khartoum, in which the role of Gordon is played by Charlton Heston.
Page 241
"You recognize him?"
As, presumably, Webb.
How can that be? Webb is dead, there's nothing to suggest he went to England, the costume is not right for him, and—most tellingly—his medium is dynamite, not phosgene.
Who might Lew recognize in the photo? The "suspects" are Neville, Nigel, the Grand Cohen, Dr. Coombs De Bottle, Clive Crouchmas and Professor Renfrew. If Prof. Werfner looks much like Prof. Renfrew, he goes on the list too. If the "Gentleman Bomber" could possibly be female, add Yashmeen and Mme. Eskimoff. We haven't met anyone else (except members of the Icosadyad, who don't have faces).
Suppose we rule out the ladies and Werfner. Neville or Nigel wouldn't be able to hide their identities with a suit of white flannels. Renfrew is sitting right there when Lew sees the picture, but Lew's reaction (his stomach sinks) does not seem Lew-like if it's Renfrew he has recognized, plus Renfrew himself wants to meet the Bomber. That leaves the Cohen, De Bottle and Crouchmas.
Would Lew experience dread on spotting Crouchmas? He doesn't know much about C.C. at this point, so it isn't clear why he would suppress that recognition.
Seeing the Cohen might lead to this gastric reaction: Lew might think he's on the fringe of an anarchist group again (and look where it got him the last time). The Cohen stays on the list.
Dr. De Bottle not only follows cricket but bets on it; he speaks almost with reverence about phosgene; he knows a nonobvious fact about the bombs; and he dresses like a gentleman. None of these points applies to the Cohen. And recognizing De Bottle would give Lew that sinking feeling because D.B. is purportedly fighting against bombers on behalf of the government. De Bottle goes to the top of the short list.
Alternately, there's no clear answer and not enough clues (especially considering the role of time, forces beyond anyone's control, double agents, etc.). This Gentleman Bomber can be any person from Lew's past or a deja vu from the future. The G. Bomber seems to be England's answer to the Kieselguhr Kid, a nebulous personality working against the forces of history. The important thing about this situation is not the Bomber's identity, but the fact that Lew is being thrown into an assignment much like his last one in America (and we know how that ended...) He's obviously not very happy about it, and not inclined to tell anyone what he knows, or might know.
For what it's worth, my take was also Webb, especially in the context of all the bilocation business. It isn't "Webb" but evil alter-land "Webb"! (no dig on ya'll Brit folk intended, although the " stay at Cambridge" bit was just wonderful :)
--There is a suspect that was left off that list, who occurred to me before any of the others--Lew himself. If we're dealing with bilocation, doubles, and the possibility that 'our' Lew was brought here via the explosion in the creek bed, couldn't the G.B.H. be this world's Lew? Recall that just prior to the explosion, Lew had resolved to choose a side in the Anarchist/plutocrat battle, and had come down on the side of the people. Did this world's Lew make the same choice, somehow ending up in Britain... Also conspicuous is that it is Renfrew showing him the photo with a sly expression, Renfrew whose own double, Werfner, is his very own nemesis. That's how I read it anyway. But, much like the Kieselguhr Kid, we the readers never actually know the identity of this renegade bomb-lobber.
A bosie from a beamer
More cricket! A bosie is now more commonly known as a googly (cf. p237). A beamer is a full-pitched delivery that reaches the batsman above waist height.
Page 242
The northern hemisphere
unheimlich
German: uncanny, sinister.
1. From The Secret Teachings of All Ages by Manly P. Hall (1928)
2. The Oxford English Dictionary. 2nd ed. 1989
3. Def.3. The Oxford English Dictionary. 2nd ed. 1989.
4. Def.1. The Oxford English Dictionary. 2nd ed. 1989.
Comment Re:Clickbait (Score 0) 130
Hmm, theoretically impossible? I guess, *in principle*, any user could always just reformat and install Windows XP, but granting that at least *some* system components can be trusted, there is the notion of http://en.wikipedia.org/wiki/Proof-carrying_code/ which, although not commonly implemented because the technology isn't yet there for widespread adoption, could conceivably be adopted as a system-wide policy.
The idea is that each piece of code contains within it a proof of its compliance with some formally specified security policy defined by the system, which the system verifies before the code is allowed to execute. The result is, as long as you can trust the security policy and things like the program loader, you can trust everything that executes, regardless of origin.
While writing this, it occurs to me that maybe the issue with even this system is that no security policy could simultaneously allow all nonmalicious software features while excluding all malicious ones, even in principle. A proof of this isn't so obvious to me, though.
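A toy sketch of the loader idea (Haskell; every name here is illustrative, and a real proof-carrying-code system would ship machine-checkable proof objects rather than strings):

-- Untrusted code ships bundled with evidence of policy compliance;
-- the trusted loader runs the payload only if the checker accepts the evidence.
data Packaged = Packaged
  { evidence :: String   -- stand-in for a formal proof object
  , payload  :: IO ()    -- the code itself
  }

-- Stub checker standing in for a real proof verifier.
satisfiesPolicy :: String -> Bool
satisfiesPolicy ev = ev == "proof-of-compliance"

loader :: Packaged -> IO ()
loader pkg
  | satisfiesPolicy (evidence pkg) = payload pkg
  | otherwise                      = putStrLn "rejected: evidence fails the policy"

main :: IO ()
main = loader (Packaged "proof-of-compliance" (putStrLn "payload running"))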
Comment Re:Open set it is! (Score 1) 248
To be pedantic, we still can't conclude that the product + 1 is prime, only that it is a contradiction for it to be divisible by no prime (which is all we need anyway). The GP is correct in that a proof more similar to Euclid's original is given by considering an arbitrary finite set P of primes, letting N be the product of the primes in P, plus 1, and then concluding that a prime divisor of N must not be in P.
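To make that constructive reading concrete, a minimal sketch (Haskell, purely illustrative): the least divisor greater than 1 of the product plus one is itself prime, and it cannot lie in P, since every prime in P leaves remainder 1.

-- Given a finite list of primes ps, exhibit a prime not in ps:
-- the least divisor (> 1) of (product ps + 1). Each p in ps leaves
-- remainder 1 when dividing n, so the prime found is genuinely new.
newPrime :: [Integer] -> Integer
newPrime ps = head [d | d <- [2 .. n], n `mod` d == 0]
  where n = product ps + 1

main :: IO ()
main = do
  print (newPrime [2, 3, 5, 7, 11, 13])  -- 30031 = 59 * 509, so this prints 59
  print (newPrime [2, 3, 5])             -- 31 is itself prime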
Comment Re:mutable state (Score 0) 404
It is important to remember that while a naive approach to functional programming (that is, using the same data structures and idioms as for imperative languages) will indeed be inefficient due to enforced persistence, much of the loss can be mitigated by using better data structures. For instance, Haskell's Data.Sequence supports access and modification in time logarithmic in the distance to the closest end of the sequence, and does so in a thread-safe (as always) way.
Even in the case where we demand mutable data structures because nothing else is acceptable, we can control the access in a purely functional way using monads (a very cool concept which isn't as scary as it might first seem). See, for instance, the Haskell State monad and mutable data structures.
Comment Judge is walking a thin line over a slippery slope (Score -1, Troll) 140
A judge who dismisses a case on grounds of 'public interest' and not rule of law is overstepping his authority. As broken as our patent system is, much worse is a judiciary which disregards the checks and balances established for it by our Constitution. Perhaps Apple and Motorola are being childish, but they are acting in a manner they believe benefits their stockholders the most within the confines of the law, which is the extent of the court's authority.
Granted, I haven't read the case materials, and the judge may have a more legitimate legal basis to cancel the jury trial.
Comment Re:Common Sense, anyone? (Score 1) 788
Well, the lowest-risk investment you can make these days is in a US Treasury Bond, essentially investing in US debt. One could argue that if the government is trying to create jobs by spending money, giving the government money will lead to job creation. Of course, this depends both on the government being successful at job creation by spending money, and on rich people actually investing in treasury bonds. The interest rate on treasury bonds is so low right now that it's actually risky to buy, because if the interest rate then increases, the price of the bond you own drops significantly (http://en.wikipedia.org/wiki/Interest_rate_risk).
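A minimal sketch of that interest-rate risk, assuming a plain bond with annual coupons and purely illustrative numbers (not actual Treasury data): the same bond is worth noticeably less once market yields rise.

```python
# Price of a fixed-coupon bond as the present value of its cash flows,
# discounted at the prevailing market yield. Illustrative numbers only.

def bond_price(face, coupon_rate, years, market_yield):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_yield) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at a 2% yield trades near its face value...
print(round(bond_price(1000, 0.02, 10, 0.02), 2))   # ~1000.00
# ...but is worth much less if yields later rise to 4%.
print(round(bond_price(1000, 0.02, 10, 0.04), 2))   # ~837.78
```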
Much more likely for large investments is diversified stocks, possibly managed inside a mutual fund. Diversification mitigates the risk of temporary losses of investment value, and the stock market has a much higher historical (average) rate of return than bonds. Stocks give money to companies to spend on their business, which contributes more directly to job creation.
Even if you let your money collect dust in a low-risk savings account in a bank somewhere, the bank's business is to reinvest your money in the above. There are also other investment opportunities, but they all involve your money ultimately getting to companies or the government (though, to be fair, mostly US companies and governments).
Comment Re:Have to share this - holy crap! mod parent up (Score 1) 626
To be fair, a 'proof' that is directly experienced and scientific proof are both wholly different from a mathematical proof, which is simply a sequence of deductive steps originating from a stipulated set of axioms and definitions. Given a rational person who also happens to be a creationist, even he would agree that assuming
• A implies B
• A
then B is provable in this system. This is in contrast to scientific inquiry, where there are no axioms. Instead, there are measurements and observations, and hypotheses are created which attempt to explain the measurements and observations. If a hypothesis is successful in its explanatory or predictive power with respect to further measurements and observations, eventually we may call it a theory. At no point did we prove the hypothesis in the mathematical sense, and in fact if there came along a measurement or observation that contradicted the theory, it would have to be revised or discarded.
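That deductive step can even be checked mechanically. A minimal rendering in Lean (Lean 4 syntax assumed), just to underline that it is pure symbol-pushing from the stipulated assumptions:

```lean
-- From "A implies B" and "A", conclude "B".
example (A B : Prop) (h : A → B) (ha : A) : B := h ha
```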
Demanding to see more data before accepting a scientific theory is not an unscientific thing to do, as long as one does it honestly and with intellectual integrity. Obviously the gotcha arguments thrown around by many creationists concerning the inability to directly collect data from before the dawn of man doesn't really fulfill the spirit of science...
Comment Re:Well, teaches kids a valuable lesson (Score 1) 330
The sad thing is that the "Generation Facebook" is not going to go away. Kids are being continually conditioned to accept breaches of their privacy by the facebook model. Every 2 years or so fb rolls out some opt-out Cool New Feature which causes an initial uproar about its privacy implications. Maybe fb makes a statement or adds some specific privacy features, but eventually people forget and gradually care less and less about the violation of their privacy. Each new feature is only incrementally worse than the previous one, never enough to cause a big enough uproar to have it removed (except in a few cases), and so this systematic invasion of privacy continues.
I'm afraid this level of blatant trespassing, as you say, is only going to become more accepted and mainstream as an entire generation has its opinions on privacy eroded. This gradual desensitization to initially offensive policies is, coincidentally, the same way the Holocaust started.
Comment Re:What I never understood about the uncertainty p (Score 1) 112
Sorry, I should have been clearer. All I meant was that we wouldn't "see" the interference pattern with just a single electron, since it only excites a single atom (talking about wavefunction collapse when it hits the screen). But that's exactly right: the electron still interferes with itself, and the probability distribution of where we see it is the same as the interference pattern.
Comment Re:What I never understood about the uncertainty p (Score 2) 112
I think I can safely say that nobody understands quantum mechanics.
Richard Feynman, in The Character of Physical Law (1965)
That said, I think I can attempt to clarify some of your misunderstandings from my own understanding. In fact someone set me straight if I have any issues of my own :)
The entire notion of a point particle is essentially a classical approximation (as far as geometry goes). In fact, all the spatial information about a particle that can be known (i.e., that isn't completely hidden from the rest of the universe) is completely contained in its wave function, a complex-valued function defined at every point in space. But the wave function's evolution in time must satisfy the Schrödinger equation, and it has been shown by people smarter than I that wave function solutions *must* satisfy the uncertainty principle.
It's easiest to consider what this means in one dimension. Solutions of the Schrödinger Equation are linear combinations of sinusoidal functions of all wavelengths and velocities (with the solution for a particular particle determined, as with any differential equation, by the spatial and temporal boundary conditions). This is immediately consistent with the wave description of light and matter, as a sinusoidal function has a definite velocity but its position is not defined at all (it looks like a wave :P). So how then can we get a localized particle, like those we apparently observe often enough to build an entire classical theory around? Well, it turns out that taking linear combinations of waves of differing velocities causes local areas of destructive and constructive interference, and one can mathematically construct what's known as a wave packet. Btw, the time evolution of the wave packet in the picture on wikipedia is incorrect for solutions to the Schrödinger Equation: particle wave packets necessarily disperse over time, depending on the represented wave velocities (don't quote me on that). This means the range of represented wave velocities actually has physical significance. Anyway, there's a limit to how small the product of the localization and the spread of velocities can get, and the packet that achieves it is the Gaussian wave packet. To achieve this limit, one has to sum over essentially every possible wave velocity (with appropriate weights).
So solutions of the Schrödinger Equation can be something with no localization at all and a perfectly well-defined velocity, like a sinusoidal function, or something with a very acute (but not perfect) localization, achieved by an almost infinite range of velocities of component waves. In fact there is a very simple inequality expressing the relationship between the smallness of the localization and the range of velocities (momenta, actually)...
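A small numerical sketch of that trade-off, assuming arbitrary units (effectively ħ = 1) and an illustrative grid: build a packet from Gaussian-weighted plane waves and compare the spread in position with the spread in wave number.

```python
# Build a wave packet as a Gaussian-weighted sum of plane waves e^{ikx},
# then compute the spreads in x and in k. A Gaussian packet comes out
# close to the minimum-uncertainty product dx*dk = 1/2 (hbar = 1 units).
import numpy as np

x = np.linspace(-30, 30, 3001)
k = np.linspace(-0.5, 2.5, 601)
dx_grid, dk_grid = x[1] - x[0], k[1] - k[0]
k0, sigma_k = 1.0, 0.3                               # illustrative parameters

weights = np.exp(-(k - k0) ** 2 / (2 * sigma_k ** 2))       # amplitude per plane wave
psi = (weights[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0)

prob_x = np.abs(psi) ** 2
prob_x /= prob_x.sum() * dx_grid                            # normalize on the grid
x_mean = (x * prob_x).sum() * dx_grid
delta_x = np.sqrt(((x - x_mean) ** 2 * prob_x).sum() * dx_grid)

prob_k = weights ** 2
prob_k /= prob_k.sum() * dk_grid                            # distribution over wave numbers
k_mean = (k * prob_k).sum() * dk_grid
delta_k = np.sqrt(((k - k_mean) ** 2 * prob_k).sum() * dk_grid)

print(f"dx = {delta_x:.3f}, dk = {delta_k:.3f}, dx*dk = {delta_x * delta_k:.3f}")
```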
So all that's not that bad. The real strangeness of QM comes with what observation does to the wave function of a particle. Somehow, the act of observation (something I am not knowledgeable enough to define, but examples of which are hitting it with a photon or having it excite the screen in the double slit experiment, or even covering up a slit thus knowing it must go through the other) "collapses" the wave function of a particle back into its most localized form. The probability distribution of the center of the new localized form is given by the product of the wave function with its complex conjugate just before the observation.
The interference pattern corresponds to the probability distribution of particles when they reach the screen behind the double slit. If I fired only one particle through the double slit, it would cause a single photon (probably) to be emitted from the screen, with its location determined by the probability distribution. We can see an interference pattern because we are firing a beam of particles, not just one at a time. The kicker from the experiment is that if we observe the interference pattern (say by collecting billions of data points from electrons fired one at a time), the information about which slit each particular electron went through is hidden from the rest of the universe; it can't be determined from, say, where that electron struck the screen. This seems to be where your misunderstanding lies, at least in part. If we had observed which slit the electron went through (by covering up a slit, or by shining light on it and doing what scientists do), then the interference pattern would have disappeared.
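Here is a rough numerical sketch of that point about the statistics, using a far-field toy model with made-up parameters (wavelength, slit separation, distance to the screen): adding the two amplitudes gives fringes, while adding the two probabilities, as in the which-path case, does not.

```python
# Double-slit toy model: interference pattern from adding amplitudes vs.
# the fringe-free pattern from adding probabilities (which-path known).
# All units and parameters are arbitrary and illustrative.
import numpy as np

wavelength = 1.0
d = 20.0                  # slit separation
L = 1000.0                # distance to the screen
y = np.linspace(-200, 200, 2001)   # position on the screen
kwave = 2 * np.pi / wavelength

r1 = np.sqrt(L**2 + (y - d / 2) ** 2)   # path lengths from each slit
r2 = np.sqrt(L**2 + (y + d / 2) ** 2)

amp1 = np.exp(1j * kwave * r1) / r1
amp2 = np.exp(1j * kwave * r2) / r2

p_both_slits = np.abs(amp1 + amp2) ** 2                # amplitudes add: fringes
p_which_path = np.abs(amp1) ** 2 + np.abs(amp2) ** 2   # probabilities add: no fringes

visibility = lambda p: (p.max() - p.min()) / (p.max() + p.min())
print(f"fringe visibility, both slits open: {visibility(p_both_slits):.2f}")   # ~1
print(f"fringe visibility, which-path known: {visibility(p_which_path):.2f}")  # ~0
```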
|
ffcf71d430d40786 | There are plenty of free particles — particles outside any square well — in the universe, and quantum physics has something to say about them. The discussion starts with the Schrödinger equation:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V(x)\,\psi(x) = E\,\psi(x)$$
Say you’re dealing with a free particle whose general potential, V(x) = 0. In that case, you’d have the following equation:
And you can rewrite this as

$$\frac{d^2\psi(x)}{dx^2} + k^2\,\psi(x) = 0$$
where the wave number, k, is

$$k = \sqrt{\frac{2mE}{\hbar^2}}$$
You can write the general solution to this Schrödinger equation as

$$\psi(x) = A e^{ikx} + B e^{-ikx}$$
If you add time dependence to the equation, you get this time-dependent wave function:

$$\psi(x,t) = A e^{i(kx - \omega t)} + B e^{-i(kx + \omega t)}, \qquad \omega = \frac{E}{\hbar}$$
That’s a solution to the Schrödinger equation, but it turns out to be unphysical. To see this, note that for either term in the equation, you can’t normalize the probability density,
as long as A and B aren’t both equal to zero.
What’s going on here? The probability density for the position of the particle is uniform throughout all x! In other words, you can’t pin down the particle at all.
This is a result of the form of the time-dependent wave function, which uses an exact value for the wave number, k, and therefore exact values of the momentum, $p = \hbar k$, and the energy, $E = \frac{\hbar^2 k^2}{2m}$.
So what that equation says is that you know E and p exactly. And if you know p and E exactly, that causes a large uncertainty in x and t — in fact, x and t are completely uncertain. That doesn’t correspond to physical reality.
For that matter, the wave function

$$\psi(x,t) = A e^{i(kx - \omega t)} + B e^{-i(kx + \omega t)}$$

as it stands, isn't something you can normalize. Trying to normalize the first term, for example, gives you this integral:

$$\int_{-\infty}^{\infty} \psi_1^*(x,t)\,\psi_1(x,t)\, dx$$

And for the first term of $\psi(x,t)$, this works out to be

$$\int_{-\infty}^{\infty} \left| A e^{i(kx - \omega t)} \right|^2 dx = |A|^2 \int_{-\infty}^{\infty} dx \to \infty$$

And the same is true of the second term in $\psi(x,t)$. |
95e92c807b8cc912 | Covalent bond
From Wikipedia, the free encyclopedia
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs and the stable balance of attractive and repulsive forces between atoms when they share electrons is known as covalent bonding.[1][better source needed] For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full outer shell, corresponding to a stable electronic configuration.
In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding.
Early concepts in covalent bonding arose from this kind of image of the molecule of methane. Covalent bonding is implied in the Lewis structure by indicating electrons shared between atoms.
Types of covalent bonds
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma bonds (σ bonds) are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a sigma bond. Pi bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one sigma and one pi bond, and a triple bond is one sigma and two pi bonds.
Covalent bonds are also affected by the electronegativity of the connected atoms which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl.
Covalent structures
One- and three-electron bonds
Main article: Resonance (chemistry)
There are situations in which a single Lewis structure is insufficient to explain the electron configuration in a molecule, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example, with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N-O interaction is (2 + 1 + 1)/3 = 4/3.
Canonical resonance structures for the nitrate ion
Main article: Aromaticity
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fits the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene.
Main article: Hypervalent molecule
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher co-ordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model in molecular orbital theory and ionic-covalent resonance in valence bond theory.
Main article: Electron deficiency
Quantum mechanical description
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory.
Valence bond theory
Main article: Valence bond theory
2. The spins of the electrons have to be opposed.
His last three rules were new:
4. The electron-exchange terms for the bond involve only one wave function from each atom.
Molecular orbital theory
Molecular orbitals were first introduced by Friedrich Hund[12][13] and Robert S. Mulliken[14][15] in 1927 and 1928.[16][17] The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones.[18] Linear combinations of atomic orbitals (LCAO) can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations which correspond to the independent-particle approximation of the molecular Schrödinger equation.
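As a minimal sketch of the LCAO idea (not taken from the article): in the simplest Hückel-style treatment of a homonuclear diatomic, two identical atomic orbitals with on-site energy alpha and coupling beta combine into a bonding and an antibonding molecular orbital. The numerical values below are illustrative, and orbital overlap is neglected.

```python
# LCAO in its simplest form: diagonalize a 2x2 Hamiltonian for two identical
# atomic orbitals. Eigenvalues are alpha + beta (bonding, in-phase combination)
# and alpha - beta (antibonding, out-of-phase). Numbers are illustrative only.
import numpy as np

alpha = -13.6   # on-site atomic orbital energy, eV (illustrative)
beta = -3.0     # coupling between the two orbitals, eV (illustrative, negative)

H = np.array([[alpha, beta],
              [beta, alpha]])

energies, coeffs = np.linalg.eigh(H)
for E, c in zip(energies, coeffs.T):
    kind = "bonding" if E < alpha else "antibonding"
    print(f"E = {E:+.1f} eV ({kind}), LCAO coefficients = {np.round(c, 3)}")
```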
Bonding MOs: in-phase (constructive) combinations of atomic orbitals, lower in energy than the atomic orbitals that combine.
Antibonding MOs: out-of-phase (destructive) combinations, higher in energy than the atomic orbitals that combine.
Nonbonding MOs: orbitals with no significant interaction between the atomic orbitals (for example because of incompatible symmetries), with energies essentially unchanged from the atomic orbitals.
The two theories differ in the order that the electron configuration of the molecule is built up.[19] For valence bond theory, the atomic hybrid orbitals are filled first to produce a full valence configuration of bonding pairs and lone pairs. If several such configurations exist, a weighted superposition of these configurations is then applied. In contrast, for molecular orbital theory a weighted superposition of atomic orbitals is performed first, followed by the filling of the resulting molecular orbitals by the Aufbau principle.
Either theory has its advantages and uses. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. In particular, valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple molecular orbital theory predicts dissociation into a mixture of atoms and ions. Molecular orbital theory, with delocalized orbitals that obey its symmetry, is more suited for the calculation of ionization energies and the understanding of spectral absorption bands. Molecular orbitals are orthogonal, which significantly increases feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Although the wavefunctions generated by both theories do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction.[19] This is done by combining the valence bond covalent function with the functions describing all possible ionic configurations or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach gives too much weight to the ionic structures while the simple valence bond approach gives too little. This can also be described as saying that the molecular orbital approach neglects electron correlation while the valence bond approach overestimates it.[19]
See also
1. ^ Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life. Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6. Retrieved 2012-02-05. [better source needed]
3. ^ Gary L. Miessler; Donald Arthur Tarr (2004). Inorganic chemistry. Prentice Hall. ISBN 0-13-035471-6.
5. ^ "Chemical Bonds". Retrieved 2013-06-09.
6. ^ Langmuir, Irving (1919-06-01). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002.
7. ^ Lewis, Gilbert N. (1916-04-01). "The atom and the molecule". Journal of the American Chemical Society 38 (4): 762–785. doi:10.1021/ja02261a002.
8. ^ W. Heitler and F. London, Zeitschrift für Physik, vol. 44, p. 455 (1927). English translation in Hettema, H. (2000). Quantum chemistry: classic scientific papers. World Scientific. pp. 140–. ISBN 978-981-02-2771-5. Retrieved 2012-02-05.
9. ^ Stranks, D. R.; M. L. Heffernan, K. C. Lee Dow, P. T. McTigue, G. R. A. Withers (1970). Chemistry: A structural view. Carlton, Victoria: Melbourne University Press. p. 184. ISBN 0-522-83988-6.
17. ^ Robert S. Mulliken's Nobel Lecture, Science, 157, no. 3785, 13 – 24, (1967). Available on-line at: .
19. ^ a b c P.W. Atkins (1974). Quanta: A Handbook of Concepts. Oxford University Press. pp. 147–148. ISBN 0-19-855493-1.
External links |
fcaa43dcfcc679d0 | Varieties of Emergence
David J. Chalmers
Department of Philosophy
University of Arizona
Tucson, AZ 85721.
[[Written for the Templeton Foundation workshop on emergence in Granada, August 2002. Given the informal nature of the workshop, I haven't been especially careful with citations and such, but I should note up front that not much of what follows is fundamentally original with me. I hope that nevertheless there is something useful at least in the way I have put things together.]]
Two concepts of emergence
The term "emergence" has the potential to cause no end of confusion in science and philosophy, as it is used to express two quite different concepts. We can label these concepts strong emergence and weak emergence. Both of these concepts are extremely important, but it is vital to keep them separate. As far as I can tell, the papers for the Granada workshop are about evenly divided between papers on strong emergence and papers on weak emergence, so there is a danger of miscommunication here.
We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain. Strong emergence is the notion of emergence that is most common in philosophical discussion of emergence, and is the notion invoked by the "British emergentists" of the 1920s.
We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when truths concerning that phenomenon are unexpected given the principles governing the low-level domain. Weak emergence is the notion of emergence that is most common in recent scientific discussion of emergence, and is the notion that is typically invoked by proponents of emergence in complex systems theory. (See Bedau 1997 for a nice discussion of the notion of weak emergence.)
These definitions of strong and weak emergence are first approximations, which might later be refined. But they are enough to exhibit the key differences between the notions. As just defined, cases of strong emergence will likely also be cases of weak emergence (although this depends on just how "unexpected" is understood). But cases of weak emergence need not be cases of strong emergence. It often happens that a high-level phenomenon is unexpected given principles of a low-level domain, but is nevertheless deducible in principle from truths concerning that domain. The emergence of high-level patterns in cellular automata — a paradigm of emergence in recent complex systems theory — provides a clear example. If one is given only the basic rules governing a cellular automaton, then the formation of complex high-level patterns (such as gliders) may well be unexpected, so these patterns are weakly emergent. But the formation of these patterns is straightforwardly deducible from the rules (and initial conditions), so these patterns are not strongly emergent. Of course, to deduce the facts about the patterns in this case may require a fair amount of calculation, which is why their formation was not obvious to start with. Nevertheless, upon examination these high-level facts are a straightforward consequence of low-level facts. So this is a clear case of weak emergence without strong emergence.
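For concreteness, a minimal sketch of the glider example (the coordinate convention and the four-step check are mine, not the author's): the low-level rule is a few lines of birth-and-survival counting, yet the pattern reproduces itself one cell further along the diagonal every four steps.

```python
# Conway's Game of Life on a sparse grid: cells is a set of live (x, y) pairs.
from collections import Counter

def step(cells):
    """One update of the Life rule: birth on 3 neighbours, survival on 2 or 3."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):
    cells = step(cells)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(cells == shifted)   # True: the glider has moved one cell diagonally
```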
Strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them. That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena.
The existence of phenomena that are merely weakly emergent with respect to the domain of physics does not have such radical consequences. The existence of unexpected phenomena in complex biological systems, for example, does not on its own threaten the completeness of the catalog of fundamental laws found in physics. As long as the existence of these phenomena is deducible in principle from a physical specification of the world (as in the case of the cellular automaton), then no new fundamental laws or properties are needed: everything will still be a consequence of physics. So if we want to use emergence to draw conclusions about the structure of nature at the most fundamental level, it is not weak emergence but strong emergence that is relevant.
Of course weak emergence may still have important consequences for our understanding of nature. Even if weakly emergent phenomena do not require the introduction of new fundamental laws, they may still require in many cases the introduction of further levels of explanation above the physical level, in order to make these phenomena maximally comprehensible to us. Further, by showing how a simple starting point can have unexpected consequences, the existence of weakly emergent phenomena can be seen as showing that a simple physicalist picture of the world need not be overly reductionist, but rather can accommodate all sorts of unexpected richness at higher levels.
In a way, the philosophical morals of strong emergence and weak emergence are diametrically opposed. Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete. By contrast, weak emergence can be used to support the physicalist picture of the world, by showing how all sorts of phenomena that might seem novel and irreducible at first sight can nevertheless be grounded in underlying simple laws.
In what follows, I will say a little more about both strong and weak emergence.
Strong emergence
We have seen that strong emergence, if it exists, has radical consequences. The question that immediately arises, then, is: Are there strongly emergent phenomena?
My own view is that the answer to this question is yes. I think there is exactly one clear case of a strongly emergent phenomenon, and that is the phenomenon of consciousness. We can say that a system is conscious when there is something it is like to be that system: that is, when there is something it feels like from the system's own perspective. It is a key fact about nature that it contains conscious systems; I am one such. And there is reason to believe that the facts about consciousness are not deducible from any number of physical facts.
I have argued this case at length elsewhere (Chalmers 1996, 2002) and will not repeat the case here. But I will mention two well-known avenues of support. First, it seems that a colorblind scientist given complete physical knowledge about brains could nevertheless not deduce what it is like to have a conscious experience of red. Second, it seems logically coherent in principle that there could be a world physically identical to this one, but lacking consciousness entirely, or containing conscious experiences different from our own. If these claims are correct, it appears to follow that facts about consciousness are not deducible from physical facts alone.
If this is so, then what follows? I think that even if consciousness is not deducible from physical facts, states of consciousness are still systematically correlated with physical states. In particular, it remains plausible that in the actual world, the state of a person's brain determines their state of consciousness, in the sense that duplicating the brain state will cause the conscious state to be duplicated too. That is, consciousness still supervenes on the physical domain. But importantly, this supervenience holds only with the strength of laws of nature (in the philosophical jargon, it is natural or nomological supervenience). In our world, it seems to be a matter of law that duplicating physical states will duplicate consciousness; but in other worlds with different laws, a system with the same physical state as me might have no consciousness at all. This suggests that the lawful connection between physical processes and consciousness is not itself derivable from physical laws, but instead involves further basic laws of its own. These are what we might call fundamental psychophysical laws.
I think this provides a good general model for strong emergence. We can think of paradigm strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts. In philosophical language, they are naturally but not logically supervenient on low-level facts. In any case like this, fundamental physical laws need to be supplemented with further fundamental laws to ground the connection between low-level properties and high-level properties. Something like this seems to be what the British emergentist C.D. Broad had in mind, when he invoked the need for "trans-ordinal laws" connecting different levels of nature.
Are there other cases of strong emergence, besides consciousness? I think that there are no other clear cases, and that there are fairly good reasons to think that there are no other cases. Elsewhere (Chalmers 1996; Chalmers and Jackson 2001) I have argued that given a complete catalog of physical facts about the world, supplemented by a complete catalog of facts about consciousness, a Laplacean superbeing could in principle deduce all the high-level facts about the world, including the high-level facts about chemistry, biology, economics, and so on. If this is right, then phenomena in this domain may be weakly emergent from the physical, but they are not strongly emergent (or if they are strongly emergent, this strong emergence will derive wholly from a dependence on the strongly emergent phenomena of consciousness).
One might wonder about cases in which high-level laws, say in chemistry, are not obviously derivable from low-level laws of physics. How can I know now that this is not the case? Here, one can reply by saying that even if the high-level laws are not deducible from the low-level laws, it remains plausible that they are deducible (or nearly so) from the low-level facts. For example, if one knows the complete distribution of atoms in space and time, it is plausible that one can deduce from there the complete distribution of chemical molecules, whether or not the laws governing molecules are immediately deducible from the laws governing atoms. So any emergence here is weaker than the sort of emergence that I suggest is present in the case of consciousness.
Still, this suggests the possibility of an intermediate but still "radical" sort of emergence, in which high-level facts and laws are not deducible from low-level laws (combined with initial conditions). If this intermediate sort of emergence exists, then if our Laplacean superbeing is armed only with low-level laws and initial conditions (as opposed to all the low-level facts throughout space and time), it will be unable to deduce the facts about some high-level phenomena. This will presumably go along with a failure to be able to deduce even all the low-level facts from low-level laws plus initial conditions (if the low-level facts were derivable, the demon could deduce the high-level facts from there). So this sort of emergence entails a sort of incompleteness of physical laws even in characterizing the systematic evolution of low-level processes.
The best way of thinking of this sort of possibility is as involving a sort of downward causation. It requires basic principles saying that when certain high-level configurations occur, certain consequences will follow. (These are what McLaughlin 1993 calls configurational laws.) These consequences will themselves either be cast in low-level terms, or will be cast in high-level terms that put strong constraints on low-level facts. Either way, low--level laws will be incomplete as a guide to both the low-level and the high-level evolution of processes in the world.
(In such a case, one might respond by introducing new, highly complex low-level laws to govern evolution in these special configurations, allowing low-level laws to be complete once again. But the point of this sort of emergence will still remain: it will just have to be rephrased, by saying that non-configurational low-level laws are an incomplete guide to the evolution of processes. See Meehl and Sellars 1956 for related ideas here.)
I don't think there is anything incoherent about the idea of this sort of downward causation. (Jaegwon Kim [e.g. Kim 1992, 1999] argues against downward causation, but I'm not sure to what extent we disagree — something to discuss at the workshop.) I don't know whether there are any examples of it in the actual world, however. While it's certainly true that we can't currently deduce all high-level facts and laws from low-level laws plus initial conditions, I don't know of any compelling evidence for high-level facts and laws (outside the case of consciousness) that are not deducible in principle. Others may know more about this than me, however.
Perhaps the most interesting potential case of downward causation is in the case of quantum mechanics, at least on certain "collapse" interpretations thereof. On these interpretations, there are two principles governing the evolution of the quantum wavefunction: the linear Schrödinger equation, which governs the standard case, and a nonlinear measurement postulate, which governs special cases of "measurement". In these cases, the wavefunction is held to undergo a sort of "quantum jump" quite unlike the usual case. A key issue is that no-one knows just what the criteria for a "measurement" are; but it is clear that for this interpretation to work, measurements must involve certain highly specific criteria, most likely at a high level. If so, then we can see the measurement postulate as itself a sort of configurational law, involving downward causation. Of course in this case the configurational law is in effect already built into the standard formulation of the theory, via the measurement postulate. So, alongside consciousness, this gives a second candidate case of strong emergence. Both of these can be seen as "strong" varieties of emergence in that they involve in-principle nondeducibility and novel fundamental laws. But they are quite different in character. If I am right about consciousness, then it is a case of an emergent quality, while if the relevant interpretations of quantum mechanics are correct, then it is more like a case of emergent behavior.
One can in principle have one sort of radical emergence without the other. If one has emergent qualities without emergent behavior, one has an "epiphenomenalist" picture on which there is a new fundamental quality that plays no causal role with respect to the lower level. If one has emergent behavior without emergent qualities, one has a picture of the world on which the only fundamental properties are physical, but on which their evolution is governed in part by high-level configurational laws.
One might also in principle have both emergent qualities and emergent causation together. If so, one has a picture on which a new fundamental quality is itself involved in laws of "downward causation" with respect to low-level processes. This last option can be illustrated by combining the cases of consciousness and quantum mechanics discussed above, as in the familiar interpretations of quantum mechanics according to which it is consciousness itself that is responsible for wavefunction collapse. On this picture, the emergent quality of consciousness is not epiphenomenal, but plays a crucial causal role.
My own view is that there is just one sort of emergent quality (relative to the physical domain), namely consciousness. I don't know whether there is any emergent causation, but it seems to me that if there is any emergent causation, quantum mechanics is the most likely locus for it. If both sorts of emergence exist, it is natural to examine the possibility of a close connection between them, perhaps along the lines mentioned in the last paragraph. For now, however, I think the question remains wide open.
Weak emergence
Weak emergence does not yield the same sort of radical metaphysical expansion in our conception of the world as strong emergence, but it is no less interesting for that. I think it is vital for understanding all sorts of phenomena in nature, and in particular to understanding biological, cognitive, and social phenomena. Others can address those issues better than I can, however. Instead, I'll conclude by attaching a something I wrote a number of years ago (as a graduate student in 1990) but never published. This was in effect a meditation on clarifying and refining the notion of weak emergence, as it applies to a number of familiar examples.
Emergence is a tricky concept. It's easy to slide it down a slippery slope, and turn it into something implausible and easily dismissable. But it's not easy to delineate the interesting middle ground in between. Two unsatisfactory definitions of emergence, at either end of the spectrum:
(1) Emergence as "inexplicable" and "magical". This would cover high-level properties of a system that are simply not deducible from its low-level properties, no matter how sophisticated the deduction. There is little evidence for this sort of emergence, except, perhaps, in the difficult case of consciousness, but let's leave that aside for now. All material properties seem to follow from low-level physical properties. This is not usually the sort of "emergence" intended by people who invoke the notion in contemporary scientific discussions, but it is near enough to the neighborhood that it often leads to confusion.
(2) Emergence as the existence of properties of a system that are not possessed by any of its parts. This, of course, is so ubiquitous a phenomenon that it's not deeply interesting. Under this definition, file cabinets and decks of cards (not to mention XOR gates) have plenty of emergent properties — so this is surely not what we mean.
The challenge, then, is to delineate a concept of emergence that falls between the overly radical (1) and the overly general (2). After all, serious people do like to use the term, and they think they mean something interesting by it. It probably will help to focus on a few core examples of "emergence":
(A) The game of Life: High-level patterns and structure emerge from simple low-level rules.
(B) Connectionist networks: High-level "cognitive" behaviour emerges from simple interactions between dumb threshold logic units.
(C) The operating system (Hofstadter's example): The fact that overloading occurs just around when there are 35 users on the system seems to be an emergent property of the system.
(D) Evolution: Intelligence and many other interesting properties emerge over the course of evolution by genetic recombination, mutation and natural selection.
Note that in all these cases, the "emergent" properties are in fact deducible (perhaps with great difficulty) from the low-level properties (perhaps in conjunction with knowledge of initial conditions), so a more sophisticated concept than (1) is required. Another stab at a definition might be:
(3) Emergent = "deducible but not reducible". Biological and psychological laws and properties are frequently said not to be reducible to physical laws and properties. For many reasons, not the least being that the high-level laws/properties in question might be found associated with all kinds of different physical laws/properties as substrates. (A universe without protons and electrons might nevertheless include learning and memory.)
There are some problems with this definition, though. Firstly, it's not clear what is gained by trying to explicate emergence in terms of the almost-equally-murky concept of "reduction". Secondly, it seems to let in some not-paradigmatically-emergent phenomena, and it's not clear how some emergent phenomena like (A) or (C) would fit this definition. I think that (3) picks out a very interesting class, but it's not quite the class we're after. It's on the right track, though, I think.
The notion of reduction is intimately tied to the ease of understanding one level in terms of another. Emergent properties are usually properties that are more easily understood in their own right than in terms of properties at a lower level. This suggests an important observation: Emergence is a psychological property. It is not a metaphysical absolute. Properties are classed as "emergent" based at least in part on (1) the interestingness to a given observer of the high-level property at hand; and (2) the difficulty of an observer's deducing the high-level property from low-level properties. The properties of XOR are an obvious consequence of the properties of its parts. Emergent properties aren't. We might as well give this a number:
(4) Emergent high-level properties are interesting, non-obvious consequences of low-level properties.
This still can't be the full story, though. Every high-level physical property is a consequence of low-level properties, usually non-obviously. It feels unsatisfactory, for instance, to say that computations performed by a COBOL program are an emergent property relative to the low-level circuit operations — at least this feels much less "emergent" than a connectionist network. So something is missing. The trouble seems to lie with the complex, kludgy organization of the COBOL circuits. The low-level stuff may be simple enough, but all the complexity of the high-level behaviour is due to the complex structure that is given to the low-level mechanisms (by programming). Whereas in the case of connectionism or the game of life it feels that we have simplicity in both low-level mechanisms and their organization. So in those cases, we have much more of a "something for nothing" feel. Let's try for another number:
(5) Emergence is the phenomenon wherein complex, interesting high-level function is produced as a result of combining simple low-level mechanisms in simple ways.
I think this is much closer to a good definition of emergence. Note that COBOL programs, and many biological systems, are excluded by the requirement that not only the mechanisms but their principles of combination be simple. (Of course simplicity, complexity and interestingness are psychological concepts, at least for now, though we might try to explicate them in terms of Chaitin-Kolmogorov-Solomonoff complexity if we felt like it. My intuition is that this is likely to prove a little simplistic, although Chaitin has an interesting paper that attempts to derive a notion of the "organization" of a system using similar considerations.) And note also that most things that satisfy this definition should also satisfy (4) — due to our feeling that simple principles should have simple consequences (or else complex but uninteresting consequences, like random noise). Any complex, interesting consequence is likely to be non-obvious.
This does indeed fit in with the feeling that emergence is a "something for nothing" phenomenon — though in a more subtle and satisfactory way than set forth in (1), for instance. It's a phenomenon whereby "something stupid buys you something smart". And most of our examples fit. The game of Life and connectionist networks are obvious: interesting high-level behaviour as a consequence of simple dynamic rules for low-level cell dynamics. In evolution, the genetic mechanisms are very simple, but the results are very complex. (Note that there is a small difference, in that in the latter case the emergence is diachronic, i.e. over time, whereas in the first two cases the emergence is synchronic, i.e. not over time but over levels present at a given time.)
We're still not completely there — it's not clear how (C), the operating system example, fits into this paradigm of emergence. But throwing in a smidgen of teleology should get us the rest of the way. I.e., we have to notice that everything here has to be relativized to design. So we design the game of Life according to certain simple principles, but complex, interesting properties leap out and surprise us. Similarly for the connectionist network — we only design it at a low level (though in this case we hope that complex high-level properties will emerge). Whereas in the COBOL case — and in the case of much traditional AI — you only get out what you put in (N.B. I'm not necessarily knocking this: at least here, I'm trying to explicate emergence, not to defend it). And now the operating system example fits in well. The design principles of the system in this case are quite complex — unlike the other cases that fit (5) above — but still the figure "35" is not a part of that design at all. So:
(6) Emergence is the phenomenon wherein a system is designed according to certain principles, but interesting properties arise that are not included in the goals of the designer.
Notice the appearance of the word "goal" — this is important; any design is goal-relative. So the notion now is quite teleological. I notice that Russ Abbott makes a similar point in a recent posting. Notice, however, that as we've conceded that emergence is a psychological property, we're able to construe teleology in a psychological, non-absolute way. So for our purposes here, we only need the appearance of teleology. This is nice, because it allows us to include systems where, strictly speaking, "design" doesn't apply at all. In evolution, for instance, there is no "designer", but it is easy to treat evolutionary processes as processes of design. On more than one level.
We can view evolution as teleological at the level of the gene — as in Dawkins' theory, for instance. Then the appearance of complex, interesting high-level properties such as intelligence is quite emergent. We also can reconstrue evolution as teleological at the level of the organism (this is perhaps a more straightforward Darwinian view of things). On this construal, the most salient adaptive phenomena like intelligence are no longer emergent, but the goal of the design process. However, this view does open up the possibility of other kinds of emergent phenomena: firstly, non-selected-for byproducts of the evolutionary process (such as Gould and Lewontin's "Spandrels"); secondly and more intriguingly, it allows an explanation for why consciousness seems emergent. Raw consciousness may not have been selected for, but it somehow emerges as a byproduct of selection for adaptive processes such as intelligence.
It's probably foolish to search for a definitive construal of "emergence": like most psychological concepts, it probably is best construed as a "family resemblance" — each of the "definitions" outlined above might play some role. Personally, I'm happiest with a combination of (5) and (6) — with (5) being the "core" variety of emergence, and (6) being a more general variety of which (5) is a special case.
Bedau, M. 1997. Weak emergence. Philosophical Perspectives 11:375-399.
Broad, C.D. 1925. The Mind and its Place in Nature. Routledge.
Chalmers, D.J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chalmers, D.J. 2002. Consciousness and its place in nature.
Chalmers, D.J. & Jackson, F. 2001. Conceptual analysis and reductive explanation. Philosophical Review 110:315-61.
Kim, J. 1992. The nonreductivist's trouble with mental causation. In (J. Heil & A. Mele, eds) Mental Causation. Oxford University Press.
Kim, J. 1999. Making sense of emergence. Philosophical Studies 95:3-36.
McLaughlin, B.P. 1992. The rise and fall of British emergentism. In (A. Beckermann, H. Flohr, & J. Kim, eds) Emergence or Reduction?: Prospects for Nonreductive Physicalism. De Gruyter. |
5f3d2d698c43755a |
How can energy be quantized if we can measure energy as, say, 1.56364, 5.7535, or 6423.654 kilojoules, with decimals? Thanks
Also, isn't quantization supposed to mean that energy comes in discrete bits, meaning you cannot divide, let's say, 1 bit of energy?
3 Answers
As far as the first part of your question goes, just having decimals in the number does not mean the energy levels are no longer quantized. Quantization of energy simply means that there are only specific energies that particles can take under certain circumstances. For example, you could say particle A can only have one of the following energies: {1.56364, 5.7535, 6423.654} kJ. Limiting the particle to these three energies is what it is meant by quantization of energy.
Also, there is no smallest bit of energy, for example the kinetic energy of a free particle can take a continuous range.
Mathematically, I am not certain how this is formulated. Off the top of my head, I would wager that any countable set could be considered quantized, but that would include the rationals, which are dense in the reals, so it really wouldn't be much of a quantization.
tl;dr of how it's formulated: energies are eigenvalues of the Hamiltonian operator. The eigenvalues can be discrete (which is the technical term for being limited to selected values) or continuous. Jerry's answer has some more details. – David Z Sep 27 '12 at 4:37
Typically, in quantum mechanics, bound states are quantized and free/scattering states are not. This is because bound states, by the mere fact that they're constrained to a certain area, have to satisfy certain boundary conditions, and these conditions cannot be satisfied for a continuous range of energies.
The classic example of this is the infinite square well potential, where $V(x) = 0$ if $0<x<a$, and $V(x) =\infty$ elsewhere. Then, the particle will have zero probability of appearing outside of the well, and will have to satisfy the zero-potential Schrödinger equation $E\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi$ inside of the well. For simplicity, we'll only consider one-dimensional motion.
In this case, we see right away that the basis states of our solutions have to satisfy $\psi = A\sin\left(\frac{\sqrt{2mE}}{\hbar}x+\phi\right)$, and we also know that the wave function must be continuous, and that it is restricted to be zero for $x<0$ and $x>a$. We can satisfy the first boundary condition by choosing $\phi=0$, but the second one is not satisfied for all values of the energy. Instead, it is necessary that $\frac{\sqrt{2mE}}{\hbar}a=n\pi$, where $n$ is some integer. Thus, the allowed energies of 'pure' states of this system are quantized, and take the values $E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}}$.
For any other bound state, you will find yourself using similar logic about boundary conditions, albeit with much, much more complexity. Note, however, that it is also the case that we can construct a general state out of the energy eigenstates, $\Psi = \sum a_{n}\psi_{n}$, and that the expectation value for the energy of $\Psi$ will be $\sum|a_{n}|^{2}E_{n}$, so the "average" energy of a state is still allowed to be continuous (and in the case of the infinite square well, can actually take any value greater than the ground state energy).
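To make the formula concrete, here is a quick numerical evaluation of those levels for an electron, with an illustrative well width of 1 nm (the width is my choice, not part of the original answer):

```python
# Energy levels of the 1D infinite square well, E_n = n^2 pi^2 hbar^2 / (2 m a^2),
# evaluated for an electron in a 1 nm wide well (illustrative width).
import math

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
a = 1e-9                  # m
eV = 1.602176634e-19      # J

for n in range(1, 5):
    E_n = n**2 * math.pi**2 * hbar**2 / (2 * m_e * a**2)
    print(f"E_{n} = {E_n / eV:.3f} eV")
# Output grows as n^2: roughly 0.38, 1.50, 3.38, 6.02 eV.
```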
I see that the wave function must be continuous as a postulate, but is there an explanation for why that must be true? – jcohen79 Sep 27 '12 at 4:22
The momentum operator is $-i\hbar\partial/\partial x$. If the wavefunction is spatially discontinuous that would imply infinite momentum. – John Rennie Sep 27 '12 at 9:27
An excellent way to understand how the wave function works in quantum mechanics is to study the model of the hydrogen atom. We can see in this model that the quantum variables $n,l,m$ are effectively variables that determine the shape of the spatial density associated with the detection of an electron. The quantum aspect is that the variables $n,l,m$ are integers, whereas the continuous aspect is the density associated with the wave function. It is important to understand that the density function is a space-filling function. This means that there is a value of the function associated with each point in space.
|
de15a0f26bd77a15 | Saturday, June 30, 2007
Impressions from the Loops '07
The Loops 2007 was held this year at the University in Morelia, Mexico, in a very nice building (see also Stefan's previous post). The auditorium had a stage with the speaker in the spotlight, the seats were very comfortable to doze off in, and the hallways had very picturesque pillars and arches (see photo to the left).
My head is still spinning a bit, trying to process all the information gathered here (well, it might also be the lack of oxygen in the air; it is rather polluted, and the high altitude doesn't help either). It's been my first conference in this community, and I have to say that of all the conferences I've been to this year, the Loops has been the nicest experience. The atmosphere has been very welcoming, open-minded and constructive.
My talk yesterday on 'Phenomenological Quantum Gravity' (slides here) went well (that is to say, I stayed roughly within the time limit and didn't make any completely embarrassing jokes). Though I wasn't really aware that I would be the only one at this conference speaking about DSR. If I had known, I might have extended my summary of that topic (there was a DSR talk scheduled by Florian Girelli, see picture to the right, but he changed the topic on short notice, which I didn't know).
However, after more than a month of traveling, I am sitting here in the inner yard of the hotel and try to remind myself why I go to conferences:
• Because it's just such a nice experience to arrive in a foreign city where nobody understands your language, without any baggage, after a 36 hour trip, with an 8 hour jetlag, having just figured out that the credit card doesn't work, and the hotel doesn't have a reservation for you - and then to find the conference site with familiar faces, the air filled with words like 'propagator', 'manifold' and 'background independence'.
• Because one meets old and new friends, because there's a conference dinner or reception, and plenty of free coffee and cookies.
• Because the registration fee includes a welcome package that usually features a more or less useful bag, a notebook and a pen, and a significant amount of tourist information. Occasionally there are some surprises to that, e.g. this time the bag was a woven shoulder bag, or one of the previous SUSY conferences featured a squeezable brain...
• Because some people present new and so far unpublished results.
• To see and be seen.
• To inspire and get inspired...
• And of course to blog about it ;-)
There has been a significant amount of braiding at this conference (for program and abstracts see here), with talks by Yidun Wan, Jonathan Hacket, Lee Smolin and Sundance Bilson-Thompson, the latter shown in the picture below with John Swain
Olaf Dreyer, preparing his talk
Some people from the blogosphere whom I've met in person here. Garrett Lisi and Frank Hellmann:
Alejandro Satz (from Reality Conditions) and Yidun Wan (from Road To Unification)
And here is a photo of Carlo Rovelli and Abhay Ashtekar - pen, notebook and coffee included:
Admittedly, I found the phenomenological part of this conference somewhat underrepresented. Indeed, I found myself joking that I am the phenomenology of the conference! Likewise, Moshe Rozali (whom you might also know from comments on this blog) has been the String Theory of the conference. He gave a very interesting talk about the meaning of background independence.
This afternoon, I am chilling out (okay, actually I am writing referee reports that were due about a month ago). Below is a picture of the hotel's inner yard where I am sitting (taken yesterday; right now it is raining). Tomorrow I am flying back to Canada, and I am really looking forward to sleeping in my own bed. A nice weekend to all of you!
[Try clicking on the photos to get a larger resolution.]
Updates: Chanda just sent me a photo she took yesterday evening. I am very pleased about the truly intellectual expression on my face, must be the glasses.
And Garrett took this nice photo.
Friday, June 29, 2007
Philosophia Naturalis Blog Carnival
I have to admit that it took some time until I understood the idea of the "Philosophia Naturalis Blog Carnival", even more so as the actual blog just contains links to posts at different other blogs. The idea of these "Philosophia Naturalis" editions, hosted by a different blog every month, is to publish a collection of interesting posts on topics in the physical sciences that have appeared over the last month. Thus, they provide a selection of noteworthy reading out there, and help to give you an overview of interesting blogs dealing with physics and related topics.
This month's Philosophia Naturalis #11 is hosted by geologist Chris Rowan at Highly Allochthonous, and I am very proud to see this blog represented by both its contributors - with Bee on Kaluza-Klein, and myself about the Bouncing Neutrons!
If you have time to waste over the weekend, there may be worse ways to spend it than reading some of the posts, presented by Chris so cogently according to the 50 orders of magnitude of characteristic length scales they cover. I've especially enjoyed the writings from astronomy and the earth sciences. Indeed, if you are a wannabe amateur geologist like me, you will probably like Chris' blog anyway, and wonder why you have not chosen a subject where you can make cool field trips to Namibia...
Have a nice weekend!
Wednesday, June 27, 2007
Ever heard of planet Eris?
This is planet Eris, in the outskirts of the Solar System, as seen through the eyes of the Hubble Space Telescope:
The Hubble Space Telescope's Advanced Camera for Surveys took this image of Eris in December 2005. The analysis of several such photos yields a diameter of Eris of 2400±100 km, or 1500±60 miles. (Credits: HubbleSite News Release STScI-2006-16, M. Brown)
It's such a faint and blurred blob even in the Hubble Space Telescope because it is quite small, and far away from Earth: At the moment, Eris is close to its aphelion, at a distance of 97 astronomical units - that's 97 times the mean distance of the Earth from the Sun! You can explore its eccentric and highly inclined orbit with this applet from the JPL.
In fact, according to the new definition of the International Astronomical Union from last August, Eris is not a planet, but only a dwarf planet. However, it is larger than Pluto, and it has more mass than Pluto, as was reported in a beautiful short note in Science two weeks ago [Ref 5]! So, for all those who prefer to stick to the old definition of a planet, it may be the true ninth planet - unless some other, even larger guy shows up from out there in the Kuiper belt.
Eris was discovered by astronomers Michael Brown, Chad Trujillo, and David Rabinowitz, in images taken in October 2003 [Ref 1]. It was given the provisional designation 2003 UB313, nicknamed Xena for short. Soon after the discovery, its size could be measured by observations with a radio telescope of the Max-Planck Society [Ref 2] and with the Hubble Space Telescope [Ref 3], and it turned out that the new planet is larger than Pluto!
Moreover, there is a small moon in orbit around Eris, which was given the nickname Gabrielle. On September 14, 2006 the International Astronomical Union made official the names Eris for the planet and Dysnomia for its satellite.
A comparison of the sizes of (left to right) Eris, Pluto and Charon, the Moon, and the Earth. Eris' moon Dysnomia is not shown, but it is much smaller than Eris (diameter: 2400±100 km from HST, 3000±400 km from radio observation), Pluto (2300 km), Charon (1200 km), the Moon (3500 km), and the Earth (12800 km). (Credits: Max-Planck Gesellschaft Press Release News/SP/2006(10), Frank Bertoldi)
The discovery of Eris, and of several other large objects in orbits beyond Neptune, had spurred the hot debate about what a planet is, which then led to the demotion of Pluto from planet to "dwarf planet" in August 2006. So it is fitting that Eris is the Greek goddess of strife and discord - Dysnomia, her daughter, is the goddess of lawlessness.
But in a time when ancient Greek gods and goddesses are reduced to large balls of rock and gas in space, even the goddess of lawlessness is subject to Newton's universal law of gravitational attraction. And this allows us, in an elementary, elegant and classical way, to determine the mass of Eris.
The photo on the left is an image of Eris and its moon Dysnomia, the small spot left of Eris, taken with the Hubble Space Telescope on 30 August 2006. Dysnomia's projected orbit around Eris is superimposed on this photo in the image on the right (Credits: HubbleSite News Release STScI-2007-24, M. Brown)
In a first step, the orbit of Dysnomia around Eris has been determined from several observations with the Hubble Space Telescope and the Keck telescope in Hawaii.
The orbit of Dysnomia, the moon of Eris, as reconstructed from observations using the Keck telescope on 20, 21, 30, and 31 August 2006 and the Hubble Space Telescope on 3 December 2005 and 30 August 2006. Observations are shown as crosses, the predicted positions at the times of observation are shown by circles. The solid circle in the center is 10 times the actual angular size of Eris. (Fig. 1 from Brown and Schaller, Science 316 1585 (2007 June 15), doi:10.1126/science.1139415. Reprinted with permission from AAAS.)
The observation of Dysnomia as shown in this figure reveals an apparent diameter of the orbit of roughly 1.1 arcsec, which, at the distance of 97 Astronomical Units, or 97 × 150 million km ≈ 1.46·10^10 km, corresponds to a diameter of 1.46·10^10 km × 1.1 × 2π / (360 × 3600) ≈ 78000 km. A more detailed analysis yields an essentially circular orbit with a semimajor axis of r = 37500±200 km.
Now, this can be used to deduce the mass M_E of Eris!
Using elementary Newtonian mechanics, we equate the gravitational attraction between Eris and Dysnomia at distance r with the centripetal force,
G M_E m_D / r^2 = m_D r ω^2.
The frequency ω = 2π/T has to be calculated from the orbital period T of Dysnomia. With the equivalence principle at work, the mass of the satellite cancels out, and we can solve for the mass of Eris:
M_E = r^3 ω^2 / G.
Now, the orbital period of Dysnomia has been determined quite precisely as 15.772±0.002 days. This yields ω = 2π/(15.772 d × 24 h/d × 3600 s/h) ≈ 4.61·10^-6 s^-1. Newton's constant is G = 6.673·10^-11 m^3 kg^-1 s^-2, and putting everything together, the mass of Eris comes out as
M_E = 1.7·10^22 kg.
For comparison, Pluto has a mass of 1.3·10^22 kg, the Moon of 7.35·10^22 kg, and the Earth of 5.97·10^24 kg. This means the masses of Eris, Pluto, the Moon, and the Earth compare as 1.3 : 1 : 5.6 : 459.
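For anyone who wants to plug in the numbers themselves, here is a minimal Python sketch of the calculation above (my own back-of-the-envelope script; the input values are the ones quoted in the text):

import math

AU = 1.496e11                      # astronomical unit in m
arcsec = math.pi / (180 * 3600)    # one arcsecond in radians

# apparent diameter of Dysnomia's orbit: ~1.1 arcsec seen from ~97 AU
d_orbit = 97 * AU * 1.1 * arcsec
print(f"projected orbit diameter ~ {d_orbit / 1e3:.0f} km")   # ~78000 km

G = 6.673e-11                      # Newton's constant, m^3 kg^-1 s^-2
r = 3.75e7                         # semimajor axis of Dysnomia's orbit, m
T = 15.772 * 24 * 3600             # orbital period, s

omega = 2 * math.pi / T            # ~4.6e-6 s^-1
M_eris = r**3 * omega**2 / G       # from G M_E m_D / r^2 = m_D r omega^2

print(f"M_Eris ~ {M_eris:.2g} kg")                        # ~1.7e22 kg
print(f"Eris/Pluto mass ratio ~ {M_eris / 1.3e22:.2f}")   # ~1.3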
And so, indeed, Eris and Pluto are on equal footing, revealed by the goddess of lawlessness and Newton's law!
Here are some "historical" references about Eris:
1. Discovery of a Planetary-sized Object in the Scattered Kuiper Belt, by M.E. Brown, C.A. Trujillo, and D.L. Rabinowitz, The Astrophysical Journal 635 L97-L100 (2005 December 10) [PDF, ADS entry], arXiv:astro-ph/0508633 [see also: Eris webpage by Michael Brown]
2. The trans-neptunian object UB313 is larger than Pluto, by F. Bertoldi, W. Altenhoff, A. Weiss, K.M. Menten and C. Thum, Nature 439 563-564 (2006 February 2) [PDF, ADS entry], doi:10.1038/nature04494 [see also: Press release by the Max-Planck Gesellschaft, comment by Frank Bertoldi]
3. Direct Measurement of the Size of 2003 UB313 from the Hubble Space Telescope, by M.E. Brown, E.L. Schaller, H.G. Roe, D.L. Rabinowitz, and C.A. Trujillo, The Astrophysical Journal 643 L61-L63 (2006 May 20) [PDF, ADS entry] [see also: Press Release by the HubbleSite]
4. Satellites of the Largest Kuiper Belt Objects by M.E. Brown et al., The Astrophysical Journal 639 L43-L46 (2006 March 1) [PDF, ADS entry], arXiv:astro-ph/0510029 [see also: Press Release by the Keck Observatory, News Item by Marcos van Dam, Dysnomia, the moon of Eris - webpage by Michael Brown]
5. The Mass of Dwarf Planet Eris by Michael E. Brown and Emily L. Schaller, Science 316 1585 (2007 June 15), doi:10.1126/science.1139415 [see also: Press Release by the HubbleSite News item by the Planetary Society, Press Release by the Keck Observatory]
Monday, June 25, 2007
Loops'07 in Morelia
(Credits: Wikipedia)
Sent from a researcher in motion
Saturday, June 23, 2007
The GZK cutoff
So stay tuned...
Thursday, June 21, 2007
Random Sampling: Scientific American, October 1960
it's all "rocket science":
Monday, June 18, 2007
Save Money on Health Insurance
Mumbai, India, June 17th 2007: Pradeep Hode, a 30-year-old from Diva in Thane and a chronic tuberculosis patient, had to undergo emergency surgery on Friday morning during which 117 coins were removed from his stomach. Based on hearsay and some bizarre logic of his own, the man had started swallowing coins, hoping that the heavy metal would cure him.
(From the Times of India, In search of cure, tuberculosis patient swallows 117 coins.)
This must be a black day for those who think money cures everything...
Saturday, June 16, 2007
Trains and Airplanes
Taking the plane is usually a fast, and often quite convenient, way of travelling. But sometimes thunderstorms interfere with the flight plan, and then it can happen that all flights but one from Munich to Frankfurt are cancelled. And if you have bad luck, you sit at the airport and wait and wait ... only to hear that you either have to spend the night there or take a train which, had you taken it earlier, would have brought you to your destination hours ago... That's what happened to Bee yesterday - and that's why I'll meet her this morning at the train station, instead of at the airport yesterday evening.
Meanwhile her cellphone ran out of battery, so here is the last mail (rough translation):
From: sabine[@]
To: scherer[@]********.de
Subject: still in munich
I missed the onward flight to Frankfurt and was rebooked onto the next flight. Right now there's a really impressive thunderstorm outside; one can't see anything except loads of water running down the window and the occasional lightning flash. Unfortunately, it seems our aircraft was struck by lightning during touchdown and it's not yet clear whether it can take off again [it turned out later it had to go out of service].... *argh* there was just an announcement that the airport is temporarily closed. Will try to find an outlet, I am running out of battery... say hello to the blogosphere ;-)
So that I don't forget what she looks like after all these delays, she has sent me a recent photo taken at the Warsaw conference wine and cheese reception - one of the more important events at every conference:
Thanks to Akin Wingerter for the photo - and I am off for the train station.
Friday, June 15, 2007
Positively Crazy
Some inspiration for your next seminar:
"better get used to staying up all night" - yo, man.
Wednesday, June 13, 2007
I meant to tell you something about the String Pheno, but the slides are still not online. So instead, just a lovely photo from the old part of Frascati, the village where it took place. I am on my way back to Italy, this time to Trieste, where I will attend the workshop From Quantum to Emergent Gravity: Theory and Phenomenology.
Tuesday, June 12, 2007
Blog Life
My husband just sent me an email, reporting that a former colleague pointed out this very interesting article at
which is part of a monthly column:
You see, we are in good company :-) I am very flattered by such an honor and find the text remarkably accurate (well, I didn't know how to spell 'Landolt-Börnstein').
The mentioned Aero chocolate is here, and the 'humorous take on the recent debate over the status of string theory' is here. The 'lengthy and thoughtful posts' as well as the inspiration series you find in the sidebar.
Monday, June 11, 2007
Can you spot a fake smile? Take the test.
Especially recommended when sitting in the afternoon session while the sun outside is shining brightly...
Why Are There Always So Many Other Things To Do?
Distractions, Like Butterflies Are Buzzing 'Round My Head...
~ Paul McCartney, Distractions
Saturday, June 09, 2007
Femme fatale, post mortale
Oh well, I promised you photos from Europe, didn't I? Here's what you look like if you hang above a door frame for too long ;-)
Seen somewhere in Rome around here, close by the Piazza del Quirinale. Still think you like buildings where you can feel the presence of the ancestors?
Friday, June 08, 2007
Hello from Warsaw!
And here I am in Warsaw for the next conference, the Planck 2007. I am very happy to report that this time my-stupid-bag arrived with me. However, at the airport I noticed that I had managed to book what I thought was a hotel without noting down a street address. It took me some back and forth to find it out (since my Polish is even worse than my Italian). The taxi driver dropped me off and pointed vaguely in a direction, since the address turned out to be in a pedestrian-only zone. After pulling my-stupid-bag over several cobblestone roads, I found the house. There I was told that it's not a hotel at all. They rent apartments in 'the old town', and mine would actually be in another building, and also, would I please pay in advance, preferably in cash.
I probably didn't exactly contribute to Germany's good reputation by simply refusing to pay anything before I had seen the apartment. However, I shouldn't have worried. The apartment is great, larger than mine at home, and has a living room as well as a completely equipped kitchen, including washing machine and dishwasher. And it is less expensive than every hotel I could find in this area (I was pretty late with booking).
The only drawback is the absence of any internet connection (the reason being 'there are mostly older ladies staying with us'). Right now, I am sitting on a bench in the middle of Warsaw's old town, blogging over an open-access wireless that has pretty good bandwidth. There is a small fountain in the middle and the place is surrounded by lovely colorful old houses, most of which have restaurants. Lots of people are sitting here, chatting, having dinner, drinking beer. Pigeons are hopping around, children are playing with the water in the fountain. A guy behind me just started playing the violin; it's incredibly bad. Oh no! I recognize that song: 'Und wenn wir alle Englein wären....' ('And if we all were little angels...').
I spent a considerable amount of time trying to get some fruit or vegetables, but none of the grocery stores I found had any. On the other hand, they had a truly impressive selection of sausages, bacon and all other kinds of very dead-looking meat. I figure it's not a good place for a vegetarian. The only person I could find on the street who spoke English was a German, who explained that he had only arrived two hours earlier and asked whether I had an idea where to change Euros into Zloty (the stores still don't take Euros). Ah, the guy with the violin has moved on. Now there's somebody with an accordion approaching...
I just love to sit and watch people. Meanwhile it has gotten dark and next to the fountain is a woman juggling torches. Unfortunately, I can see lots of clouds in the north and it looks like rain. Plus, there are plenty of mosquitoes, and the accordion isn't convincing either. So I'll pack up my laptop and try to get some sleep to be fit for all the physicists tomorrow.
PS: Oh, and could someone perhaps tell me how to say 'Where, please, can I get sanitary pads?' in Polish, and how to pronounce it?
Thursday, June 07, 2007
The Early Extra Dimensions
I have been fascinated by the idea of extra dimensions (XDs) since long before I finished high school. Like apparently many other theoretical physicists, I was a science fiction fan then. Besides randomly reading everything labeled 'SF', I made my way through a considerable part of the Perry Rhodan series, which I loved for picking up political topics and projecting them into the future [1]. Admittedly, the technical details somehow escaped me, especially when it came to time travel and hyperspace.
This is just to say that the topics of hyperspace and XDs have inspired generations of physicists. And for whoever it was who first did the calculation showing that string theory needs extra dimensions to make sense, it must have been one of the most exciting moments I can imagine for a theoretical physicist.
But XDs have come a long way, and were around long before string theory. People sometimes ask me why my talks never mention the earlier works on the topic. The reason is that the theories with XDs proposed in the 1920s by Theodor Kaluza and Oskar Klein differ in their underlying idea from the 'modern' XDs. Yet this usually takes too much time to clarify in a talk, so I rather skip it. However, since you - and yes, I mean YOU who are just raising your eyebrows - are of course the most attentive reader there is, I want to elaborate somewhat on these 'early' XDs, since I have noticed that very few people actually read the original works by Kaluza and Klein.
General Relativity
The first mention of adding another dimension to the three space-like dimensions that we experience every day goes, to my knowledge, back to Nordström in 1913 [2]. He, however, did not yet use General Relativity (GR) to build his theory upon. Since we know today that the gravitational potential is not a scalar field, but described by the curvature of space-time, let us skip to the next attempt, which uses GR as we know it today.
GR couples the metric tensor (g) to a source term of matter fields, whose characteristics are encoded in the stress-energy tensor of the matter. All kinds of energy and matter result in such a source term, and hence cause the metric to deviate from flat space. This theory does not say anything about the origin of the source terms. The matter and its properties have to be described by another theory - for example by electrodynamics. Electrodynamics, on the other hand, has a similar problem. The source of the electromagnetic field (charged particles) is not described by Maxwell's equations [5]. They need to be completed by further equations, e.g. the Dirac equation.
At the beginning of the last century, physicists had just understood gravity as a geometrical effect instead of a field in Minkowski space, so it was only natural to try the same for other fields as well, with the obvious next choice being the electromagnetic field. The idea of the early XDs is plain and simple. Einstein's field equations are a set of non-linear differential equations for the metric tensor. They are built up of the Ricci tensor (two indices), which is a contraction over the full curvature tensor (four indices), and the curvature scalar - a further contraction of the Ricci tensor. Such a contraction is basically a sum over two indices. The indices on these tensors label space-time directions - that is, in the standard case of GR with three space and one time dimension, they run from 1 to 4 (or, depending on taste, from 0 to 3).
Now, if one had an additional dimension, two things would happen to Einstein's field equations. First, one has more equations because there are more free indices. Since the Ricci tensor and the metric are symmetric, the number of independent equations is D(D+1)/2, where D is the total number of dimensions. The second thing that happens is that the equations with indices belonging to the 'usual' directions acquire additional terms, since the sums run over the additional indices as well. The trick is then to separate the usual part (sum from 1 to 4) from the additional part (sum over the extra dimension), shift the additional part to the other side of the equations, and read it as a source term. In this way, one obtains a source term even if the higher-dimensional field equations were source-free.
Kaluza and Klein
The result is that components of the higher-dimensional metric tensor appear as source terms for the four-dimensional sub-sector that we observe. The first such approach was Theodor Kaluza's, whose ansatz uses one additional dimension. Into the remaining entries of the metric tensor (those with one index being a 5) he put the electromagnetic potential, together with a coupling constant α (since the metric tensor is dimensionless but the electromagnetic potential isn't):
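In a commonly used notation (a reconstruction in standard form, so the exact conventions may differ slightly from the original), Kaluza's ansatz amounts to writing the five-dimensional metric as

$$ g_{MN} \;=\; \begin{pmatrix} g_{\mu\nu} & \alpha A_\mu \\ \alpha A_\nu & g_{55} \end{pmatrix} , $$

so the mixed components $g_{\mu 5}$ carry the electromagnetic potential $A_\mu$, rescaled by the coupling constant $\alpha$.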
(Here, the capital Latin indices run over all dimensions, the small Greek indices over the usual four dimensions.) Kaluza apparently sent a draft of his paper to Einstein in 1919 to ask for his opinion. It got published with a delay of two years [3].
Kaluza derived the higher-dimensional field equations in the linear approximation. Generically, all the components of the metric tensor will be functions of all coordinates, including the additional one. This, however, is in conflict with what we observe. Kaluza therefore added what he called the 'cylinder condition', which sets derivatives with respect to the additional coordinate to zero. In the linear approximation, he then found that the ansatz reproduces GR plus electrodynamics.
However, the use of this linear approximation is not necessary, as was shown by Oskar Klein five years later [4]. Klein used a different ansatz for the metric which has an additional quadratic term:
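In standard modern notation (again a reconstruction, with the coupling constant included, so the conventions may differ slightly from the original), Klein's ansatz corresponds to the line element

$$ \mathrm{d}s^2 \;=\; g_{\mu\nu}\,\mathrm{d}x^\mu \mathrm{d}x^\nu + \left(\mathrm{d}x^5 + \alpha A_\mu\,\mathrm{d}x^\mu\right)^2 , $$

i.e. $\tilde g_{\mu\nu} = g_{\mu\nu} + \alpha^2 A_\mu A_\nu$, $\tilde g_{\mu 5} = \alpha A_\mu$ and $\tilde g_{55} = 1$; the quadratic piece $\alpha^2 A_\mu A_\nu$ is the additional term mentioned above.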
He assumed the additional coordinate is compactified on a circle. Then one can expand all components in a Fourier series, and the zero mode will fulfill Kaluza's cylinder condition; that is, it is independent of the fifth coordinate. However, if you compare both ansätze [7], Klein's and Kaluza's, you will notice that Klein set the g_55 component to be constant and equal to one. This is an additional constraint that generally will not be fulfilled. In fact, this additional entry behaves like a scalar field and describes something like the radius of the XD. At that time, however, people had little use for additional scalar fields.
Klein's derivation is simply one of the most beautiful calculations I know. One just writes down the higher dimensional field equations, parametrizes the metric tensor according to Klein's ansatz, decomposes the equations - and what comes out is GR in four dimensions (in the Lagrangian formulation as well as the field equations), plus the free Maxwell equations.
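Schematically, and up to sign and normalization conventions that depend on the metric signature, the Lagrangian version of this statement is

$$ R^{(5)} \;=\; R^{(4)} - \frac{\alpha^2}{4}\,F_{\mu\nu}F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu , $$

so the five-dimensional Einstein-Hilbert action splits into the four-dimensional one plus the Maxwell action.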
(Here, the superscripts (4) and (5) refer to the four- and five-dimensional parts of the curvature/metric.) Further, the geodesic equation gets an additional term, which is just the Lorentz force term and thus describes a charged particle moving in a curved space with an electromagnetic field.
In the course of this derivation, one is led to identify the momentum in the direction of the fifth coordinate with the ratio of charge over mass (q/m). It can be shown that this quantity is conserved, as it should be. Klein concluded that the charge is quantized in discrete steps (this is a geometrical quantization) - the first example of the Kaluza-Klein tower.
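Schematically: with the fifth coordinate compactified on a circle of radius $R$, the momentum in that direction can only take the discrete values

$$ p_5 = \frac{n\hbar}{R}, \qquad n = 0, \pm 1, \pm 2, \dots , $$

and since the charge is proportional to $p_5$, it can only come in integer multiples of a fundamental unit set by the compactification radius.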
Extensions and Problems
To understand the excitement this derivation must have caused, one has to keep in mind that this was 30 years before Yang and Mills, and the understanding of gauge theory was not at today's level. With today's knowledge, the argumentation appears somewhat trivial. One adds an additional dimension with a U(1) symmetry, the compactified dimension. The resulting theory has to show this symmetry, which we know belongs to electrodynamics. From this point of view, it is only logical to extend the Kaluza-Klein (KK) approach to other gauge symmetries, i.e. non-abelian groups. This was done in 1968 [6].
One has to note, however, that for non-abelian groups the curvature of the additional dimensions will not vanish; thus flat space is no longer a solution to the field equations. However, it turns out that the number of additional dimensions one needs for the gauge symmetries of the Standard Model, U(1)xSU(2)xSU(3), is 1+2+4=7 [10]. Thus, together with our usual four dimensions, the total number of dimensions is D=11. Combine this with the fact that 11 is the favourite number of those working on supergravity, and you'll understand why KK was regarded as a hot candidate for unification.
But there are several problems with the traditional KK approach. First, by then the age of quantum field theory had begun, and all these considerations were purely classical and unquantized. Even more importantly, there are no fermions in this description - note that we have only talked about the free Maxwell equations. The reason is easy to see: fermions are spin-1/2 fields, and unlike vector bosons one cannot just write them into the metric tensor. One can of course add additional source terms, but this makes the idea somewhat less appealing [8]. The high hope had been to explain all matter and fields from a purely geometric approach.
If one thinks more about the fermions, one notices another problem: right- and left-handed fermions belong to different electroweak representations, a feature that is hard to include in a geometrical interpretation. Furthermore, there is the problem of stabilization of the compact extra dimensions (their sizes should not, or only negligibly, depend on the time-like coordinate), and the problem of singularity formation from GR persists in this approach. However, if I consider what landscape of problems other theories suffer from, it makes me wonder why the KK approach was so suddenly given up in the early 70s. A big part of the reason might simply have been that the quark model got established, and it was the dawn of the particle-physics era.
The 'modern' extra dimensions differ from the KK approach in that they do not attempt to explain the other Standard Model fields as components of the metric. Instead, fermionic and gauge fields are additional fields that are coupled to the metric. They are allowed to propagate into the extra dimensions, but are not themselves geometrical objects. Most features of the KK approach remain, most notably the geometrical quantization of the momenta into the extra dimensions and thus the KK tower of excitations. The problems of stabilization, singularities and quantization remain as well (for higher-dimensional quantum field theories the coupling constants become dimensionful). However, for me this 'modern' approach is considerably less appealing, as one has lost the possibility to describe gauge symmetries and Standard Model charges as arising from the same principle as GR.
But obviously, the largest problem with the KK approaches was - and still is - that it is not clear whether they are just a mathematical possibility or indeed a description of reality. As Oskar Klein put it in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."
"Whether these indications of possibilities are built on reality has of course to be decided by the future." [9]
[1] And if you read the wikipedia entry on Perry Rhodan you find a use for the word 'Zeitgeist'...
[2] G. Nordström, "Zur Theorie der Gravitation vom Standpunkt des Relativitätsprinzips" ("On the Theory of Gravitation from the Standpoint of the Relativity Principle"), Annalen der Physik, vol. 347, Issue 13, pp. 533-554 (1913); G. Nordström, "Über die Möglichkeit, das elektromagnetische Feld und das Gravitationsfeld zu vereinigen" ("On the possibility of unifying the electromagnetic field and the gravitational field"), Physik. Zs. 15, 504-506 (1914) [Abstract]
[3] T. Kaluza, "Zum Unitätsproblem der Physik" ("On the Problem of Unity in Physics"), Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) (1921) 966.
[4] O. Klein, "Quantentheorie und fünfdimensionale Relativitätstheorie" ("Quantum Theory and Five-Dimensional Relativity"), Z. Phys. 37, 895 (1926).
[5] Unlike what the Wikipedia entry states, the Lorentz force law cannot be derived from the Maxwell equations without further assumptions (like a Lagrangian for the coupled sources). E.g. Maxwell's equations are perfectly consistent for a static superposition of two negatively charged objects, just that we know the charged particles would repel and the configuration can't be static.
[6] R. Kerner, "Generalization of the Kaluza-Klein theory for an arbitrary non-abelian gauge group", Ann. Inst. Henri Poincare, 9, 143-152 (1968)
[7] Contrary to widespread belief, the plural of the German word 'Ansatz' is not 'Ansatzes' but 'Ansätze' (pronounced 'unsetze'). 'Ansatz' could be roughly translated as 'a good point to start', or a preparation. E.g. the pre-stage for yeast dough is called 'Ansatz'...
[8] Which finally brings us to the topic on which I lost two years during my Ph.D. time, namely the question of whether one can build up the metric tensor from spin-1/2 fields. I only learned considerably later that most of this approach had been worked out in the mid-1980s, see e.g. hep-th/0307109 and references therein.
[9] He indeed writes it has to be decided 'by' the future not 'in' the future. Quotation from Ref. [4]
Further Literature
Hello From Rome!
On the weekend, I flew to Rome to attend the String Pheno 2007. Meanwhile, my baggage decided to take a vacation in Palermo. It arrived with a four-day delay yesterday evening. I've been wearing the same clothes since the weekend, but this morning I found myself faced with an incredible selection! A second pair of jeans! Two T-shirts! A dress!
However, despite these inconveniences, I have so far had a very pleasant stay, since it turned out that Amara Graps (you might know her from the blogosphere) lives nearby. She was so kind as to lend me some clothes, and yesterday we spent a very nice afternoon in Rome. Since I am currently sitting in the mentioned conference (and should at least pretend to listen), let me instead show you a photo.
Okay, now I have to prepare my talk... later more...
Wednesday, June 06, 2007
arXiv User Survey
Just a quick link: in case you ever wanted to express your opinion about the arXiv, take their poll (from June 4-8)
It will take approximately 20 minutes, and has plenty of comment options to complain about the new arXiv listing (or the eternal bug in the search field if you search for a tag containing the word 'not').
Other points that I find worth mentioning: the arXiv should allow comments on papers, and a ranking (different from times cited). Comments would be helpful to avoid the increasing amount of 'reply-to-reply-to-reply-to's; a ranking I would find a good idea because it has become almost impossible to find a good review or lecture notes if one doesn't know the author (and lecture notes don't usually become top-cites).
Monday, June 04, 2007
The Teddy Factory
A somewhat belated 'Hello' from Germany!
[Figure: Representative sample of the German population]
Friday, June 01, 2007
Bouncing Neutrons in the Gravitational Field
I remember a moment of excitement and puzzlement early on in my first class in quantum mechanics, when our professor announced that now, he would discuss the "freier Fall" in quantum mechanics. I was excited, because it seemed great to me to transfer such an elementary situation as the free fall of a stone into the realm of quantum mechanics, and puzzled, because I knew that the gravitational potential is so extremely weak that it can be safely ignored on scales where quantum mechanics comes into play - at least, in most cases. Alas, that lecture was quite a disappointment, because of an ambiguity of the German wording "freier Fall": it can mean both the free "fall", and the free "case" (as in Wittgenstein's famous dictum "Die Welt ist alles, was der Fall ist.") - and what we learned in our lecture was just about the free case, the quite boring plane wave motion of a quantum particle subject to no potential whatsoever.
What we did not learn in our class was that, even back at that time, there had been several clever experiments with neutrons which demonstrate the influence of the gravitational potential on the phase of the neutron wave function using interferometers. Neutrons, of course, are ideal particles to perform such experiments, since they have no electric charge and are not subject to the influence of the ubiquitous electromagnetic fields.
But only over the last few years have new experiments been realised that directly show the quantisation of the vertical "free fall" motion of neutrons in the gravitational field of the Earth. I had heard about them some time ago in connection with their possible role in the detection of Non-Newtonian forces, or modifications of Newtonian gravity at short distances. Then, earlier this year, I heard a talk by one of the experimenters at Frankfurt University, and I was quite fascinated when I followed up on the papers describing the experiments.
The essential point of these experiments is the following: If you prepare a beam of very slow neutrons - with velocities of about 10 m/s - you can make them hop along a reflecting plane, much like you can let a pebble skip over the surface of a lake. Then you can observe that the vertical part of the motion of the neutrons - with velocities smaller than 5 cm/s - is quantised. In fact, one can detect the quantum states of neutrons in the gravitational field of the Earth! Let me explain in more detail...
Free Fall in Classical Mechanics ...
In order to better understand the experiment, let's go back one step and consider the very simple motion of an elastic ball which is dropped on the ground. If the ground is plane and reflecting, and the ball is ideally elastic such that there is no dissipation of energy, the ball will jump back to the height from which it was dropped, fall down again, jump back, fall, and so on. The height of the ball above the ground as a function of time is shown as the blue curve on the left of this figure: it is simply a sequence of nice parabolas.
We can now ask: what is the probability to find the bouncing ball at a certain height above the floor? For example, we could make a movie of the bouncing ball, take a still at some random time, and check the distribution of the height of the ball if we repeat this for many random stills. The result of this random sampling of the bouncing motion of the ball is the probability distribution shown in red on the right-hand side of the figure. The probability to find the ball at a certain height in this idealised, "stationary" situation, where the elastic ball bounces forever, is highest at the upper turning point of the motion, and lowest at the bottom, where the ball is reflected.
... and in Quantum Mechanics
So much for classical mechanics, as we know it from every-day life. In quantum mechanics, unfortunately, there is not anymore such a thing as the path of a particle, with position and velocity as well-defined quantities at any instant in time. However, it still makes sense to speak of stationary states, and of the probability distribution to find a particle at a certain position. In quantum mechanics, it is the wave function which provides us with this probability distribution by calculating its square. And the law of nature determining the wave function is encoded in the famous Schrödinger equation. The Schrödinger equation for a stationary state is an "eigenvalue equation", whose solution yields, at the same time, the wave function and the value of the energy of the corresponding state. For the motion of a particle in a linear potential - such as the potential energy mgx of a particle with mass m at height x above ground in the gravitational field with acceleration g at the surface of the Earth - it reads
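$$ -\frac{\hbar^2}{2m}\,\frac{\mathrm{d}^2\psi(x)}{\mathrm{d}x^2} + m g x\,\psi(x) \;=\; E\,\psi(x) . $$

(Written out here in the standard textbook form; for the bouncing setup discussed below one additionally imposes $\psi(0) = 0$ at the reflecting floor.)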
In some cases, there are so-called "exact solutions" to the Schrödinger equation - wave functions that are given by certain functions one can look up in thick compendia, or at MathWorld. These functions usually are some beautiful beasts out of the zoo of so-called "special functions". Such is the case for the motion of a particle in a linear potential, where the solution of the Schrödinger equation is given by the Airy function Ai(x). Interestingly, this function first showed up in physics when the British astronomer George Airy applied the wave theory of light to the phenomenon of the rainbow...
Quantum States of Particles in the Gravitational Field
As a result of solving the Schrödinger equation, there is a stationary state with a minimal energy - the ground state - and a series of excited states with higher energies. Here is what the wave function of the second excited state of a particle in the gravitational field looks like as a function of the height above ground:
The wave function, shown on the left in magenta, oscillates through two nodes, and goes to zero exponentially above the classical limiting height, which corresponds to the upper turning point of the parabola of a classical particle with the same energy. For neutrons in this state, this height is 32.4 µm above the plane. The green curve on the right shows the probability density corresponding to the wave function. It is quite different from the classical probability density, shown in red. As a characteristic property of a quantum system, there is, besides the two nodes, a certain probability of finding the particle above the classical turning point. This is an example of the tunnel effect: there is a chance to find a quantum particle in regions where, by the laws of classical physics, it would not be allowed to be because of insufficient energy.
However, going from the ground state to ever higher excited states eventually reproduces the probability distribution of classical physics. This is what is called the correspondence principle, and you can see what it means if you have a look at the wave function of the 60th excited state: here, the probability distribution derived from the quantum wave function already follows the classical distribution very closely.
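If you want to check these numbers yourself, a few lines of Python will do. This is my own little script, using textbook values for the neutron mass and g; the allowed energies are fixed by the zeros of the Airy function, and the classical turning point of each state follows from E = mgx:

import numpy as np
from scipy.special import ai_zeros

hbar = 1.0546e-34     # J s
m_n = 1.6749e-27      # neutron mass, kg
g = 9.81              # m/s^2

# E_n = -a_n * (m g^2 hbar^2 / 2)^(1/3), where a_n are the (negative) zeros of Ai
a_n = ai_zeros(3)[0]
E_n = -a_n * (m_n * g**2 * hbar**2 / 2) ** (1 / 3)
x_n = E_n / (m_n * g)   # classical turning points

for n, (E, x) in enumerate(zip(E_n, x_n), start=1):
    print(f"state {n}: E ~ {E / 1.602e-19 * 1e12:.2f} peV, "
          f"turning point ~ {x * 1e6:.1f} micrometer")
# prints roughly 1.41 peV / 13.7 um, 2.46 peV / 24.0 um, 3.32 peV / 32.4 um

The third line of output reproduces the 32.4 µm quoted above for the second excited state.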
So far, we have been talking about theory: the Schrödinger equation and its solutions in the guise of the Airy function. There is no reason at all to doubt the validity of the Schrödinger equation: it has been thoroughly tested in innumerable situations, from the hydrogen atom to solid state physics. However, in all these situations, the interaction of the particles involved is electromagnetic, not gravitational. For this reason, it is extremely interesting to think about ways to check the solution of the Schrödinger equation for particles in the gravitational field. As we have seen before, the best way to do this is to work with neutrons, in order to avoid spurious electromagnetic effects.
Bouncing Neutrons in the Gravitational Field
Unfortunately, it is so far not possible to scan directly the probability distribution of neutrons in the gravitational field. However, in a clever experimental setup, one can look at the transmission of neutrons through a channel between a horizontal reflecting surface, where they can bounce like pebbles over a lake, and an absorber above. This is a rough sketch of the setup:
The decisive idea of the experiment is to vary the height of the absorber above the reflecting plane, and to monitor the transmission of neutrons as a function of this height. If the height of the absorber is too low, the ground state for the vertical motion of the neutrons does not fit into the channel, and no neutrons will pass through. Transmission sets in once the height of the channel is sufficiently large to accommodate the ground state wave function of the vertical motion of the neutron. Moreover, whenever, with increasing height of the channel, one more of the excited wave functions fits in, the transmission should increase. The first of these steps, and the corresponding wave functions and probability densities, are shown in this figure:
The interesting point now is, can this stepwise increase of transmission be observed in actual experimental data? Here are measured data, and indeed - the first step is clearly visible, and the second and third step can be identified:
Adapted by permission from Macmillan Publishers Ltd: Nature (doi:10.1038/415297a), copyright 2002.
This has been the first verification of quantised states of particles in the gravitational field!
What can be learned
You may wonder if the experiment may not have shown just some "particle in a box" quantisation, since the channel for the neutrons formed by the reflecting plane and the absorber may make up such a box. This objection has indeed been raised in a comment paper, and has been answered by detailed calculations and improved experiments: the conclusion about quantisation in the gravitational field remains fully valid!
However, the limits on modifications of Newtonian gravity from this experiment remain rather weak. Such a modification would change the potential the neutrons are moving in. For example, a short-range force caused by the matter of the reflecting plane could contribute to the potential of the neutrons. However, as it turns out, such an additional potential would be very weak and would have nearly no influence at all on the overall wave function of the neutron.
Moreover, it is clear that in this experiment, the gravitational field is always a classical background field, which itself is not quantised at all. There may be the possibility that a neutron undergoes a transition from, say, the second to the first quantised state, thereby emitting a graviton - similar to the electron in an atom, which emits a photon when it makes a transition. Unfortunately, this probability is so low that it is not reasonable to expect that it will ever be measured....
But all these restrictions do not change at all the main point: this is a very exciting, elementary experiment, which could find its way into the textbooks of quantum mechanics!
Here are some papers about the "bouncing neutron" experiment:
Quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.K. Petoukhov, H. Abele, S. Baessler, F. Ruess, Th. Stoeferle, A. Westphal, A.M. Gagarski, G.A. Petrov, and A.V. Strelkov; Nature 415 (2002) 297-299 (doi: 10.1038/415297a) - The first description of the result.
Measurement of quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.M. Gagarsky, A.K. Petoukhov, G.A. Petrov, H.Abele, S. Baessler, G. Divkovic, F.J. Ruess, Th. Stoeferle, A. Westphal, A.V. Strelkov, K.V. Protasov, A.Yu. Voronin; Phys.Rev. D 67 (2003) 102002 (doi: 10.1103/PhysRevD.67.102002 | arXiv: hep-ph/0306198v1) - A more detailed description of the experimental setup and the first results.
Study of the neutron quantum states in the gravity field by V.V. Nesvizhevsky, A.K. Petukhov, H.G. Boerner, T.A. Baranova, A.M. Gagarski, G.A. Petrov, K.V. Protasov, A.Yu. Voronin, S. Baessler, H. Abele, A. Westphal, L. Lucovac; Eur.Phys.J. C 40 (2005) 479-491 (doi: 10.1140/epjc/s2005-02135-y | arXiv: hep-ph/0502081v2) - Another more detailed discussion of the experimental setup, possible sources of error, and the first results.
Quantum motion of a neutron in a wave-guide in the gravitational field by A.Yu. Voronin, H. Abele, S. Baessler, V.V. Nesvizhevsky, A.K. Petukhov, K.V. Protasov, A. Westphal; Phys.Rev. D 73 (2006) 044029 (doi: 10.1103/PhysRevD.73.044029 | arXiv: quant-ph/0512129v2) - A long and detailed discussion of points such as the "particle in the box" ambiguity and the role of the absorber.
Constraints on non-Newtonian gravity from the experiment on neutron quantum states in the Earth's gravitational field by V.V. Nesvizhevsky, K.V. Protasov; Class.Quant.Grav. 21 (2004) 4557-4566 (doi: 10.1088/0264-9381/21/19/005 | arXiv: hep-ph/0401179v1) - As the title says: a discussion of the constraints on non-Newtonian forces.
Spontaneous emission of graviton by a quantum bouncer by G. Pignol, K.V. Protasov, V.V. Nesvizhevsky; Class.Quant.Grav. 24 (2007) 2439-2441 (doi: 10.1088/0264-9381/24/9/N02 | arXiv: quant-ph/0702256v1) - As the title suggests: the estimate for the emission of a graviton from the neutron in the gravitational field.
4f5690e785ee0e56 |
I have seen similar posts, but I haven't seen what seems to be a clear and direct answer.
Why do only a certain number of electrons occupy each shell? Why are the shells arranged at certain distances from the nucleus? Why don't electrons just collapse into the nucleus or fly away?
It seems there are lots of equations and theories that describe HOW electrons behave (Pauli exclusion principle), predictions about WHERE they may be located (Schrödinger equation, uncertainty principle), etc. But it is hard to find the WHY and/or causality behind these descriptive properties. What is it about the nucleus and the electrons that causes them to attract/repel in the form of these shells at regular intervals and numbers of electrons per shell?
Thank you! Please be patient with me, new to this forum and just an amateur fan of physics.
Well the answer is "quantum mechanics" and conservation laws like conservation of angular momentum combined with quantized angular momentum. Your question is seriously too broad though so I can't imagine it being realistically answered without writing a whole book chapter worth of information. – Brandon Enright Aug 2 '14 at 6:10
Physics does not answer WHY questions, the models physics has answer how from the postulates and equations the observations can be explained. This has been done successfully to start with using the Schrodinger equation and identifying its solutions with the shells very well. Why it is successful? eventually ask the gods. – anna v Aug 2 '14 at 7:50
Thank you all, very useful answers! In the future, I think I'll rephrase my questions to "what causes..." or "how does it work..." vs "why." I think Physicists seem a bit scared or put off by "why" questions as it somehow leads to the philosophical =) – PurposeNation Aug 4 '14 at 20:52
5 Answers
Any answer based on analogies rather than mathematics is going to be misleading, so please bear this in mind when you read this.
Most of us will have discovered that if you tie one end of a rope to a wall and wave the other you can get standing waves on it like this:
Standing waves
Depending on how fast you wave the end of the rope you can get half a wave (A), one wave (B), one and a half waves (C), and so on. But you can't have 3/5 of a wave or 4.4328425 waves. You can only have a half integral number of waves. The number of waves is quantised.
This is basically why electron energies in an atom are quantised. You've probably heard that electrons behave as waves as well as particles. Well if you're trying to cram an electron into a confined space you'll only be able to do so if the electron wavelength fits neatly into the space. This is a lot more complicated than just waving a rope because an atom is a 3D object so you have 3D waves. However take for example the first three $s$ wavefunctions, which are spherically symmetric, and look how they vary with distance - you get (these are for a hydrogen atom) $^1$:
s wavefunctions
Unlike the rope the waves aren't all the same size and length because the potential around a hydrogen atom varies with distance, however you can see a general similarity with the first three modes of the rope.
And that's basically it. Energy increases with decreasing wavelength, so the "half wave" $1s$ level has a lower energy than the "one wave" $2s$ level, and the $2s$ has a lower energy than the "one and a half wave" $3s$ level.
$^1$ the graphs are actually the electron probability distribution $P(r) = \psi\psi^*4\pi r^2$. I did try plotting the wavefunction, but it was less visually effective.
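If you want to reproduce such curves yourself, here is a minimal Python sketch (my own illustration, not the code used for the plots above) evaluating the textbook radial probability densities $P(r) = r^2 R_{n0}(r)^2$ for the hydrogen 1s, 2s and 3s states, with lengths in units of the Bohr radius:

import numpy as np

def R_1s(r): return 2.0 * np.exp(-r)
def R_2s(r): return (2.0 - r) * np.exp(-r / 2.0) / (2.0 * np.sqrt(2.0))
def R_3s(r): return 2.0 / (81.0 * np.sqrt(3.0)) * (27.0 - 18.0 * r + 2.0 * r**2) * np.exp(-r / 3.0)

r = np.linspace(0.0, 30.0, 3000)
for name, R in [("1s", R_1s), ("2s", R_2s), ("3s", R_3s)]:
    P = r**2 * R(r)**2
    # each density integrates to 1; the number of nodes grows with n
    print(f"{name}: integral of P(r) = {np.trapz(P, r):.3f}, "
          f"most probable r = {r[np.argmax(P)]:.1f} Bohr radii")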
This is a perfect answer at the level of the OP's question. – Ben Crowell Aug 2 '14 at 15:45
"Most of us will have discovered that if you tie one end of a rope to a wall and wave the other you can get standing waves on it" - huh. Is that a common thing for people to just do when they're bored? – user2357112 Aug 2 '14 at 16:30
@user2357112 didn't you have a jump rope as a kid? And then one time you only have 1 friend available, so you tie one end to a fence, you wave your end around a couple of times.. and funny stuff is observed. – harold Aug 2 '14 at 16:39
@Harold Jump rope? My phone doesn't have that app. – zibadawa timmy Aug 2 '14 at 18:34
@harold Kids playing with Lego become engineers. Kids with a piece of rope and only 1 friend become physicists :-D But it's a great answer! – jmiserez Aug 2 '14 at 22:44
First of all, strictly speaking, electron shells (as well as atomic orbitals) do not exist in atoms with more than one electron. Such a physical model of an atom is simplified (and often oversimplified); it arises from a mathematical approximation, which physically corresponds to the situation where electrons do not instantaneously interact with each other, but rather each and every electron interacts with the average, or mean, electric field created by all the other electrons.
This approximation is known as the mean-field approximation, and the state (or, speaking classically, the motion) of each and every electron in this approximation is independent of the state (motion) of all other electrons in the system. Thus, the physical model which arises from this approximation is simplified, and, not surprisingly, it is often referred to as the independent-electron model.
So the question of why nature works in this way does not make a lot of sense, since nature does not actually work this way - except for systems with only one electron, like, for instance, the hydrogen atom. In any case, the answer to the question of why something works in this or that way in physics is pretty simple: according to the laws of a particular physical theory, say, quantum mechanics. And I could not explain quantum mechanics to you here in just a few sentences. You need to read some books.
But if your question is why nature works in this way according to quantum mechanics, i.e. why things in quantum mechanics are the way they are, then I would like to quote Paul Dirac:
[...] the main object of physical science is not the provision of pictures, but is the formulation of laws governing phenomena and the application of these laws to the discovery of new phenomena. If a picture exists, so much the better; but whether a picture exists or not is a matter of only secondary importance. In the case of atomic phenomena no picture can be expected to exist in the usual sense of the word 'picture', by which is meant a model functioning essentially on classical lines. One may, however, extend the meaning of the word 'picture' to include any way of looking at the fundamental laws which makes their self-consistency obvious. With this extension, one may gradually acquire a picture of atomic phenomena by becoming familiar with the laws of the quantum theory.
From "The Principles of Quantum Mechanics", §4.
A big part of it can be explained by combining the constraints of quantum mechanics with the geometry of angular momentum.
For the special case of the hydrogen atom, it turns out that when you solve the equations of motion for an electron near a proton, you can't give the electron any old energy. There's a set of energies that are allowed; all others are excluded. You can put these energies in order, starting from the most tightly bound, and give each one a number. This is often called the "principal quantum number," $n$, and it can be any positive integer. The binding energy of an electron in the $n$-th state is $13.6\,\mathrm{eV}/n^2$.
You can also ask (again, using the mathematical tools of quantum mechanics) whether the electron can carry angular momentum. It turns out that it can, but again that the amount of angular momentum it can carry comes in lumps, and again we can put the angular momentum states in order, starting with the least. Unlike with the principal quantum number, it makes sense to talk about an atom whose angular momentum is zero, so the "angular momentum quantum number" $\ell$ starts counting from zero. For a very sneaky reason, $\ell$ must be smaller than $n$. So an electron in its ground state, $n=1$, must have $\ell=0$; an electron in the first excited state $n=2$ may have $\ell=0$ or $\ell=1$; and so on.
Now once you have started to ask about angular momentum you start to think about planets orbiting a star, and that suggests a question: what is the orientation of the orbit? Must all the electrons orbit in the same plane, like all the planets in the solar system are found roughly along the plane of the ecliptic? Or can electrons orbiting a nucleus occupy any random plane, the way that comets do? This is a question you can also address with quantum mechanics. It turns out (again) that only certain orientations are allowed, and the number of orientations that are allowed depends on $\ell$, and that you can put the orientations in order. For a state with $\ell=0$ there is only one orientation permitted. For a state with $\ell=1$ there are three orientations permitted; sometimes it makes sense to number them with the "angular momentum projection quantum number" $m \in \{-1,0,1\}$, and other times it makes sense to identify them with the three axes $x,y,z$ of a coordinate system. For $\ell=2$, likewise, it sometimes makes sense to identify orientations $m \in \{-2,-1,0,1,2\}$, and other times to identify the orientations with electrons along the axes and planes of the coordinate system. I think the chemists may even have a geometrical interpretation for the seven substates of $\ell=3$, but I'm not familiar with it.
When you start to add multiple electrons to one nucleus, several things change — most notably the interaction energy, since the electrons interact with each other as well as with the nucleus. The basic picture, that each electron must carry integer angular momentum $\ell$ which may lie on any of $2\ell+1$ directions, remains unchanged. But there is one final quirk: each state with a given $n,\ell,m$ may hold no more than two electrons! We can fit this into our picture by assigning each electron a fourth quantum number $s$, called the "spin quantum number" for reasons that you should totally look up later, which can only take two values. Now we have a very simple rule: a "state" described by the four numbers $n,\ell,m,s$ can hold zero or one electrons at a time.
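To make the counting explicit (a small illustration of my own, not part of the original answer): for each $n$, enumerating the allowed $(\ell, m, s)$ combinations gives $2n^2$ states per shell.

for n in range(1, 5):
    states = [(l, m, s) for l in range(n)           # l = 0 .. n-1
                        for m in range(-l, l + 1)   # 2l+1 orientations
                        for s in (-0.5, 0.5)]       # two spin values
    print(f"n = {n}: {len(states)} states")         # 2, 8, 18, 32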
After that preamble, have a look at a periodic table:
• Over on the left are two columns of highly reactive elements. These have the outermost electron with $\ell=0$ (one value of $m$ allowed, two values of $s$).
• Over on the right are six columns of (mostly) nonmetals. These have the outermost electron with $\ell=1$ (three values of $m$ allowed, times two values of $s$)
• In the middle are ten columns of metals. These have outermost electrons with $\ell=2$ (five values of $m$ allowed, times two values of $s$).
• Appended on the bottom of the chart, because there's too much blank space on the page if they're inserted between columns two and three, are fourteen columns of lanthanides and actinides. These have outermost electrons with $\ell=3$ (seven values of $m$, times two values of $s$).
This simple model doesn't explain everything about the periodic table and electron shells. My description puts helium in the wrong spot (it's not a reactive metal because the most tightly bound electron shell is special), and the heavier metals leak over into the $\ell=1$ block. You have to do some serious modeling to understand why the $\ell=2$ electrons aren't allowed until the fourth row, rather than the third row. Protons and neutrons in the nucleus have the same sort of shell structure, but nuclear magic numbers don't always occur after the filling of an $\ell=1$ shell the way the noble gases do. But that is about the shape of things.
John Rennie gave a nice answer based on the de Broglie hypothesis; however, he didn't try the hard part: "Why do only a certain number of electrons occupy each shell?" So let me try!
In quantum mechanics particles are described by wave functions. All the observable properties of a particle (like its position) are related to the square of the wave function, so its sign does not really matter.
You can write a global wave function for a system of more particles. Let's consider two identical particles: the properties of the system should stay the same if they are swapped, this means that the global wave function must in principle:
• stay the same
• only change the sign
If the wave function stays the same, there is no problem: the two identical particles can happily stay together, and such particles are called bosons. If the wave function changes sign, then we have a problem: we cannot tell which system has the particles swapped, because they are identical, so we actually have two differently signed wave functions (which sum to zero) for the same system. The solution is to not allow such a system: identical particles whose exchange leads to a change of sign of the wave function are not allowed to stay together; they are called fermions.
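To put the same argument in symbols: writing the two-particle wave function as $\psi(1,2)$, exchange gives $\psi(2,1) = \pm\,\psi(1,2)$. For fermions the sign is minus, so putting both particles into the same single-particle state forces

$$ \psi = -\psi \;\;\Longrightarrow\;\; \psi = 0 , $$

i.e. such a configuration simply does not occur - which is the exclusion principle discussed next.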
Nature chose electrons to be fermions, and so you cannot find two identical electrons in the same atom: each electron must have at least one property that distinguishes it from all the others; this is called the Pauli exclusion principle. Each energy level (determined by the closure of the wave function, as John Rennie explained) can hold a limited number of electrons, which depends on the complexity of the level. The simplest level is just a sphere and does not offer any way to distinguish two electrons, so it can hold just... two! This is a tiny complication which comes from spin: an intrinsic property that for electrons can be up or down, allowing two of them with opposite spin to stay together on the same level.
I can sniff a lot of familiarity with the ''HOW'' answers from what I interpret about you from your post, so I'll only focus on the objective point - ''WHY''.
It turns out that it is possible to meaningfully describe nature by postulating that any object tends to be in the minimum-energy state possible under a given set of physical conditions. So first we need an understanding of what these minimum-energy configurations are - for which we treat the HOWs (Schrödinger equation etc.). But once we know what they are, the question is how the electrons arrange themselves within these structures, which gets a common-sense answer from the Aufbau principle - once again a reiteration of the same idea.
But what is special about this idea becomes clear if you start considering alternatives. Suppose this weren't the case, and we chose the stark opposite alternative - every object tended to occupy the most energetic state available (like some bouncing-ball collisions in some video games). We would then have a really tough time describing nature. For example, we wouldn't be able to explain why any system reaches an equilibrium at all, since it would be more favorable for an object, once set in motion, to keep moving towards an unbounded maximum energy. Now, infinity isn't a number by definition; it reflects an unbounded maximum, so an ''inverted-scale'' description, with infinity in place of 0, is horribly inappropriate. For example, zero is unique on the number line, but the functions $x$ and $x^2$ both increase indefinitely as we increase $x$. So ''unbounded from above'' wouldn't be a unique choice, and our description of nature wouldn't be coherent. Anyway, it does turn out from observational experience that things around us behave as if the underlying principle concerned a minimum, rather than a maximum. So our postulate seems validated by nature.
Now, to specifically address the question of electronic arrangement, take the Bohr atom as a simple example and introduce the quantization conditions (which, from your question, you are probably aware of). Imagine it this way: since the electron is attracted to the positively charged nucleus, it will tend to fall into it. However, it doesn't, because of its orbital angular momentum (let us neglect spin for the moment), which causes it to rotate around the nucleus at some orbital radius, owing to a balance between the required centripetal force and this attraction. However, while the electrostatic attraction is a continuous function of $r$, falling off as $1/r^2$, with the quantization condition the angular momentum can't vary continuously; it grows in discrete units. Thus, the balance condition now implies that not all $r$ are allowed. You reach a balance only at certain fixed values of $r$, which define for you the shell locations. (Of course, the same condition also gives you the permissible energies.)
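In numbers, taking the standard hydrogen Bohr-model results $r_n = n^2 a_0$ and $E_n = -13.6\ \text{eV}/n^2$ as given:

```python
# Standard hydrogen Bohr-model results, taken as given: r_n = n^2 * a0, E_n = -13.6 eV / n^2.
a0_pm = 52.9   # Bohr radius in picometres

for n in range(1, 5):
    print(f"n = {n}: allowed radius ~ {n**2 * a0_pm:.0f} pm, energy ~ {-13.6 / n**2:.2f} eV")
```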
Now, here's the point (and here's how it relates to my -2 voted first paragraph): once you know the permitted energies, you need some filling-up principle, and it is most convenient to fill them up using our guiding principle of lowest energy first, rather than the other way round. If we filled them the other way round, we could never explain why there should be a hydrogen atom at all, since the very first electron would be sitting infinitely far away from the nucleus and would behave like an ionized, free electron.
Thank you. Voted this up, purely because you focused on the "why", thank you. Always appreciate non-traditional ways to look at a question. – PurposeNation Aug 4 '14 at 20:59
As soon as you add a second electron the energies of the allowed states change. Helium-like atoms and ions (those with 2 present electrons) are not Hydrogen-like atoms with an extra electron present: the energies, RMS radii etc of the shells all change. Adding still more electrons changes it some more. – dmckee Sep 7 '14 at 23:42
@dmckee - Irrespective of how (much) they change, you still fill them up according to lowest energy first. If I remember correctly what I learnt during UG, that's precisely the reason why in some particular case $4s$ got filled before $3d$. If you disagree, show me a counterexample (i.e. an instance where the higher energy state got occupied before a lower energy AVAILABLE one.) – The Dark Side Sep 8 '14 at 4:04
@dmckee - I stress on the ''available'' part - don't give me an example where an otherwise lower energy state got filled later because some conservation law or selection rule obstructed it. Obviously those are different cases. – The Dark Side Sep 8 '14 at 4:07
|
24e533cca420fa76 | Theoretical Concepts and Reaction Mechanisms
Yuri V. Il'ichev
Cordis Corporation, a Johnson and Johnson Company
P.O. Box 776, Welsh and McKean Roads, Spring House, PA 19477-0776
1. Chemistry of Electronically Excited States
Aren't you excited already? Not yet? Let us then adopt a step-by-step approach in order to introduce you to the fascinating world of excited-state reactions. The term photochemistry generally applies to chemical modifications induced by interaction of light (electromagnetic radiation) with matter. Therefore, light is always one of the reactants in a photochemical system. Electromagnetic radiation with wavelengths ranging from ~800 nm (near-IR) to ~150 nm (far UV) is of primary importance for photochemistry and photobiology, but the wavelength regions adjacent to this range are also of interest for certain applications. With the advent of lasers, multiphoton photochemistry, i.e. chemistry initiated by simultaneous absorption of two or more photons, came into wide use. This made IR radiation of particular interest for photochemists. The wavelength range of 150-800 nm corresponds to photon energies ranging from 800 to 150 kJ mol^-1 (Figure 1).
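As a quick sanity check of this energy range, one can convert wavelength to molar photon energy with E = N_A·h·c/λ; a short Python sketch:

```python
# E = N_A * h * c / lambda, evaluated at the edges of the 150-800 nm window.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23    # J s, m s^-1, mol^-1

for wavelength_nm in (150, 500, 800):
    E_kJ_per_mol = N_A * h * c / (wavelength_nm * 1e-9) / 1000.0
    print(f"{wavelength_nm} nm  ->  {E_kJ_per_mol:.0f} kJ/mol")
# 150 nm gives ~800 kJ/mol and 800 nm gives ~150 kJ/mol, matching the range quoted above.
```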
Figure 1
Figure 1. The electromagnetic spectrum.
These energies are much higher than those associated with thermal motion at ambient temperature and are comparable to the energies of chemical bonds. That is why photochemistry is often referred to as high-energy chemistry. The fact that the spectral region mentioned above contains electromagnetic radiation detectable by the human eye (visible light) suggests an interrelation of photochemistry and vision mechanisms. Humans can see radiation in this part of the spectrum because visual receptors are organic compounds that absorb light at these wavelengths. Notice that the spectral maximum of the solar radiation reaching the earth's surface is located within the visible light range (~500 nm).
The basis for understanding light-matter interaction and chemical reactivity is quantum mechanics. According to this theory, a complete description of any molecular system can be provided by a function that is obtained by solving the Schrödinger equation (a rather complex differential equation first introduced by Erwin Schrödinger). This function of multiple variables is called the wavefunction. Generally, an infinite number of solutions (wavefunctions) with the corresponding values of the system energy are obtained from the Schrödinger equation. However, only some solutions with their characteristic values of the energy are physically acceptable. Thus, only certain values of the energy are allowed, although the number of these acceptable values can still be infinite. In other words, the energy of the molecular system is quantized.
Energy quantization is of primary importance for photochemistry, because it implies that only photons that bring a definite quantity of energy, corresponding to the difference between two allowed energy values, can be absorbed. The quantum mechanical results are conveniently illustrated by denoting discrete energy values according to a certain principle and plotting them on a graph with a vertical energy scale (compare to the Jablonski diagram in the module on Basic Photophysics). When a particular energy value, E, is designated by a certain symbol, e.g., S0, a system having the energy E is referred to as being in the state S0. Notice that this description may be incomplete because some states may have the same energy but be described by two different wavefunctions and, therefore, be two different quantum states (this phenomenon is called degeneracy).
The application of quantum mechanics to molecular systems requires approximate methods because the Schrödinger equation cannot be solved exactly for many-body systems. Fortunately, nuclei are much heavier than the electrons, and consequently, their motion is much slower than that of the electrons. To a good approximation, the nuclei can be considered as fixed centers of potential and a description of the electronic motion can be obtained by solving the Schrödinger equation for a large number of different, but fixed nuclear positions. This way of separating electronic and nuclear motion is known as the Born-Oppenheimer approximation, or adiabatic approximation.
Generally, an infinite number of solutions (wavefunctions) with the corresponding electronic energies will be obtained for the electronic Schrödinger equation at each fixed nuclear configuration. The lowest electronic energy plotted against all internal variables (in general, 3N-6 for the system of N nuclei or 3N-5 for a linear N-atomic molecule) forms a multidimensional hypersurface corresponding to the ground state. This surface together with those corresponding to higher energies is referred to as the adiabatic potential energy surface. The electronic states are often designated according to the total spin (S or T for the singlet and triplet state with the spin 0 and 1, respectively, see also module on Basic Photophysics) and the relative energy (index "0" for the lowest energy, etc.). The majority of organic molecules are singlets in the lowest energy state, which is therefore referred to as the singlet ground state, or S0 state. Diatomic molecules have a single geometric parameter, internuclear distance, and therefore their potential energy surfaces reduce to curves. In this case a 2D-plot is sufficient for the presentation. Typical energy curves for the ground and first excited state are depicted in Figure 2.
Quantum mechanical treatment of nuclear motion within the Born-Oppenheimer approximation requires a solution of the nuclear Schrödinger equation with the electronic energy as the potential. Separation of different types of nuclear motion may often be achieved as a first approximation to complex molecular dynamics. This separation leads to several equations that are simpler than the original Schrödinger equation for the nuclei. There are three basic types of motion: translation, rotation, and vibration. Translation is the motion of the system as a whole, rotation is a motion in which the spatial orientation of the body changes, and vibration describes the relative motion of the nuclei. Molecules moving freely in a macroscopic vessel may be treated as though their translational energy is not quantized. Another way of putting this is that the translational energy levels are so closely spaced that this type of motion may be well described with classical mechanics. Rotational and vibrational motion requires quantum mechanical treatment, which typically produces discrete energy levels such as those shown in Figure 2.
Figure 2
Figure 2. Potential energy curves corresponding to the ground state (black, S0) and first excited state (blue, S1) of a diatomic molecule. The states were assumed to be of singlet multiplicity. The energy levels for the vibrational motion are shown as black and blue lines inside the curves. Red lines in the insert show rotational levels for the zero vibrational level of S0. Notice that the characteristic energy for the rotational motion is much smaller than that for the vibrational motion, and the latter is much smaller than the energy associated with electronic motion.
The potential energy surfaces obtained by solving the electronic Schrödinger equation provide the basis on which chemical reactivity can be analyzed. It is a standard practice in photochemistry to define a common ground-state surface for all molecular species of the same stoichiometry. Minima on this surface can be identified with the equilibrium structures (all isomers and/or intermolecular complexes with the same formula). Statistical mechanics provides an answer to the question how properties of a macroscopic system are related to those of molecules constituting the system. The answer is given in terms of probabilities to find a molecule in a particular microscopic state or, in other words, in terms of the population of molecular energy states (see Boltzmann distribution in Basic Photophysics).
For the vast majority of molecular systems at any reasonable temperature only the ground electronic state is populated. Therefore, thermal chemistry is almost exclusively governed by the properties of the ground state. Notice that some excited vibrational states typically have non-zero population and most molecules are in excited rotational levels at ambient temperature.
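A rough sketch of the corresponding Boltzmann factors, exp(-ΔE/kT), at 298 K; the energy gaps used below are illustrative order-of-magnitude values, not numbers from this module:

```python
import math

# Boltzmann factors exp(-dE/kT) at 298 K for order-of-magnitude energy gaps.
k_B = 1.381e-23      # J K^-1
T = 298.0            # K
N_A = 6.022e23       # mol^-1
hc = 1.986e-23       # J cm: converts a wavenumber in cm^-1 to an energy in J

gaps_J = {
    "electronic (~300 kJ/mol)": 300e3 / N_A,
    "vibrational (~1500 cm^-1)": 1500 * hc,
    "rotational (~10 cm^-1)": 10 * hc,
}
for name, dE in gaps_J.items():
    print(f"{name}: exp(-dE/kT) = {math.exp(-dE / (k_B * T)):.3g}")
# Electronic excited states are essentially unpopulated, while rotational levels are heavily populated.
```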
In contrast, photochemistry can only be understood if one considers properties of excited electronic states, which are typically populated by light absorption. These facts together with a fundamental understanding of the quantum nature of light provide the basis for interpreting photochemistry, not so much as high-energy chemistry, which utilizes light merely as an energy source, but more as reactivity of electronically excited species. These species can and often do exhibit chemical properties that are largely different from those of the ground-state species. Properties of S0 surface and the two lowest excited-state surfaces of different multiplicity, S1 and T1, are of primary importance for photochemistry.
2. Photochemistry Laws
The first law of photochemistry states that only the light absorbed by a molecule can produce photochemical modification in the molecule. Here and below, the term "molecule" is broadly defined and includes also atoms, radicals, etc. The law emphasizes the importance of light absorption by the molecule involved in the primary photoprocess, which is a chemical reaction or a physical process involving directly excited species. All aspects and consequences of this law must be considered for quantitative analysis of a photoreaction. This is generally taken for granted, but the frequent practice of comparing photochemical kinetic traces for different molecules without referring to their absorbance suggests that it is ignored more often than one may assume.
The second law of photochemistry was formulated at the beginning of 20th century when the quantum theory was just emerging. It states that one molecule is excited for each quantum of radiation absorbed. In other words, the absorption of light by a molecule is a one-photon process (see Figure 3). Therefore for a primary photoprocess only one molecule reacts for each photon absorbed. Typically several competing processes occur in the excited state. In this case, the second law can be reformulated as: the sum of the quantum yields (defined in Section 3, below) for the primary processes must be unity.
It took about 20 years and the development of quantum mechanics to predict two-photon absorption (Figure 3). The first experimental observation of two-photon absorption was made when lasers were developed. Further development in laser technology made the generation of ultrashort light pulses (10^-12 - 10^-15 s) almost routine. Such ultrafast lasers made possible not only the experimental study, but also the broad application of multiphoton processes. Multiphoton fluorescence is widely used in imaging of cells and biological tissues. Multiphoton photochemistry has recently received attention as a tool for time-resolved studies of important biological processes.
Figure 3a Figure 3b
Figure 3. (a) Schematic illustration of light absorption by a rectangular sample. To a first approximation, molecules can be considered as opaque disks whose average cross-sectional area, σ, in cm^2 molecule^-1 represents the effective area that is impermeable to photons of a certain wavelength. We may consider an infinitesimal slab, dx, of a rectangular sample with a cross-section, S, which is equal to that of the light beam. The average intensity of light entering the slab is denoted I and is expressed in photon s^-1. The intensity absorbed in the slab can be written as $-dI = \sigma N I\,dx$, where N is the concentration in molecule cm^-3. Integrating this equation from 0 to l (sample length in cm) we obtain the Beer-Lambert law for one-photon absorption: $I = I_0\,e^{-\sigma N l}$. If the concentration C is expressed in mol L^-1, then the natural logarithm is usually substituted with the decimal one and the cross-section is replaced with the decimal molar absorptivity, ε.
Thus, we obtain: $I = I_0\,10^{-\varepsilon C l}$, i.e. the absorbance is $A = \log(I_0/I) = \varepsilon C l$.
(b) Energy diagrams for one- and two-photon absorption. The average rate of n-photon absorption per molecule, in photon s^-1 molecule^-1, can be approximated as $w_n = \sigma_n (I/S)^n$, where $\sigma_n$ is the cross-section of n-photon absorption, I is the average intensity in photon s^-1, and S is the cross-section in cm^2 of the laser beam entering the sample. For two-photon absorption the cross-section, $\sigma_2$, has dimensions of cm^4 s photon^-1 molecule^-1 and is often expressed in GM, where
1 GM = 10^-50 cm^4 s photon^-1 molecule^-1. The unit was selected to honor Maria Göppert-Mayer, who first predicted multiphoton absorption. The measured absorption rate $W_n$ is the number of photons absorbed per second: $W_n = w_n N V_{ex} = \sigma_n (I/S)^n N V_{ex}$, where $V_{ex}$ is the excitation volume. For one-photon absorption we obtain $W_1 = \sigma (I/S) N V_{ex} = \sigma I N l$. This expression corresponds to the Beer-Lambert law limit for low absorption: $I_{abs} = I_0\,(1 - e^{-\sigma N l}) \approx I_0\,\sigma N l$.
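A short numerical example of the decadic Beer-Lambert law written above; the absorptivity, concentration, and path length are invented for illustration:

```python
# Illustrative numbers for A = epsilon * C * l.
epsilon = 1.0e4     # M^-1 cm^-1
C = 5.0e-5          # mol L^-1
l = 1.0             # cm

A = epsilon * C * l
T = 10 ** (-A)                       # transmittance
print(f"A = {A:.2f}, transmitted fraction = {T:.3f}, absorbed fraction = {1 - T:.3f}")
```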
3. Photochemical Kinetics
Quantum yield is the major characteristic of a photochemical reaction. The quantum yield, also called the quantum efficiency, is defined as the number of events occurring per photon absorbed. These events might be related to physical processes responsible for energy dissipation (such processes are discussed in Basic Photophysics), but they also might be related to molecules of a chemical product formed upon photoirradiation. Generally, the (total) quantum yield of a photoreaction, Φ, is:
$$\Phi = \frac{\text{number of events}}{\text{number of photons absorbed}} \qquad (1)$$
Eq. (1) would define the quantum yield of product formation, $\Phi_P$, if the number of product molecules were the quantity determined. If the two numbers in Eq.(1) are measured per unit time and volume, then the quantum yield is expressed in terms of rates (for more information on the rate of reaction, see the IUPAC Gold Book):
$$\Phi = \frac{\text{rate of the process of interest}}{\text{rate of photon absorption}} \qquad (2)$$
The latter quantity is also referred to as the differential quantum yield. Notice that these two definitions of the quantum yield agree only if the yield is constant during the course of the reaction. Eqs.(1) and (2) indicate that two separate measurements may be required to determine a quantum yield. In the simplest set-up, a reaction cell is mounted in a fixed position relative to the light source. The cell is charged with the sample of interest and irradiated. Photochemical conversion is determined with a suitable experimental technique (spectroscopy, chemical analysis, etc.). Afterwards, the cell is replaced with an actinometer, which is also irradiated. Before describing how actinometers work, it is important to say again that the amount of the radiation absorbed by the sample, rather than the total amount of light, has to be quantified.
An actinometer is a physical device or chemical system which is used to determine the number of photons in a light beam. Physical devices convert the energy of absorbed photons into another energy form, which may be easily quantified. Devices that operate by converting photon energy into heat represent 'primary' standards of actinometry. Other physical devices and chemical systems must be calibrated. Chemical actinometers are photoreactive mixtures with well-established photochemistry and known quantum yields. Two representative systems for liquid-phase actinometry are the potassium ferrioxalate system and the azobenzene system. In both cases, the photoconversion is monitored spectrophotometrically. It is interesting that the most frequently used ferrioxalate system has relatively complex chemistry. Its description in the textbooks hardly goes beyond the statement that Fe(III) is reduced and oxalate is simultaneously oxidized upon photoirradiation. In contrast, the photochemistry of azobenzene is extremely simple (Scheme 1). The isomerization reaction proceeds cleanly in both directions and the solution may be regenerated and reused many times.
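Once the photons absorbed by the sample (from the calibrated actinometer) and the product formed are expressed in the same units, the quantum-yield arithmetic of Eq.(1) is a one-liner; the numbers below are hypothetical:

```python
# Hypothetical numbers: photons absorbed (from the actinometer) and product formed
# (from chemical analysis), both expressed in moles.
photons_absorbed = 1.0e-5     # einstein (mol of photons) absorbed by the sample
product_formed = 2.0e-6       # mol of product

phi_p = product_formed / photons_absorbed
print(f"Quantum yield of product formation: {phi_p:.2f}")   # 0.20 with these numbers
```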
Scheme 1
Scheme 1
A chemical reaction is just one of multiple routes to the loss of excitation. The light absorption produces an excited-state species that inevitably loses its energy through various deactivation mechanisms. To highlight essential features of photochemical kinetics, we will analyze the simplest system with multiple pathways of deactivation that are characterized by the rate constants corresponding to unimolecular irreversible processes. In the present context, a clear-cut distinction between photophysical processes and a single photochemical reaction will be made. It is assumed that one-photon absorption leads to the direct population of the reactive singlet excited state. The mechanism described corresponds to a scheme shown in Scheme 2.
Scheme 2
Scheme 2. Kinetic scheme for a simple system with a photoreactive singlet state.
The rate constants kf, kic, and kisc refer to spontaneous emission, internal conversion and intersystem crossing, respectively. The rate constant kr corresponds to the chemical reaction. Assuming that the population of S* by light absorption is characterized by a constant rate, W, in mol s^-1, and that the steady-state approximation (see IUPAC Gold Book) can be applied to the excited species, we obtain the expression for the quantum yield of the photochemical reaction, $\Phi_0$:
$$\frac{d[S^*]}{dt} = W - (k_f + k_{ic} + k_{isc} + k_r)[S^*] = 0 \qquad (3)$$
$$\Phi_0 = \frac{k_r [S^*]}{W} = \frac{k_r}{k_f + k_{ic} + k_{isc} + k_r} \qquad (4)$$
[Note: Eq.(3) was solved to obtain an expression for W, which was then used in Eq.(4).]
Quantum yields for the three other processes shown in Scheme 2 can be defined in the same way. Notice that the sum of all the quantum yields is equal to unity, as stated by the second law of photochemistry. The quantum yield for the photoreaction can be interpreted as the fraction of singlet excited molecules that undergo chemical transformation, i.e., the ratio of the number of molecules that react to the total number of S*. Because there always exist several routes to the loss of excitation, the quantum yield rather than the absolute rate constant must be used to compare the efficiencies of photochemical conversion for different reactive systems.
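A numerical sketch of Eq.(4) and of the statement that the primary quantum yields sum to unity; the rate constants below are illustrative values, not numbers from the text:

```python
# Quantum yields for the four parallel channels of Scheme 2.
k = {"fluorescence": 1.0e8,            # k_f, s^-1
     "internal conversion": 5.0e7,     # k_ic
     "intersystem crossing": 2.0e8,    # k_isc
     "reaction": 1.5e8}                # k_r

k_total = sum(k.values())
for name, ki in k.items():
    print(f"Phi({name}) = {ki / k_total:.3f}")
print("sum of primary quantum yields:", round(sum(ki / k_total for ki in k.values()), 6))  # ~1
```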
The steady-state approximation is inapplicable under conditions of time-dependent excitation. If a very short laser pulse is used to produce the excited species (so-called δ-pulse excitation), the light absorption rate W can be neglected after the pulse and Eq.(3) is easily integrated:
$$[S^*] = [S^*]_0\, e^{-t/\tau_0} \qquad (5)$$
where $\tau_0$ is the observed lifetime of the singlet excited state:
$$\tau_0 = \frac{1}{k_f + k_{ic} + k_{isc} + k_r} \qquad (6)$$
The observed lifetime is an average quantity defined for a large ensemble of the excited molecules. It can be measured with any experimental technique that is capable of detecting the excited species. To take an example, time-resolved fluorescence gives a convenient way of measuring $\tau_0$, provided that certain experimental conditions are fulfilled. Fluorescence detection relies on the photocurrent signal, which is linearly proportional, within certain limits, to the total number of photons emitted. The number of photons, in its turn, is proportional to the number of molecules in the singlet excited state, because individual molecules have a time-independent probability of emitting light. The fluorescence intensity measured as a function of time therefore depends on the concentration of S*, which is given by Eq.(5). In contrast to the observed lifetime, the radiative lifetime $\tau_f = 1/k_f$ corresponds to the fluorescence decay rate in the absence of any other deactivation processes.
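The distinction is easy to see numerically; the sketch below reuses the same illustrative rate constants as in the previous snippet and applies Eqs.(5) and (6):

```python
import math

# Observed vs. radiative lifetime for the same illustrative constants.
k_f, k_ic, k_isc, k_r = 1.0e8, 5.0e7, 2.0e8, 1.5e8   # s^-1

tau_0 = 1.0 / (k_f + k_ic + k_isc + k_r)   # observed lifetime, Eq.(6)
tau_f = 1.0 / k_f                          # radiative lifetime
print(f"tau_0 = {tau_0 * 1e9:.2f} ns, tau_f = {tau_f * 1e9:.1f} ns")

# Decay of [S*] after a delta-pulse, Eq.(5):
for t_ns in (0, 1, 2, 5):
    print(f"t = {t_ns} ns: [S*]/[S*]_0 = {math.exp(-(t_ns * 1e-9) / tau_0):.3f}")
```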
As mentioned above, light should be considered as one of the reactants in photochemical reactions. Therefore, 'effective concentration' of photons, which is given as the number of light quanta absorbed by the photoreactant, needs to be specified when one compares concentration-time profiles for photochemical conversion. In contrast, the efficiency of a thermal reaction can be visualized by plotting the normalized concentration of the reactant or product against time. To clarify this point we need to analyze the photochemical kinetics in more detail. According to the reaction scheme shown in Scheme 2, the rate of the product formation is:
$$\frac{d[P]}{dt} = k_r[S^*] = \Phi_0\, W \qquad (7)$$
Inasmuch as concentrations are determined spectrophotometrically, it is useful to rewrite Eq.(7) in terms of absorbances:
Formula 16
where $A_{irr}$ and $A_{obs}$ refer to the absorbance at the irradiation and observation wavelength, respectively. The absorbance, $A_{obs}^{\infty}$, is measured at the observation wavelength and infinite time, i.e., after complete conversion to the product. In the case of very weak absorption (A<<1), Eq.(8) is easily integrated and experimental data are linearized in the coordinates corresponding to the following equation:
Formula 20
If we assume that a wavelength where only the reactant absorbs was selected for observation, then the absorbance in Eq.(9) can be replaced with the reactant concentration. Now a simpler equation, which looks very similar to the rate equation for a first-order thermal reaction, can be obtained:
Formula 21
In contrast to thermal reactions, the proportionality coefficient Formula 22 in Eq.(10) is not just a rate constant independent of the initial concentration, but a complex quantity depending on three parameters. Therefore, the time dependence of the reactant concentration cannot be directly used to compare the photoreactivity of different molecules, or even of the same molecule if it was measured under different irradiation conditions. We could say that we need to know not only the reactant concentration but also the effective 'light concentration', Formula 23, in order to analyze photochemical systems. Even if we use the same light source for two systems we cannot directly compare results unless we know how much light was absorbed by each system. Figure 4 shows simulated concentration profiles for two systems that undergo the same photochemical reaction S --> P, but differ in spectral parameters and quantum yields. This plot shows how misleading a comparison of relative concentrations plotted against time can be for photochemical reactions if the system is not completely specified.
Figure 4
Figure 4. Time profiles for the normalized concentrations of two compounds undergoing an irreversible first-order photoreaction with quantum yields of 0.1 and 1.0. Solutions containing these compounds at the same initial concentrations were irradiated with the same mercury lamp equipped with a 365 nm narrow-band filter. Which line, blue or red, corresponds to the molecule with the higher quantum yield (more photoreactive)? This question can only be answered when the absorbances at the irradiation wavelength are compared (see the insert for the absorption spectra). The substance corresponding to the blue curve has a 60 times larger absorbance at 365 nm, which is responsible for its faster conversion despite the 10 times lower quantum yield of its photoreaction. As to the question, the correct answer is that the red line corresponds to the molecule with the photoreaction quantum yield of 1.0. However, its extremely weak absorption at 365 nm results in a relatively slow phototransformation of this compound.
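The same point can be reproduced with a crude numerical simulation; the quantum yields, absorptivities, and light intensity below are invented and are not the parameters behind Figure 4, and the product is assumed not to absorb at the irradiation wavelength:

```python
# A crude forward-Euler simulation of S --> P under continuous irradiation.
I0 = 1.0e-6          # mol photons L^-1 s^-1 entering the sample
l = 1.0              # optical path, cm
S0 = 1.0e-4          # initial reactant concentration, mol L^-1
dt = 1.0             # time step, s

compounds = {
    "high quantum yield, weak absorber":  (1.0, 50.0),     # (Phi, epsilon in M^-1 cm^-1)
    "low quantum yield, strong absorber": (0.1, 3000.0),
}

for name, (phi, eps) in compounds.items():
    S = S0
    for _ in range(int(600 / dt)):                      # irradiate for 10 minutes
        absorbed = I0 * (1.0 - 10 ** (-eps * l * S))    # rate of light absorption by S
        S = max(S - phi * absorbed * dt, 0.0)
    print(f"{name}: {100 * (1 - S / S0):.1f}% converted after 10 min")
```

With these numbers the strongly absorbing, low-quantum-yield compound converts faster, which is exactly the ambiguity the figure is meant to illustrate.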
In a typical photochemical experiment the concentration of the excited species and transients (S* and T* in Scheme 2) is negligible in comparison to that of the ground-state species (S and P). Assuming that T* is not reactive and initially we have only the reactant at the concentration [S]0 we can write:
[S]0 ≈ [P]+[S].
Assuming that light absorption by all transients can also be neglected, we can rewrite Eq.(7) as follows:
$$\frac{d[P]}{dt} = \Phi_0\, W([S],[P],t)$$
where W([S],[P],t) is the rate of light absorption by the reactant, which is a function of time t and of the concentrations of both the reactant and the product. To obtain the expression for W we will use the Beer-Lambert law (Figure 3) and the fact that the absorbances of the components of a mixture are additive:
$$W = I_0\,\frac{\varepsilon_S [S]}{\varepsilon_S [S] + \varepsilon_P [P]}\left(1 - 10^{-(\varepsilon_S [S] + \varepsilon_P [P])\,l}\right)$$
Here I0 is the intensity of monochromatic light entering the sample expressed in mol L^-1 s^-1, $\varepsilon_S$ and $\varepsilon_P$ are the molar absorptivities of the reactant and product (M^-1 cm^-1), and l is the optical path (cm). The course of a photochemical reaction is often monitored spectrophotometrically at wavelength(s) different from the irradiation wavelength. By using the Beer-Lambert law we may write for the absorbances measured at the irradiation and observation wavelengths at time t: $A_{irr} = (\varepsilon_S[S] + \varepsilon_P[P])\,l$ and $A_{obs} = (\varepsilon_S'[S] + \varepsilon_P'[P])\,l$, where the primed absorptivities refer to the observation wavelength. By using the absorbance, $A_{obs}^{\infty}$, measured at the observation wavelength and infinite time, together with the three equations shown above, we obtain Eq.(8). In the general case, Eq.(8) cannot be integrated analytically. But it can easily be solved for very low and very high absorbance, and also for the special case when the product does not absorb at the irradiation wavelength, $\varepsilon_P = 0$. In the latter case we obtain:
Formula 31
Formula 32
4. Theoretical Models of Photochemical Reactions
Within the Born-Oppenheimer approximation, potential energy surfaces govern nuclear motion and, therefore, chemical reactivity. However, in studying photochemistry it is also good to keep in mind that this is just an approximation, which is not automatically valid for all possible geometries and experimental conditions. A comprehensive picture of nuclear dynamics can be obtained from the time-dependent Schrödinger equation. However, a detailed account of nuclear motion can also be inferred from classical trajectories for a point moving without friction on the potential energy surface. The moving point may represent a chemically reactive system which consists of one or several molecular species. In the latter case one considers all reactants as a "supermolecule". The forces acting on the nuclei are given by minus the gradient of the potential (electronic energy) at this point. Recall that the gradient for a function of many variables is a vector formed by the first derivatives with respect to each of the variables.
Points on the surface where the gradient vector has zero length are called stationary points. Their location is of primary importance for chemical reactivity. The nature of a stationary point is determined by the second derivatives, collected in the so-called Hessian matrix. If all the eigenvalues of this matrix are positive, the point is a minimum, which can be assigned to a reactant, product or intermediate. A first-order saddle point has all positive eigenvalues except for one, which is negative. This means that it is a maximum with respect to a single coordinate and a minimum in all other directions. Passage from one minimum to another describes a chemical reaction, and a saddle point between the two minima represents the transition state. Because of the difficulty of representing multidimensional hypersurfaces, one-dimensional cross-sections through them are frequently used. The cross-sections may be compared to the potential energy curves of diatomic molecules and may often look similar to such curves. However, they must be interpreted with caution. For example, a saddle point may appear both as a minimum and as a maximum on two different cross-sections.
Thermal reactions are generally considered to be adiabatic, i.e., they are represented by the motion on the lowest potential energy surface. Another way of putting this is that these reactions occur exclusively in the ground state. Therefore, knowledge of the ground-state potential surface is sufficient for modeling thermal reactivity with reaction rate theories. In contrast, the theoretical treatment of any photochemical reaction requires information about potential energy surfaces for more than one state. The photoreaction starts from the ground state of the reactant(s), necessarily proceeds via electronically excited state(s), and ends with the product(s) in the ground state. Therefore, photochemical reactions inevitably include diabatic processes, i.e., a transition from one potential surface to another. This statement should illuminate the complexity of the theoretical analysis of photoreactions, especially because reliable calculations of the potential energy surfaces for electronically excited states of reasonably large molecules still represent a challenge for computational chemistry. Nevertheless, many fundamental aspects of complex photoinduced reactions still can be understood from qualitative analysis of potential energy surfaces.
Figure 5
Figure 5. Franck-Condon Principle. The vibrational functions of two electronic states are approximately harmonic oscillator-like functions. The most probable position of the nuclei in the ground state corresponds to the maximum of the probability distribution function for the zero level (red curve). The energy gap between vibrational levels is usually large enough so that population of excited levels is small. An electronic transition caused by light absorption is represented by a vertical line (block arrow). The highest probability of the transition corresponds to the largest overlap between the ground-state and excited-state vibrational wavefunctions. The overlap is greatest for the S1 vibrational level whose classical turning point is near the equilibrium distance of the ground state.
Upon light absorption, a molecular system may be transferred from the ground state to an electronically excited state. According to the Franck-Condon principle, this transition tends to occur between those vibrational levels of two electronic states that have the same nuclear configurations. The time required for the absorption of a light quantum (~1 fs) is much shorter than a characteristic time of a nuclear vibration (~100 fs), and therefore, the nuclei cannot change their relative positions during the act of excitation. In other words, transitions between two potential energy surfaces can be represented by vertical lines connecting them (see Figure 5). In the course of a photochemical reaction there is a considerable time interval when the molecular system is out of the thermal equilibrium (a few ps in condensed phase, up to ms in low pressure gas phase reactions). It means that the population of vibrational energy levels may differ strongly from that predicted by the Boltzmann distribution (see Basic Photophysics). As a consequence of "vertical" electronic transitions and different equilibrium geometries of the ground and first excited state (Figure 5), immediately after excitation the molecular system will likely be in an excited vibrational state ("hot" molecule).
The amount of extra energy available for nuclear motion is a function of the excitation energy (wavelength). Vibrational excitation may also result from internal conversion or intersystem crossing, when electronic energy is converted into kinetic energy of the nuclei. It is known that internal conversion from S1 to S0 can be so fast in some systems that thermal equilibration is first achieved only in the ground state. In solution, "hot" molecules in the first excited or ground state are quickly cooled down via interactions with the surroundings. Thermal equilibrium is normally established within a few picoseconds. Nevertheless, this time is long enough to span several vibrational periods. The excess kinetic energy may help the reactant(s) to overcome a barrier and relax into a new minimum. Chemical reactions of this type are called "hot". They occur preferentially in the gas phase at low pressure, where the molecular collision frequency is much smaller than in the condensed phase.
We have already discussed that the theoretical analysis of thermal reactions can be accomplished when minima and saddle points on the ground-state surface are located. The situation is much more complex for photochemical reactions. Difficulties emerge when one needs to explore several potential energy surfaces in detail. Luckily, only a few excited-state surfaces are of importance for the majority of photoreactions. Even so, the topology of the three surfaces, S0, S1 and T1, which are almost without exception needed to understand a photoreaction mechanism, may be extremely complex. Minima on the S1 and T1 surfaces may be anticipated in the regions near the ground-state equilibrium geometries and near geometries corresponding to intermolecular complexes. The latter minima reflect the much larger polarizability of excited species and therefore their higher affinity for other molecules. Excited complexes can be formed from two molecules of the same type (excimer), or from two different molecules (exciplex). Return from minima of these two types to the ground state usually does not produce a chemical change (Figure 6a) unless significant geometrical changes accompany the excitation and/or multiple closely spaced minima exist on the ground-state surface (Figure 6b). Formaldehyde provides an example of large geometrical distortion in the excited state: the molecule is planar in S0, and pyramidal in S1 and T1.
Figure 6
Figure 6. Schematic representation of the energy profiles corresponding to the ground state and the first excited state (a) for a system that undergoes an excited-state reaction but achieves no chemical conversion upon returning to the ground state and (b) for a system with partial conversion upon jumping to the ground state. Light absorption is represented by red block arrows, light emission by white block arrows.
In addition to localizing minima on the potential surfaces, finding the regions where the surfaces may cross or come very close to each other is of primary importance. The Born-Oppenheimer approximation is generally invalid in the vicinity of surface crossings and additional effects must be taken into account to describe the time evolution of the molecular species. The non-crossing rule states that potential energy curves can cross only if the electronic states have different symmetry (spatial or spin). Therefore the wavefunctions in the crossing region predicted by the simplest approximation have to be modified to avoid crossing of the potential energy curves (Figure 7). The non-crossing rule is strictly valid only for diatomic molecules. Intersection or touching of potential energy surfaces in polyatomic systems is generally allowed even if they belong to states of the same symmetry. Recent studies have shown that such crossings, also called conical intersections because of the topology of the surfaces at the crossing point, are quite common. The question whether a true conical intersection or an avoided crossing is observed for a particular system of interest can be answered only with quantum mechanical calculations of high accuracy. Such calculations have recently become feasible for relatively large organic molecules, but reliable data are available for just a few systems.
Figure 7
Figure 7. Adiabatic (solid) and non-adiabatic energy curves (dashed) for the S0 and S1 states. The light absorption is a vertical transition (block red arrow). Nuclear motion after excitation is governed by the S1 curve. Blue arrows show the motion in the case of avoided crossing and the black broken arrow corresponds to the allowed crossing.
Two hypothetical surfaces for the ground state and an excited state are depicted in Figure 8. The fact that multidimensional potential-energy surfaces may have numerous regions where they come very close to each other is of great importance for understanding photochemical mechanisms. First, non-radiative transitions such as internal conversion and intersystem crossing have a much higher probability in these regions. Second, conical intersections (or weakly avoided crossings) serve as bottlenecks through which the photoreaction passes on the way from excited-state species to the ground-state products. In this sense crossing points are analogous to the transition states on the adiabatic surfaces. An essential distinguishing feature of a conical intersection is the presence of two independent pathways for the reaction (path f), as compared to the single path through a saddle point.
Figure 8
Figure 8. Potential energy surfaces of the ground and an excited state with various pathways (dashed lines) following the light absorption (red arrow).
"Vertical" excitation typically leads to vibrationally excited species. Thermal equilibrium may be established during the lifetime of the excited state, meaning that vibrational relaxation takes place and the photoreaction starting from a minimum on the excited-state surface is said to have an excited-state intermediate (path a). Return from the first or even the second minimum reached on the excited-state surface often does not produce a new species (right part of path c) and the whole sequence may be considered as a photophysical process. A typical example is the protolytic dissociation of 1-naphthol in the singlet excited state (Scheme 3). The acidity of this molecule increases dramatically upon excitation (pKa = 9.2 and 0.4 for S0 and S1) and proton is transferred to a suitable acceptor such as water. It has to be noted that Scheme 3 does not account for all photoprocesses occurring in
1-naphthol solutions.
Scheme 3
Scheme 3
The primary excited-state intermediate in Figure 8 may produce a new molecule in the excited state, which undergoes further modifications (path b), or it returns to a new minimum on the ground-state surface (left part of path c). A jump from the excited-state surface can be accomplished via a non-radiative transition (path c) or light emission (path d). An illustrative example of an excited-state intermediate in a photochemical reaction is the interaction of 9-cyanophenanthrene with tetramethylethene in benzene, which forms a cycloadduct via a singlet exciplex (Scheme 4).
Scheme 4
Scheme 4
The reaction sequences represented by motion on the excited-state adiabatic surface are usually called adiabatic reactions. If the loss of excitation occurs anywhere on the reaction path between the points corresponding to reactants and products, then such a photoreaction may be referred to as diabatic (also called non-adiabatic). It is also possible that the vibrational relaxation first occurs in the ground state (path e in Figure 8). Such a photoreaction is called "direct". A direct reaction proceeds through a funnel (path f), which is a region of the potential energy surface where the probability of a jump from one energy surface to another is very high. Funnels usually correspond to conical intersections or weakly avoided crossings. To characterize a molecule in a funnel one needs not only the positions of the nuclei but also their velocity vectors. In some systems passage through a conical intersection may also be separated from the excited-state minimum initially populated by a small barrier (paths a and c, assuming that the surfaces now cross at the point corresponding to path c). The presence of an S1-S0 conical intersection separated from the "vertical" geometry by a small barrier has been predicted for benzene. This funnel is responsible for the opening of an efficient deactivation channel leading to the disappearance of fluorescence and to isomerization (Scheme 5) when the benzene molecule has enough vibrational energy to overcome the barrier.
Scheme 5
Scheme 5
5. Factors Determining Outcome of a Photochemical Reaction
The wide variety of molecular mechanisms of photochemical reactions makes a general discussion of such factors very difficult. The chemical nature of the reactant(s) is definitely among the most important factors determining chemical reactivity initiated by light. However, a better understanding of this aspect may be gained from a closer examination of the individual groups of chemical compounds. The nature of excited states involved in a photoreaction is directly related to the electronic structure of the reactant(s).
Environmental variables, i.e., parameters that are not directly related to the chemical nature of the reacting systems, may also strongly affect photochemical reactivity. It is useful to distinguish between variables that are common for thermal and photochemical reactions, and those that are specific for the reactions of excited species. The first group includes the reaction medium, reaction mixture composition, temperature, and isotope effects, to name the most important. The distinctive feature of photochemical reactions is that these parameters almost always operate under conditions where one or more photophysical processes compete with the photoreaction. The result of a photoinduced transformation can only be understood as the interplay of several processes corresponding to passages on and between at least two potential energy surfaces. We saw that even the simplest system, shown in Scheme 2, corresponds to parallel reactions in terms of reaction kinetics.
Reaction medium may directly modify the potential energy surfaces of the ground and excited states and hence affect the photoreactivity. The outcome of the two reactions presented in Schemes 3 and 4 changes dramatically when solvent polarity and hydrogen bonding capacity are changed. The protolytic photodissociation of 1-naphthol is completely suppressed in aprotic solvents because of unfavorable solvation energies for both the anion and the proton. Under such conditions, the proton transfer reaction cannot compete with the deactivation. The formation of two new products (Scheme 6) in the reaction of 9-cyanophenanthrene with tetramethylethene is observed in methanol, because the exciplex dissociates into radical ions. This means that the potential energy minimum corresponding to the ion-radical pair shifts below that of the exciplex in polar solvents. The ion-radical formation is often followed by proton transfer reactions.
Scheme 6
Scheme 6
Solvent viscosity will strongly affect photoreactions where the encounter of two reactants or a substantial structural change is required. In highly viscous or solid solutions the loss of excitation via light emission or unimolecular non-radiative deactivation is more probable than a chemical modification of the excited species. On the other hand, slow diffusion in viscous solutions may prevent self-deactivation of the triplet state via a bimolecular process called triplet-triplet annihilation and enhance the efficiency of a photoreaction from this state. Triplet-triplet annihilation belongs to the electronic-energy transfer processes, which may be classified as quenching of excited states. The quenching rate is a very important factor in discussing effects of the medium and of the reaction mixture composition on photoreactivity. Quenching of excited states is a general phenomenon that is realized via different mechanisms. Any process that leads to the disappearance of the excited state of interest may be considered as quenching. In general it can be represented as:
Scheme 7
Scheme 7
Notice that the quencher molecule Q may belong to the same kind of chemical species as the excited molecule, and be either in the ground or in an excited state. S' corresponds to the ground state or to an excited state of lower energy. For the purpose of our discussion we separated quenching described by Scheme 7 from all other processes, including the photoreaction of interest introduced in Scheme 2. Obviously, this separation is just a matter of convention. Generally, any chemical reaction of the excited species can be considered as a quenching process for fluorescence. Scheme 7 can easily be incorporated into the reaction scheme (see Scheme 8) and into our kinetic analysis as an additional pseudo-unimolecular rate constant kq[Q].
Scheme 8
Scheme 8. Kinetic scheme for a simple system with a photoreactive singlet state in the presence of a quencher.
In the presence of a quencher, Q, the observed lifetime of the excited molecule and therefore the quantum yield of the photoreaction may be significantly reduced.
(11) $\tau = \dfrac{1}{k_f + k_{ic} + k_{isc} + k_r + k_q[Q]}$
(12) $\Phi = \dfrac{k_r}{k_f + k_{ic} + k_{isc} + k_r + k_q[Q]}$
Eqs. (4), (6), (11), and (12) can be combined into a single one:
(13) $\dfrac{\Phi_0}{\Phi} = \dfrac{\tau_0}{\tau} = 1 + k_q \tau_0 [Q]$
where the index "0" refers to the system without quenching. If we considered the fluorescence quantum yield instead of the photoreaction yield, we would obtain a similar equation, which is known as the Stern-Volmer equation. The mechanism just considered corresponds to so-called dynamic quenching, which results purely from encounters between excited molecules and the quencher. It is also conceivable that Q and S form a ground-state complex, which has a different reactivity and/or does not fluoresce. This situation is referred to as static quenching. In the case of static quenching, the quantum yield is diminished but the observed lifetime remains constant. In any event, the existence of quenching emphasizes the importance of concentration as a controlling factor in photochemistry. For many systems, the quenching rate constant, kq, is close to the diffusion-controlled limit, which is of the order of 10^10 M^-1 s^-1 at ambient temperature in liquid solutions. This means that quenching effects may become noticeable at quencher concentrations > 1 mM and > 1 µM for the singlet state and the triplet state, with characteristic lifetimes of 10 ns and 10 µs, respectively. Thus even minor impurities may cause photoreaction quenching. The concentration of the photoreactive compound S may also play an important role if self-quenching takes place.
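A quick numerical illustration of Eq.(13), using the diffusion-limited quenching constant and the representative singlet and triplet lifetimes quoted above:

```python
# Phi_0/Phi = 1 + k_q * tau_0 * [Q] for a near diffusion-controlled quencher.
k_q = 1.0e10                                   # M^-1 s^-1
lifetimes = {"singlet, tau_0 = 10 ns": 10e-9,  # s
             "triplet, tau_0 = 10 us": 10e-6}

for name, tau0 in lifetimes.items():
    for Q in (1e-6, 1e-4, 1e-3):               # quencher concentration, mol L^-1
        ratio = 1 + k_q * tau0 * Q
        print(f"{name}, [Q] = {Q:.0e} M: Phi_0/Phi = {ratio:g}")
# Quenching of the long-lived triplet is already significant at micromolar quencher levels.
```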
Because of the energy conservation law, the excitation energy in a quenching process must be either dissipated in the form of thermal energy, accumulated in the form of chemical energy of the quenching products, or transferred to the quencher Q. According to these three possibilities one may distinguish physical mechanisms of quenching from chemical ones and from energy transfer. However, a clear-cut distinction is not always possible or worth making. The formation of excimers is frequently observed in solutions of aromatic hydrocarbons, such as anthracene or pyrene. The potential energy surfaces in these systems frequently look similar to that shown in Figure 6a. Thus, the entire reaction sequence leads only to quenching of the excited monomer. The quenching will be seen in a reduced quantum yield of the monomer fluorescence and of the monomer photoreaction. An illustrative example is 1-hydroxypyrene, which is a moderately strong photoacid in water. In the singlet excited state it readily transfers a proton to a suitable base such as the acetate anion (analogous to the reaction shown in Scheme 3). But at higher concentrations of 1-hydroxypyrene, the quantum yield of the photoinduced proton transfer decreases because of the formation of the excimer, which is not as efficient a proton donor. Exciplexes are typically more reactive, and provide examples of combined physical and chemical quenching (see Schemes 4 and 6).
Fluorescence self-quenching in aqueous solutions of dyes, such as fluorescein or eosin, has been known for more than 100 years. Several mechanisms involving collisional quenching, ground-state aggregation and energy transfer to the aggregates have been proposed to account for this phenomenon. In principle, quenching by the ground state could be observed for almost every excited species under conditions favoring the close proximity of two molecules. That is why it is often reported for systems with confined geometries such as those of surfactant assemblies. There exist many examples of self-quenching of the triplet state that play a role in photochemistry. For example, the quenching of anthrone triplets by its ground state in benzene occurs with a rate constant close to 10^9 M^-1 s^-1 and results in the formation of two radicals (Scheme 9). The photoreactivity of 10,10-dimethylanthrone differs dramatically, because the methyl substituents prevent the reactive self-quenching.
Scheme 9
Scheme 9
Compounds with heavy atoms and paramagnetic species increase the rate of intersystem crossing. It has to be emphasized that such molecules enhance the efficiency of both the S1 --> T1 and T1 --> S0 transitions, and should be considered as quenchers of both singlets and triplets. The yields of photochemical reactions originating from the singlet excited state, as a rule, are adversely affected by these quenchers. In contrast, the efficiency of photoconversion from the triplet state is usually increased, because the triplet lifetime remains sufficiently long in the presence of a quencher and the overall effect is largely determined by the increase in the yield of triplets. An example is given by the photoreaction of anthracene with 1,3-cyclohexadiene, which mainly forms product A (Scheme 10). In the presence of methyl iodide (iodine is a heavy atom), the major product is compound B, which was also obtained in small quantities in the absence of the quencher. The results suggest that B is formed in a triplet-state reaction.
Scheme 10
Scheme 10
The most important paramagnetic species is molecular oxygen, which is known to be a very efficient quencher of excited states. Quenching by O2 is particularly important for the triplet state because of its long lifetime (see Eq.(13)), so that even traces of oxygen may strongly affect photoreactions occurring through the triplet state. The ground state of O2 is a triplet state. The first singlet excited state is only 22 kcal mol^-1 above the ground state. This energy corresponds to near-IR radiation with a wavenumber of 7882.4 cm^-1 or a wavelength of 1269 nm. Singlet oxygen is a reactive species interacting with a wide variety of substrates. It can be generated using dyes with a high triplet yield, such as rose bengal or methylene blue. As mentioned above, this process belongs to the type of quenching that is called electronic energy transfer.
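A short consistency check of these numbers, starting from the rounded 22 kcal mol^-1 value (the quoted 1269 nm and 7882 cm^-1 correspond to a slightly larger gap, so the rounded figure lands near 1300 nm):

```python
# Unit conversion from the rounded 22 kcal/mol singlet-oxygen excitation energy.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23       # J s, m s^-1, mol^-1

E_per_molecule = 22.0 * 4184.0 / N_A           # 22 kcal/mol in J per molecule
wavelength_nm = h * c / E_per_molecule * 1e9
wavenumber_cm = 1.0 / (wavelength_nm * 1e-7)   # 1 nm = 1e-7 cm
print(f"~{wavelength_nm:.0f} nm, ~{wavenumber_cm:.0f} cm^-1")
```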
The outcome of an energy-transfer process is the quenching of the luminescence or photoreaction associated with the donor and the initiation of the luminescence or photoreaction characteristic of the energy acceptor. The subsequent reactions of the acceptor are said to be sensitized. Electronic energy transfer can be described by Scheme 7, where Q' has to be an excited-state species. Two general mechanisms of the energy transfer are distinguished: radiative and nonradiative. The radiative mechanism, often described as "trivial", is realized through the emission of light by the donor, and its absorption by the acceptor. Nonradiative energy transfer is a single-step process that requires the direct interaction of the donor and acceptor.
The specific variables of any photoreaction, as compared to thermal chemical processes, are the wavelength and intensity of excitation light. Wavelength dependence of the quantum yield or photoproduct composition may result from the occurrence of "hot" reactions or reactions from higher excited states (S2, T2, etc.). The latter processes have to be extremely fast to compete with internal conversion, which is typically accomplished within 1 ps. The presence of slowly inter-converting conformers or isomers with different absorption spectra may also cause wavelength-dependent photoreactions.
The intensity of excitation light is the key to multi-photon photochemistry (see legend to Figure 3). Because of the n-th power dependence of the absorption rate the photoreaction may become detectable only when the photon flux is above a certain threshold value. In one-photon photoreactions, primary processes are normally not affected by the light intensity. However, the overall reaction might be very sensitive to it, because relatively long-lived intermediates may come into play. These intermediates may absorb light and undergo photochemical reactions, or be involved in bimolecular reactions (e.g., triplet-triplet annihilation) that are strongly dependent on their concentration, and therefore light intensity.
6. Exciting World of Photoreactions
There exists a plethora of photoreactions for practically every class of chemical compounds. These reactions may be categorized according to chemical composition and structure. They may also be classified into different types by using theoretical models for the description of the excited state(s) or the structure of the potential energy surface. However, for our introductory discussion it seems more appropriate just to consider some examples classified by general reaction types (Figure 9).
Figure 9
Figure 9. Multiple reaction pathways for electronically excited species.
Considering the high energies involved in electronic excitation, photoinduced dissociation may be expected to be a typical reaction pathway. However, photodecomposition via dissociative pathway(s) is not so common in photochemistry, particularly for large molecules in solution. This can be easily understood if one recalls that electronic excitation is usually not localized in a particular vibrational mode, and that the primary products of the dissociation have a high probability of recombining because of cage effects. In solution, two fragments of a molecule are trapped within a "cage" of solvent molecules and undergo numerous collisions before they escape the cage. Dissociative processes play a much more important role in gas-phase photochemistry. The photodissociation of small molecules driven by UV radiation is of profound importance for atmospheric photochemistry.
Photodecomposition of O2 and O3 (Scheme 11) may afford the products in different electronic states depending on the excitation wavelength. These processes participate in establishing the peculiar profiles of atmospheric temperature and of the solar radiation spectrum at the Earth's surface. Gas-phase photoionization (removal of an electron) also belongs to the dissociation reactions. The reactions shown in Scheme 12 may occur in the upper atmosphere due to short-wavelength UV radiation from the Sun.
Scheme 11
Scheme 11
Scheme 12
Scheme 12
An important primary photoprocess of carbonyl compounds is alpha cleavage, also known as a Norrish Type I reaction (Scheme 13). Besides recombination, the acyl and the alkyl radicals formed in the primary reaction can undergo numerous secondary reactions that are responsible for the multitude of final products.
Scheme 13
Scheme 13
Rearrangements of electronically excited molecules present one of the most exciting chapters in photochemistry, in the sense that they follow reaction pathways that are usually inaccessible to the ground state (the corresponding ground-state activation barriers are very high). The cis-trans isomerization of double bonds belongs to such reactions. The azobenzene reaction depicted in Scheme 1 provides an instructive example. Scheme 14 shows photoinduced rearrangements of stilbene, which have been studied extensively. In addition to double-bond isomerization, cis-stilbene also undergoes cyclization, with a lower quantum yield, to form dihydrophenanthrene. The cis-trans isomerization of stilbene occurs through rotation around the double bond. In the ground state this rotation encounters a large barrier, i.e., there is a maximum on the ground-state potential energy surface at the geometry corresponding to a twist angle of about 90°. In contrast, both the first singlet excited state and the triplet state have a minimum at approximately the same geometry. The close proximity of the minimum and maximum facilitates a jump to the ground state (compare to path c in Figure 8). The cis-trans isomerization of azobenzene may proceed not only through rotation, but also through nitrogen inversion, i.e., in-plane motion of the phenyl ring.
Scheme 14
Scheme 14
Two illuminating examples of photoinduced rearrangements of substituted benzaldehydes are presented in Scheme 15. Intramolecular hydrogen transfer in 2-hydroxybenzaldehyde is an extremely fast reaction in the singlet excited state. However, the process is completely reversed upon a jump to the ground state. Overall, no chemical conversion is observed, and the excitation energy is either dissipated as heat or emitted as light of longer wavelength (see Figure 6a). This behavior is typical of aromatic carbonyl compounds with ortho-hydroxy groups, and they have found application as UV protectors, for example in sunscreens. Molecules acting as UV protectors absorb light that is harmful to biological molecules and convert it into heat or radiation that is biologically benign. In contrast, intramolecular hydrogen transfer in 2-nitrobenzaldehyde initiates a sequence of ground-state reactions that leads to 2-nitrosobenzoic acid. The latter molecule is a moderately strong acid and dissociates in aqueous solution, so that the photochemistry of 2-nitrobenzaldehyde can be used to create a rapid pH-jump in solution. Many biological macromolecules, such as proteins and nucleic acids, show pH-dependent conformational changes. Those changes can be monitored in real time by using the light-induced pH-jump.
Scheme 15
Scheme 15
Scheme 9 gives an example of photoinduced abstraction. The two reactions shown in Schemes 3 and 6 can also be classified as abstraction reactions. Here, a proton or an electron is abstracted from the excited molecule by a ground-state species. These processes, often grouped under the term "charge-transfer reactions", play an important role in many photoinduced processes. Excited-state electron transfer constitutes a decisive step in the overall process of photosynthesis. Hydrogen atom abstraction reactions have been known for more than 100 years and belong to the most extensively studied photoprocesses. The reaction of benzophenone in the triplet excited state with isopropanol provides another example of this type of photoreaction (Scheme 16). The dimethylketyl radical produced transfers a hydrogen atom to ground-state benzophenone to produce another diphenylketyl radical. It is interesting that only one photon is needed to convert two molecules of the reactant, so the quantum yield of benzophenone decomposition has a limiting value of 2.
Scheme 16
Scheme 16
Intramolecular hydrogen abstraction is a common photoreaction of carbonyl compounds with a hydrogen atom attached to the fourth carbon atom (Scheme 17). The resulting diradical can form a cycloalkanol or undergo C-C bond fission to give an alkene and an enol. The latter is usually thermodynamically unfavorable and converts to a ketone. Intramolecular abstraction of a γ-hydrogen is known as a Norrish Type II process.
Scheme 17
Scheme 17
Photosubstitution reactions are well characterized for substituted aromatic compounds. An illustrative example is the photoreaction of m-nitroanisole with cyanide ion (Scheme 18). The mechanism involves a complex of the aromatic molecule in the triplet state with the nucleophile.
Scheme 18
Scheme 18
A photohydrolysis reaction in aqueous solution (substitution with OH-) has been utilized to provide the rapid, light-controlled release of biologically active molecules, such as amino acids, nucleotides, etc. Biologically inert compounds affording such release upon photoirradiation are referred to as "caged" compounds. Two-photon photochemistry is of great interest for such studies, because one can utilize red light or IR radiation, which is not absorbed by biomolecules and is biologically benign. The two-photon photohydrolysis of the glutamate ester of hydroxycoumarin (Scheme 19) is characterized by a reasonably high cross-section for two-photon absorption.
Scheme 19
Scheme 19
Addition reactions are quite common among electronically excited molecules. An example of a cycloaddition occurring from both the singlet and the triplet excited state is shown in Scheme 10. Photoinitiated cycloaddition reactions are of great importance for understanding the mutagenic effects of UV radiation. Two major photolesions produced in DNA by UV light are cyclobutane pyrimidine dimers (CPD) and pyrimidine(6-4)pyrimidone adducts (P64P) (Scheme 20). These lesions are thought to represent the predominant forms of premutagenic damage. Generally, the overall yield of P64P is substantially lower than that of CPD, but CPD was found to be less mutagenic than the P64P adduct. CPD is formed in a cycloaddition involving excited thymine or cytosine and another pyrimidine nucleobase in the ground state. The proposed, but still unproven, mechanism for P64P formation involves an unstable intermediate with a four-membered ring, which undergoes fast H-transfer and ring opening.
Scheme 20
Scheme 20
The addition of singlet oxygen to double bonds is well known. Because singlet oxygen can be generated photochemically via energy transfer, the entire reaction sequence, such as the one shown in Scheme 21, provides an example of a sensitized addition photoreaction.
Scheme 21
Scheme 21
7. Supplemental Reading
Barltrop, J.A., Coyle, J.D. (1975) Excited states in organic chemistry, London; New York: Wiley, 376 p.
Klessinger, M., Michl J. (1995) Excited states and photochemistry of organic molecules, New York: Wiley-VCH Publishers, 538 p.
Michl, J., Bonacic-Koutecky, V. (1990) Electronic aspects of organic photochemistry, New York: Wiley, 475 p.
Turro, N.J. (1991) Modern Molecular Photochemistry, Sausalito: University Science, 628 p.
Wayne, C.E., Wayne, R.P. (1996) Photochemistry, Oxford: Oxford University Press, 96 p.
|
8cd1fc87fef9ed4f | Mathematical analysis
From Wikipedia, the free encyclopedia
A strange attractor arising from a differential equation. Differential equations are an important area of mathematical analysis with many applications to science and engineering.
Mathematical analysis is a branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions.[1][2]
These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
Mathematical analysis formally developed in the 17th century during the Scientific Revolution,[3] but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy.[4] Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids.[5] The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century.[6] In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle.[7] Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century.[8] The Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century.[9]
In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series and the Taylor series, of functions such as sine, cosine, tangent and arctangent.[10] Alongside his development of the Taylor series of the trigonometric functions, he also estimated the magnitude of the error terms created by truncating these series and gave a rational approximation of an infinite series. His followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century.
The modern foundations of mathematical analysis were established in 17th century Europe.[3] Descartes and Fermat independently developed analytic geometry, and a few decades later Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, ordinary and partial differential equations, Fourier analysis, and generating functions. During this period, calculus techniques were applied to approximate discrete problems by continuous ones.
In the 18th century, Euler introduced the notion of mathematical function.[11] Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816,[12] but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis.
In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the "epsilon-delta" definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions.
Also, "monsters" (nowhere continuous functions, continuous but nowhere differentiable functions, space-filling curves) began to be investigated. In this context, Jordan developed his theory of measure, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem. In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue solved the problem of measure, and Hilbert introduced Hilbert spaces to solve integral equations. The idea of normed vector space was in the air, and in the 1920s Banach created functional analysis.
Important concepts
Metric spaces
Main article: Metric space
In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined.
Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance).
Formally, a metric space is an ordered pair (M, d), where M is a set and d is a metric on M, i.e., a function
d : M × M → R
such that for any x, y, z ∈ M, the following holds:
1. d(x, y) = 0 if and only if x = y (identity of indiscernibles),
2. d(x, y) = d(y, x) (symmetry), and
3. d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
By taking the third property and letting z = x, it can be shown that d(x, y) ≥ 0 (non-negativity).
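For completeness, the omitted one-line argument (set z = x in the triangle inequality and use properties 1 and 2):

```latex
0 = d(x, x) \le d(x, y) + d(y, x) = 2\, d(x, y)
\quad\Longrightarrow\quad d(x, y) \ge 0 .
```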
Sequences and limits
Main article: Sequence
A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Most precisely, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers.
One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence (a_n) (with n running from 1 to infinity understood), the distance between a_n and x approaches 0 as n → ∞, denoted lim_{n→∞} a_n = x.
Main branches
Real analysis
Main article: Real analysis
Real analysis (traditionally, the theory of functions of a real variable) is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable.[13][14] In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.
Complex analysis
Main article: Complex analysis
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers.[15] It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering, electrical engineering, and particularly, quantum field theory.
Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.
Functional analysis
Main article: Functional analysis
Differential equations
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders.[18][19][20] Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines.
Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space and/or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly.
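As a minimal sketch of the classical-mechanics example above (assuming SciPy is available; the mass, spring constant, and time grid are arbitrary illustrative choices), the equation of motion of a mass on a spring, m x'' = -k x, can be integrated numerically and compared against its explicit solution x(t) = x0 cos(ωt):

```python
# Equation of motion of a mass on a spring, m*x'' = -k*x, integrated
# numerically and compared with the known explicit solution.
import numpy as np
from scipy.integrate import solve_ivp

m, k, x0 = 1.0, 4.0, 1.0              # illustrative values
omega = np.sqrt(k / m)

def rhs(t, y):                        # y = (position, velocity)
    x, v = y
    return [v, -k / m * x]

t = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (t[0], t[-1]), [x0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

explicit = x0 * np.cos(omega * t)     # the explicit (closed-form) solution
print("max deviation from explicit solution:", np.max(np.abs(sol.y[0] - explicit)))
```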
Measure theory
Main article: Measure (mathematics)
A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size.[21] In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space R^n. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1.
Numerical analysis
Main article: Numerical analysis
Other topics in mathematical analysis
Techniques from analysis are also found in other areas such as:
Physical sciences
The vast majority of classical mechanics, relativity, and quantum mechanics is based on applied analysis, and differential equations in particular. Examples of important differential equations include Newton's second law, the Schrödinger equation, and the Einstein field equations.
Functional analysis is also a major factor in quantum mechanics.
Signal processing
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection and/or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.[24]
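A toy version of the transform-manipulate-invert pipeline just described, using NumPy's FFT; the synthetic two-tone signal and the 50 Hz cutoff are made-up illustrative choices.

```python
# Remove the high-frequency component of a compound waveform by
# Fourier-transforming, zeroing the unwanted band, and inverting.
import numpy as np

fs = 1000                                    # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)               # forward transform
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
spectrum[freqs > 50] = 0.0                   # simple manipulation: low-pass at 50 Hz
cleaned = np.fft.irfft(spectrum, n=len(signal))  # inverse transform

# 'cleaned' is now essentially the 5 Hz component alone.
print("residual:", np.max(np.abs(cleaned - np.sin(2 * np.pi * 5 * t))))
```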
Other areas of mathematics
Techniques from analysis are used in many areas of mathematics, including:
See also
1. ^ Edwin Hewitt and Karl Stromberg, "Real and Abstract Analysis", Springer-Verlag, 1965
2. ^ Stillwell, John Colin. "analysis | mathematics". Encyclopedia Britannica. Retrieved 2015-07-31.
3. ^ a b Jahnke, Hans Niels (2003). A History of Analysis. American Mathematical Society. p. 7. ISBN 978-0-8218-2623-2.
4. ^ Stillwell (2004). "Infinite Series". Mathematics and its History (2nd ed.). Springer Science + Business Media Inc. p. 170. ISBN 0-387-95336-1. Infinite series were present in Greek mathematics, [...] There is no question that Zeno's paradox of the dichotomy (Section 4.1), for example, concerns the decomposition of the number 1 into the infinite series 1/2 + 1/2² + 1/2³ + 1/2⁴ + ... and that Archimedes found the area of the parabolic segment (Section 4.4) essentially by summing the infinite series 1 + 1/4 + 1/4² + 1/4³ + ... = 4/3. Both these examples are special cases of the result we express as summation of a geometric series
5. ^ (Smith, 1958)
6. ^ Pinto, J. Sousa (2004). Infinitesimal Methods of Mathematical Analysis. Horwood Publishing. p. 8. ISBN 978-1-898563-99-0.
7. ^ Dun, Liu; Fan, Dainian; Cohen, Robert Sonné (1966). "A comparison of Archimedes' and Liu Hui's studies of circles". Chinese Studies in the History and Philosophy of Science and Technology. 130. Springer: 279. ISBN 0-7923-3463-9.
8. ^ Zill, Dennis G.; Wright, Scott; Wright, Warren S. (2009). Calculus: Early Transcendentals (3rd ed.). Jones & Bartlett Learning. p. xxvii. ISBN 0-7637-5995-3.
9. ^ Seal, Sir Brajendranath (1915), The positive sciences of the ancient Hindus, Longmans, Green and co.
10. ^ C. T. Rajagopal and M. S. Rangachari (June 1978). "On an untapped source of medieval Keralese Mathematics". Archive for History of Exact Sciences. 18 (2): 89–102. doi:10.1007/BF00348142.
11. ^ Dunham, William (1999). Euler: The Master of Us All. The Mathematical Association of America. p. 17.
12. ^ *Cooke, Roger (1997). "Beyond the Calculus". The History of Mathematics: A Brief Course. Wiley-Interscience. p. 379. ISBN 0-471-18082-3. Real analysis began its growth as an independent subject with the introduction of the modern definition of continuity in 1816 by the Czech mathematician Bernard Bolzano (1781–1848)
13. ^ Rudin, Walter. Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). McGraw–Hill. ISBN 978-0-07-054235-8.
14. ^ Abbott, Stephen (2001). Understanding Analysis. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 0-387-95060-5.
15. ^ Ahlfors, L. (1979). Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 0-07-000657-1.
16. ^ Rudin, W. (1991). Functional Analysis. McGraw-Hill Science. ISBN 0-07-054236-8.
17. ^ Conway, J. B. (1994). A Course in Functional Analysis (2nd ed.). Springer-Verlag. ISBN 0-387-97245-5.
18. ^ E. L. Ince, Ordinary Differential Equations, Dover Publications, 1958, ISBN 0-486-60349-0
19. ^ Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
21. ^ Terence Tao, 2011. An Introduction to Measure Theory. American Mathematical Society.
22. ^ Hildebrand, F. B. (1974). Introduction to Numerical Analysis (2nd ed.). McGraw-Hill. ISBN 0-07-028761-9.
24. ^ Rabiner, L. R.; Gold, B. (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0-13-914101-4.
|
15ecc27e1e218ae4 | Listen & Subscribe
Get The Latest Finding Genius Podcast News Delivered Right To Your Inbox
Dr. Robin Smith, Lecturer in Physics at Sheffield Hallam University, UK, delivers an insightful overview of his work in nuclear physics research topics and experiments in nuclear physics.
Dr. Smith earned his Ph.D. in nuclear physics at the University of Birmingham under the guidance of Dr. C. Wheldon and Prof. M. Freer. He is a distinguished lecturer in physics and specializes in multiple fields, including the following: nuclear data, nuclear structure, nuclear astrophysics, radiation detection, atomic nuclei and more.
Dr. Smith talks about nuclear physics and general physics and how he came to his areas of specialty. As he explains, he undertook projects in his senior year at university that involved smashing nuclei, which really got his interest moving in the direction of nuclear physics. Dr. Smith explains why he studies the atomic nucleus in detail, discussing the building blocks—atoms, and historical perspectives on the atom. He explains the atom’s structure and the density of the atomic nucleus, citing examples for comparison. Dr. Smith goes on to explain that in nature we have four fundamental forces that govern all matter within the universe—gravity, the electromagnetic force, the weak force, and the strong force. Dr. Smith details how these forces work, discussing gravity and its effects and the binding structures.
The nuclear physics expert discusses quantum mechanics and the Schrödinger equation, the linear partial differential equation that describes the wave function, or state function, of a quantum-mechanical system. Continuing, Dr. Smith discusses stars in our galaxy, the heavy elements, and the forces that exist, detailing how his research relates. He discusses carbon and excited states within nuclei and some of the theories that exist regarding molecules and molecular physics in general.
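For reference, the time-dependent Schrödinger equation mentioned above can be written as follows (the second equality assumes, purely for illustration, a single non-relativistic particle in a potential V):

```latex
i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t)
  = \hat{H}\, \Psi(\mathbf{r}, t)
  = \left[ -\frac{\hbar^{2}}{2m} \nabla^{2} + V(\mathbf{r}, t) \right] \Psi(\mathbf{r}, t),
```

where Ψ is the wave function (state function) and Ĥ is the Hamiltonian operator of the system.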
As Dr. Smith extends his discussion, he explains some of the methods they use to gather their scientific data. He explains how they study energies and what the data reveals in regard to decay processes. Regarding his collision research, Dr. Smith states that decay behavior, specifically that of carbon-12, the most common of the natural carbon isotopes, does appear to change depending on what projectile is being fired at the carbon-12. In essence, the environment does appear to affect the outcome of the reaction.
Wrapping up, Dr. Smith discusses the work and theories of Sir Fred Hoyle, the English astronomer who famously formulated the theory of stellar nucleosynthesis.
In this podcast:
• What exactly is quantum mechanics?
• An overview of the atomic nucleus
• What are the forces that exist in our galaxy?
Latest Podcasts
|
dff9332c65ceeed5 | Wednesday, November 6, 2013
old posts from
This is a collection of old blog posts, going back to 2006. For some strange reason I thought it would be a good idea to have two blogs. They have been migrated here from
a philosophy of science primer - part III
• part I: some history of science and logical empiricism,
• part II: problems of logical empiricism, critical rationalism and its problems.
After the unsuccessful attempts to found science on common sense notions as seen in the programs of logical empiricism and critical rationalism, people looked for new ideas and explanations.
the thinker
The Kuhnian View
Thomas Kuhn’s enormously influential work on the history of science is called The Structure of Scientific Revolutions. He challenged the idea that science is an incremental process accumulating more and more knowledge. Instead, he identified the following phases in the evolution of science:
• prehistory: many schools of thought coexist and controversies are abundant,
• history proper: one group of scientists establishes a new solution to an existing problem which opens the doors to further inquiry; a so called paradigm emerges,
• paradigm based science: unity in the scientific community on what the fundamental questions and central methods are; generally a problem solving process within the boundaries of unchallenged rules (analogy to solving a Sudoku),
• crisis: more and more anomalies and boundaries appear; questioning of established rules,
• revolution: a new theory and weltbild takes over solving the anomalies and a new paradigm is born.
Another central concept is incommensurability, meaning that proponents of different paradigms cannot understand the other’s point of view because they have diverging ideas and views of the world. In other words, every rule is part of a paradigm and there exist no trans-paradigmatic rules.
This implies that such revolutions are not rational processes governed by insights and reason. In the words of Max Planck (the founder of quantum mechanics; from his autobiography):
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
Kuhn gives additional blows to a commonsensical foundation of science with the help of Norwood Hanson and Willard Van Orman Quine:
• every human observation of reality contains an a priori theoretical framework,
• underdetermination of belief by evidence: any evidence collected for a specific claim is logically consistent with the falsity of the claim,
• every experiment is based on auxiliary hypotheses (initial conditions, proper functioning of apparatus, experimental setup,…).
People slowly started to realize that there are serious consequences in Kuhn’s ideas and the problems faced by the logical empiricists and critical rationalists in establishing a sound logical and empirical foundation of science:
• postmodernism,
• constructivism, or the sociology of science,
• relativism.
Modernism describes the development of Western industrialized society since the beginning of the 19th Century. A central idea was that there exist objective true beliefs and that progression is always linear.
Postmodernism replaces these notions with the belief that many different opinions and forms can coexist and all find acceptance. Core ideas are diversity, differences and intermingling. In the 1970s it is seen to enter scientific and cultural thinking.
Postmodernism has taken a bad rap from scientists after the so-called Sokal affair, in which physicist Alan Sokal got a nonsensical paper published in a journal of postmodern cultural studies by flattering the editors' ideology with nonsense that sounds good.
Postmodernism has been associated with scepticism and solipsism, as well as with relativism and constructivism.
Notable scientists identifiable as postmodernists are Thomas Kuhn, David Bohm and many figures in the 20th-century philosophy of mathematics, as well as Paul Feyerabend, an influential philosopher of science.
To quote the Nobel laureate Steven Weinberg on Kuhnian revolutions:
If the transition from one paradigm to another cannot be judged by any external standard, then perhaps it is culture rather than nature that dictates the content of scientific theories.
Constructivism excludes objectivism and rationality by postulating that beliefs are always subject to a person’s cultural and theological embedding and inherent idiosyncrasies. It also goes under the label of the sociology of science.
In the words of Paul Boghossian (in his book Fear of Knowledge: Against Relativism and Constructivism):
Constructivism about rational explanation: it is never possible to explain why we believe what we believe solely on the basis of our exposure to the relevant evidence; our contingent needs and interests must also be invoked.
The proponents of constructivism go further:
[…] all beliefs are on a par with one another with respect to the causes of their credibility. It is not that all beliefs are equally true or equally false, but that regardless of truth and falsity the fact of their credibility is to be seen as equally problematic.
From Barry Barnes’ and David Bloor’s Relativism, Rationalism and the Sociology of Knowledge.
In its radical version, constructivism fully abandons objectivism:
• Objectivity is the illusion that observations are made without an observer (from the physicist Heinz von Foerster; my translation)
• Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning (from the philosopher Ernst von Glasersfeld; my translation)
In addition, radical constructivism proposes that perception never yields an image of reality but is always a construction of sensory input and the memory capacity of an individual. An analogy would be the submarine captain who has to rely on instruments to indirectly gain knowledge from the outside world. Radical constructivists are motivated by modern insights gained by neurobiology.
Historically, Immanuel Kant can be understood as the founder of constructivism. On a side note, the bishop George Berkeley went even so far as to deny the existence of an external material reality altogether. Only ideas and thought are real.
Another consequence of the foundations of science lacking commonsensical elements and the ideas of constructivism can be seen in the notion of relativism. If rationality is a function of our contingent and pragmatic reasons, then it can be rational for a group A to believe P, while at the same time it is rational for group B to believe the negation of P.
Although, as a philosophical idea, relativism goes back to the Greek Protagoras, its implications are unsettling for the Western mind: anything goes (as Paul Feyerabend characterizes his idea of scientific anarchy). If there is no objective truth, no absolute values, nothing universal, then a great many of humanity's centuries-old concepts and beliefs are in danger.
It should, however, also be mentioned that relativism is prevalent in Eastern thought systems and is found, for example, in many Indian religions. In a similar vein, pantheism and holism are notions which are much more compatible with Eastern thought systems than Western ones.
Furthermore, John Stuart Mill’s arguments for liberalism appear to also work well as arguments for relativism:
• fallibility of people’s opinions,
• opinions that are thought to be wrong can contain partial truths,
• accepted views, if not challenged, can lead to dogmas,
• the significance and meaning of accepted opinions can be lost in time.
From his book On Liberty.
But could relativism be possibly true? Consider the following hints:
• Epistemological
• problems with perception: synaesthesia, altered states of consciousness (spontaneous, mystical experiences and drug induced),
• psychopathology describes a frightening amount of defects in the perception of reality and one's self,
• people suffering from psychosis or schizophrenia can experience a radically different reality,
• free will and neuroscience,
• synthetic happiness,
• cognitive biases.
• Ontological
• nonlocal foundation of quantum reality: entanglement, delayed choice experiment,
• illogical foundation of reality: wave-particle duality, superpositions, uncertainty, intrinsic probabilistic nature, time dilation (special relativity), observer/measurement problem in quantum theory,
• discreteness of reality: quanta of energy and matter, constant speed of light,
• nature of time: not present in fundamental theories of quantum gravity, symmetrical,
• arrow of time: why was the initial state of the universe very low in entropy?
• emergence, self-organization and structure formation.
In essence, perception doesn’t necessarily say much about the world around us. Consciousness can fabricate reality. This makes it hard to be rational. Reality is a really bizarre place. Objectivity doesn’t seem to play a big role.
And what about the human mind? Is this at least a paradox-free realm? Unfortunately not. Even what appears to be a consistent and logical formal thought system, i.e., mathematics, can be plagued by fundamental problems. Kurt Gödel proved that in every consistent system of mathematical axioms rich enough to express the elementary arithmetic of whole numbers, there exist statements which can neither be proved nor disproved within the system. So logical axiomatic systems are incomplete.
As an example, Bertrand Russell encountered the following paradox: let R be the set of all sets that do not contain themselves as members. Is R an element of itself or not?
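In symbols, each of the two possible answers implies its own negation:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R .
```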
If you really accede to the idea that reality and the perception of reality by the human mind are very problematic concepts, then the next puzzles are:
• why has science been so fantastically successful at describing reality?
• why is science producing amazing technology at breakneck speed?
• why is our macroscopic, classical level of reality so well behaved and appears so normal although it is based on quantum weirdness?
• are all beliefs justified given the believer's biography and brain chemistry?
a philosophy of science primer - part II
Continued from part I
The Problems With Logical Empiricism
The programme proposed by the logical empiricists, namely that science is built of logical statements resting on an empirical foundation, faces central difficulties. To summarize:
• it turns out that it is not possible to construct pure formal concepts that solely reflect empirical facts without anticipating a theoretical framework,
• how does one link theoretical concepts (electrons, utility functions in economics, inflationary cosmology, Higgs bosons,…) to experiential notions?
• how to distinguish science from pseudo-science?
Now this may appear a little technical and not very interesting or fundamental to people outside the field of the philosophy of science, but it gets worse:
• inductive reasoning is invalid from a formal logical point of view!
• causality defies standard logic!
This is big news. So, just because I have witnessed the sun going up every day of my life (single observations), I cannot say it will go up tomorrow (general law). Observation alone does not suffice; you need a theory. But the whole idea here is that the theory should come from observation. This leads to the dead end of circular reasoning.
But surely causality is undisputable? Well, apart from the problems coming from logic itself, there are extreme examples to be found in modern physics which undermine the common-sense notion of a causal reality: quantum nonlocality and the delayed choice experiment.
But challenges often inspire people, so the story continues…
Critical Rationalism
OK, so the logical empiricists faced problems. Can't these be fixed? The critical rationalists believed so. A crucial influence came from René Descartes' and Gottfried Leibniz's rationalism: knowledge can have aspects that do not stem from experience, i.e., there is an immanent reality to the mind.
The term critical refers to the fact that insights gained by pure thought cannot be strictly justified but only critically tested against experience. Ultimate justifications lead to the so-called Münchhausen trilemma, i.e., one of the following:
• an infinite regress of justifications,
• circular reasoning,
• dogmatic termination of reasoning.
The most influential proponent of critical rationalism was Karl Popper. His central claims were in essence
• use deductive reasoning instead of induction,
• theories can never be verified, only falsified.
Although there are similarities with logical empiricism (empirical basis, science as a set of theoretical constructs), the idea is that theories are simply invented by the mind and are temporarily accepted until they can be falsified. The progression of science is hence seen as an evolutionary process rather than a linear accumulation of knowledge.
Sounds good, so what went wrong with this ansatz?
The Problems With Critical Rationalism
In a nutshell:
• basic formal concepts cannot be derived from experience without induction; how can they be shown to be true?
• deduction turns out to be just as tricky as induction,
• what parts of a theory need to be discarded once it is falsified?
To see where deduction breaks down, consider a nice story by Lewis Carroll (the mathematician who wrote the Alice in Wonderland stories): What the Tortoise Said to Achilles.
If deduction goes down the drain as well, not much is left to ground science on notions of logic, rationality and objectivity, which is rather unexpected for an enterprise that itself works amazingly well employing just these concepts.
Explanations in Science
And it gets worse. Inquiries into the nature of scientific explanation reveal further problems. The standard account is based on Carl Hempel's and Paul Oppenheim's formalization of scientific inquiry in natural language. Two basic schemes are identified: deductive-nomological and inductive-statistical explanations. The idea is to show that what is being explained (the explanandum) is to be expected on the grounds of these two types of explanations.
The first tries to explain things deductively in terms of regularities and exact laws (nomological). The second uses statistical hypotheses and explains individual observations inductively. Albeit very formal, this inquiry into scientific inquiry is very straightforward and commonsensical.
Again, the programme fails:
• can’t explain singular causal events,
• asymmetric (a change in the air pressure explains the readings on a barometer, however, the barometer doesn’t explain why the air pressure changed),
• many explanations are irrelevant,
• as seen before, inductive and deductive logic is controversial,
• how to employ probability theory in the explanation?
So what next? What are the consequences of these unexpected and spectacular failings of the simplest premises one would wish science to be grounded on (logic, empiricism, causality, common sense, rationality, …)?
The discussion is ongoing and isn't expected to be resolved soon. See part III.
a philosophy of science primer - part I
Naively one would expect science to adhere to two basic notions:
• common sense, i.e., rationalism,
• observation and experiments, i.e., empiricism.
Interestingly, both concepts turn out to be very problematic if applied to the question of what knowledge is and how it is acquired. In essence, they cannot be seen as a foundation for science.
But first a little history of science…
Classical Antiquity
The Greek philosopher Aristotle was one of the first thinkers to introduce logic as a means of reasoning. His empirical method was driven by gaining general insights from isolated observations. He had a huge influence on the thinking within the Islamic and Jewish traditions next to shaping Western philosophy and inspiring thinking in the physical sciences.
Modern Era
Nearly two thousand years later, not much changed. Francis Bacon (the philosopher, not the painter) made modifications to Aristotle’s ideas, introducing the so called scientific method where inductive reasoning plays an important role. He paves the way for a modern understanding of scientific inquiry.
Approximately at the same time, Robert Boyle was instrumental in establishing experiments as the cornerstone of physical sciences.
Logical Empiricism
So far so good. By the early 20th Century the notion that science is based on experience (empiricism) and logic, and where knowledge is intersubjectively testable, has had a long history.
The philosophical school of logical empiricism (or logical positivism) tries to formalise these ideas. Notable proponents were Ernst Mach, Ludwig Wittgenstein, Bertrand Russell, Rudolf Carnap, Hans Reichenbach, Otto Neurath. Some main influences were:
• David Hume’s and John Locke’s empiricism: all knowledge originates from observation, nothing can exist in the mind which wasn’t before in the senses,
• Auguste Comte's and John Stuart Mill's positivism: there exists no knowledge outside of science.
In this paradigm (see Thomas Kuhn a little later) science is viewed as a building comprised of logical terms based on an empirical foundation. A theory is understood as having the following structure: observation -> empirical concepts -> formal notions -> abstract law. Basically a sequence of ever higher abstraction.
This notion of unveiling laws of nature by starting with individual observations is called induction (the other way round, starting with abstract laws and ending with a tangible factual description is called deduction, see further along).
And here the problems start to emerge. See part II
Stochastic Processes and the History of Science: From Planck to Einstein
How are the notions of randomness, i.e., stochastic processes, linked to theories in physics and what have they got to do with options pricing in economics?
How did the prevailing world view change from 1900 to 1905?
What connects the mathematicians Bachelier, Markov, Kolmogorov, Ito to the physicists Langevin, Fokker, Planck, Einstein and the economists Black, Scholes, Merton?
The Setting
• Science up to 1900 was in essence the study of solutions of differential equations (Newton’s heritage);
• Was very successful, e.g., Maxwell’s equations: four differential equations describing everything about (classical) electromagnetism;
• Prevailing world view:
• Deterministic universe;
• Initial conditions plus the solution of differential equation yield certain prediction of the future.
Three Pillars
By the end of the 20th century, it became clear that there are (at least?) two additional aspects needed for a more complete understanding of reality:
• Inherent randomness: statistical evaluations of sets of outcomes of single observations/experiments;
• Quantum mechanics (Planck 1900; Einstein 1905) contains a fundamental element of randomness;
• In chaos theory (e.g., Mandelbrot 1963) non-linear dynamics leads to a sensitivity to initial conditions which renders even simple differential equations essentially unpredictable;
• Complex systems (e.g., Wolfram 1983), i.e., self-organization and emergent behavior, best understood as outcomes of simple rules.
Stochastic Processes
• Systems which evolve probabilistically in time;
• Described by a time-dependent random variable;
• The probability density function describes the distribution of the measurements at time t;
• Prototype: The Markov process.
For a Markov process, only the present state of the system influences its future evolution: there is no long-term memory. Examples:
• Wiener process or Einstein-Wiener process or Brownian motion:
• Introduced by Bachelier in 1900;
• Continuous (in t and the sample path)
• Increments are independent and drawn from a Gaussian normal distribution;
• Random walk:
• Discrete steps (jumps), continuous in t;
• Is a Wiener process in the limit of the step size going to zero.
To summarize, there are three possible characteristics:
1. Jumps (in sample path);
2. Drift (of the probability density function);
3. Diffusion (widening of the probability density function).
Figure: probability distribution function showing drift and diffusion.
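A minimal sketch of drift and diffusion (the drift rate, diffusion coefficient, and time horizon are arbitrary illustrative values): simulate many sample paths of a random walk with drift and check that the ensemble mean moves linearly in time while the variance also grows linearly.

```python
# Many sample paths of a random walk with drift; for small steps this
# approximates a Wiener process. The ensemble drifts (mean ~ mu*t) and
# diffuses (variance ~ sigma^2 * t).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 10_000, 1_000, 0.01
mu, sigma = 0.5, 1.0                        # drift and diffusion coefficients

steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)            # one row per sample path

t_end = n_steps * dt
print("mean at t=10:", paths[:, -1].mean())   # ~ mu * t_end = 5
print("var  at t=10:", paths[:, -1].var())    # ~ sigma^2 * t_end = 10
```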
But how to deal with stochastic processes?
The Micro View
• Einstein presented a theory of Brownian motion in 1905;
• New paradigm: stochastic modeling of natural phenomena; statistics as intrinsic part of the time evolution of system;
• Mean-square displacement of Brownian particle proportional to time;
• Equation for the Brownian particle similar to a diffusion (differential) equation.
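The last two bullets can be stated compactly in one spatial dimension, with D the diffusion coefficient and p(x, t) the probability density of the particle position:

```latex
\langle x^{2}(t) \rangle = 2 D t,
\qquad
\frac{\partial p(x, t)}{\partial t} = D \, \frac{\partial^{2} p(x, t)}{\partial x^{2}} .
```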
• Langevin presented a new derivation of Einstein's results in 1908;
• First stochastic differential equation, i.e., a differential equation of a “rapidly and irregularly fluctuating random force” (today called a random variable)
• Solutions of differential equation are random functions.
However, there was no formal mathematical grounding until 1942, when Ito developed stochastic calculus:
• Langevin’s equations interpreted as Ito stochastic differential equations using Ito integrals;
• Ito integral defined to deal with non-differentiable sample paths of random functions;
• Ito lemma (generalized integration rule) used to solve stochastic differential equations.
• The Markov process is a solution to a simple stochastic differential equation;
• The celebrated Black-Scholes option pricing formula is a stochastic differential equation employing Brownian motion.
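As a sketch of how such stochastic differential equations are handled numerically, here is an Euler-Maruyama integration of the geometric Brownian motion dS = μS dt + σS dW that underlies the Black-Scholes model; the scheme and the parameter values are illustrative choices.

```python
# Euler-Maruyama integration of geometric Brownian motion,
#   dS = mu*S dt + sigma*S dW,
# the process underlying the Black-Scholes model.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, S0 = 0.05, 0.2, 100.0           # illustrative drift, volatility, start value
T, n_steps, n_paths = 1.0, 252, 50_000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)   # Brownian increments
    S = S + mu * S * dt + sigma * S * dW              # Euler-Maruyama update

# Sanity check: for geometric Brownian motion E[S_T] = S0 * exp(mu*T).
print("simulated mean:", S.mean())
print("exact mean:    ", S0 * np.exp(mu * T))
```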
The Fokker-Planck Equation: Moving To The Macro View
• The Langevin equation describes the evolution of the position of a single “stochastic particle”;
• The Fokker-Planck equation describes the behavior of a large population of "stochastic particles";
• Formally: the Fokker-Planck equation gives the time evolution of the probability density function of the system;
• Results can be derived more directly using the Fokker-Planck equation than using the corresponding stochastic differential equation;
• The theory of Markov processes can be developed from this macro point of view.
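In one dimension the correspondence between the two views can be written down explicitly: an Ito (Langevin-type) stochastic differential equation with drift a(x) and noise amplitude b(x) has an associated Fokker-Planck equation for the probability density p(x, t),

```latex
dX_t = a(X_t)\, dt + b(X_t)\, dW_t
\quad\Longleftrightarrow\quad
\frac{\partial p(x, t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[ a(x)\, p(x, t) \bigr]
  + \frac{1}{2} \frac{\partial^{2}}{\partial x^{2}}\bigl[ b(x)^{2}\, p(x, t) \bigr].
```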
The Historical Context
• Bachelier developed a theory of Brownian motion (the Einstein-Wiener process) in 1900 (five years before Einstein, and long before Wiener);
• Was the first person to use a stochastic process to model financial systems;
• Essentially his contribution was forgotten until the late 1950s;
• Black, Scholes and Merton’s publication in 1973 finally gave Brownian motion the break-through in finance.
• Planck: founder of quantum theory;
• 1900 theory of black-body radiation;
• Central assumption: electromagnetic energy is quantized, E = h v;
• In 1914 Fokker derives an equation on Brownian motion which Planck proves;
• Applies the Fokker-Planck equation as a quantum mechanical equation, which turns out to be wrong;
• In 1931 Kolmogorov presented two fundamental equations on Markov processes;
• It was later realized that one of them was actually equivalent to the Fokker-Planck equation.
Einstein's 1905 "Annus Mirabilis" publications brought fundamental paradigm shifts in the understanding of reality:
• Photoelectric effect:
• Explained by giving Planck’s (theoretical) notion of energy quanta a physical reality (photons),
• Further establishing quantum theory,
• Winning him the Nobel Prize;
• Brownian motion:
• First stochastic modeling of natural phenomena,
• The experimental verification of the theory established the existence of atoms, which had been heavily debated at the time,
• Einstein’s most frequently cited paper, in the fields of biology, chemistry, earth and environmental sciences, life sciences, engineering;
• Special theory of relativity: the relative speeds of the observers' reference frames determine the passage of time;
• Equivalence of energy and mass (follows from special relativity): E = m c^2.
Einstein was working at the Patent Office in Bern at the time and submitted his Ph.D. to the University of Zurich in July 1905.
Later Work:
• 1915: general theory of relativity, explaining gravity in terms of the geometry (curvature) of space-time;
• Planck also made contributions to general relativity;
• Although having helped in founding quantum mechanics, he fundamentally opposed its probabilistic implications: “God does not throw dice”;
• Dreams of a unified field theory:
• Spent his last 30 years or so trying (unsuccessfully) to extend the general theory of relativity to unite it with electromagnetism;
• Kaluza and Klein elegantly managed to do this in 1921 by developing general relativity in five space-time dimensions;
• Today there is still no empirically validated theory able to explain gravity and the (quantum) Standard Model of particle physics, despite intense theoretical research (string/M-theory, loop quantum gravity);
• In fact, one of the main goals of the LHC at CERN (officially operational on the 21st of October 2008) is to find hints of such a unified theory (supersymmetric particles, higher dimensions of space).
laws of nature
What are Laws of Nature?
• Regularities/structures in a highly complex universe
• Allow for predictions
• Dependent on only a small set of conditions (i.e., independent of very many conditions which could possibly have an effect)
…but why are there laws of nature and how can these laws be discovered and understood by the human mind?
No One Knows!
• G.W. von Leibniz in 1714 (Principes de la nature et de la grâce):
• Why is there something rather than nothing? For nothingness is simpler and easier than anything
• E. Wigner, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences“, 1960:
• […] the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and […] there is no rational explanation for it
• […] it is not at all natural that “laws of nature” exist, much less that man is able to discover them
• […] the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them
• […] fundamentally, we do not know why our theories work so well
In a Nutshell
• We happen to live in a structured, self-organizing, and fine-tuned universe that allows the emergence of sentient beings (anthropic principle)
• The human mind is capable of devising formal thought systems (mathematics)
• Mathematical models are able to capture and represent the workings of the universe
See also this post: in a nutshell.
The Fundamental Level of Reality: Physics
Mathematical models of reality are independent of their formal representation: invariance and symmetry
• Classical mechanics: invariance of the equations under transformations (e.g., time => conservation of energy)
• Gravitation (general relativity): geometry and the independence of the coordinate system (covariance)
• The other three forces of nature (unified in quantum field theory): mathematics of symmetry and special kind of invariance
See also these posts: fundamental, invariant thinking.
Towards Complexity
• Physics was extremely successful in describing the inanimate world in the last 300 years or so
• But what about complex systems comprised of many interacting entities, e.g., the life and social sciences?
• "The rest is chemistry" (C. D. Anderson in 1932); echoing the success of a reductionist approach to understanding the workings of nature after having discovered the positron
• "At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary […]. Psychology is not applied biology, nor is biology applied chemistry" (P. W. Anderson in 1972); pointing out that knowledge about the constituents of a system doesn't reveal any insights into how the system will behave as a whole; so it is not at all clear how you get from quarks and leptons via DNA to a human brain…
Complex Systems: Simplicity
The Limits of Physics
• Closed-form solutions to analytical expressions are mostly only attainable if non-linear effects (e.g., friction) are ignored
• Not too many interacting entities can be considered (e.g., three body problem)
The Complexity of Simple Rules
• S. Wolfram's cellular automaton rule 110: neither completely random nor completely repetitive (a minimal implementation is sketched after this list)
• "[The] results [that simple rules give rise to complex behavior] were so surprising and dramatic that as I gradually came to understand them, they forced me to change my whole view of science […]"; S. Wolfram reminiscing on his early work on cellular automata in the 80s ("A New Kind of Science", p. 19)
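A minimal implementation of rule 110 (plain Python with NumPy; the width, step count, and single-seed initial condition are arbitrary choices): each cell's next value depends only on itself and its two neighbours, yet the printed pattern is neither periodic nor simply random.

```python
# Elementary cellular automaton rule 110. Each cell's next state depends on
# (left neighbour, itself, right neighbour); the rule number encodes the
# lookup table in binary.
import numpy as np

RULE = 110
# rule_table[p] is the new cell value for neighbourhood pattern p = 4*left + 2*centre + right
rule_table = np.array([(RULE >> p) & 1 for p in range(8)], dtype=np.uint8)

def step(row):
    left = np.roll(row, 1)             # periodic boundary conditions
    right = np.roll(row, -1)
    return rule_table[4 * left + 2 * row + right]

width, steps = 80, 40
row = np.zeros(width, dtype=np.uint8)
row[-1] = 1                            # single seed cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```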
Complex Systems: The Paradigm Shift
• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior
• The shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers (only this bottom-up approach to simulating complex systems has been fruitful; all top-down efforts have failed: try programming swarming behavior, ant foraging, pedestrian/traffic dynamics, … not using simple local interaction rules but with a centralized, hierarchical setup!)
• Understanding the complex system as a network of interactions (graph theory), where the complexity (or structure) of the individual nodes can be ignored
• Challenge: how does the macro behavior emerge from the interaction of the system elements on the micro level?
See also these posts: complex, swarm theory, complex networks.
Laws of Nature Revisited
So are there laws of nature to be found in the life and social sciences?
• Yes: scaling (or power) laws
• Complex, collective phenomena give rise to power laws […] independent of the microscopic details of the phenomenon. These power laws emerge from collective action and transcend individual specificities. As such, they are unforgeable signatures of a collective mechanism; J.P. Bouchaud in “Power-laws in Economy and Finance: Some Ideas from Physics“, 2001
Scaling Laws
Scaling-law relations characterize an immense number of natural patterns (from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences) prominently in the form of
• scaling-law distributions
• scale-free networks
• cumulative relations of stochastic processes
A scaling law, or power law, is a simple polynomial functional relationship
f(x) = a x^k <=> Y = (X/C)^E
Scaling laws
• lack a preferred scale, reflecting their (self-similar) fractal nature
• are usually valid across an enormous dynamic range (sometimes many orders of magnitude)
See also these posts: scaling lawsbenford’s law.
Scaling Laws In FX
• Event counts related to price thresholds
• Price moves related to time thresholds
• Price moves related to price thresholds
• Waiting times related to price thresholds
FX scaling law
Scaling Laws In Biology
So-called allometric laws describe the relationship between two attributes of living organisms as scaling laws:
• The metabolic rate B of a species scales with its mass M as B ~ M^(3/4)
• The heartbeat (or breathing) rate T of a species scales with its mass as T ~ M^(-1/4)
• The lifespan L of a species scales with its mass as L ~ M^(1/4)
• Invariants: all species have the same number of heart beats in their lifespan (roughly one billion)
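The invariant in the last bullet follows directly from combining the two preceding relations: the total number of heartbeats in a lifetime is the heartbeat rate multiplied by the lifespan,

```latex
N_{\text{beats}} \sim T \cdot L \sim M^{-1/4} \cdot M^{1/4} = M^{0},
```

i.e., independent of body mass, which is why the count comes out roughly the same across species.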
allometric law
(Fig. G. West)
G. West (et al.) proposes an explanation of the 1/4 scaling exponents, which follow from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimized by natural selection: organisms effectively function in four spatial dimensions even though they physically exist in three.
• The natural world possesses structure-forming and self-organizing mechanisms leading to consciousness capable of devising formal thought systems which mirror the workings of the natural world
• There are two regimes in the natural world: basic fundamental processes and complex systems comprised of interacting agents
• There are two paradigms: analytical vs. algorithmic (computational)
• There are ‘miracles’ at work:
• the existence of a universe following laws leading to stable emergent features
• the capability of the human mind to devise formal thought systems
• the overlap of mathematics and the workings of nature
• the fact that complexity emerges from simple rules
• There are basic laws of nature to be found in complex systems, e.g., scaling laws
animal intelligence
This is the larger lesson of animal cognition research: It humbles us.
We are not alone in our ability to invent or plan or to contemplate ourselves—or even to plot and lie.
Many scientists believed animals were incapable of any thought. They were simply machines, robots programmed to react to stimuli but lacking the ability to think or feel.
We’re glimpsing intelligence throughout the animal kingdom.
(Photo: Vincent J. Musi, National Geographic)
A dog with a vocabulary of 340 words. A parrot that answers “shape” if asked what is different, and “color” if asked what is the same, when shown two items of different shape and the same color. An octopus with a “distinct personality” that amuses itself by shooting water at plastic-bottle targets (the first reported invertebrate play behavior). Lemurs with calculating abilities. Sheep able to recognize faces (of other sheep and of humans) long term and to discern moods. Crows able to make and use tools (in tests, even out of materials never seen before). Human-dolphin communication via an invented sign language (with a simple grammar). Dolphins able to correctly interpret, on the first attempt, instructions given by a person displayed on a TV screen.
This may only be the tip of the iceberg…
Read the article Animal Minds in National Geographic's March 2008 edition.
Ever think about vegetarianism?
complex networks
The study of complex networks was sparked at the end of the 90s by two seminal papers describing their universal properties:
• small-worlds property [1],
• and scale-free nature [2] (see also this older post: scaling laws).
(Figures: the same network drawn as a weighted network with vertex values (left) and as an unweighted, binary network (right))
Today, networks are ubiquitous: phenomena in the physical world (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological systems (e.g., neural networks, epidemiology, food webs, gene regulation), and social realms (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) are best understood if characterized as networks.
The explosion of this field of research was and is coupled with the increasing availability of
• huge amounts of data pouring in from neurobiology, genomics, ecology, finance, the World Wide Web, …,
• computing power and storage facilities.
The new paradigm states that a complex system is best understood if it is mapped to a network, i.e., the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model the flock's swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.)
Only in the last few years has the attention shifted from this topological level of analysis (a link is either present or not) to incorporating the weights of links, which give their strength relative to each other. Although harder to tackle, such networks are closer to the real-world systems they model.
However, there is still one step missing: the vertices of the network can also be assigned a value, which acts as a proxy for some real-world property that is then coded into the network structure.
The two plots above illustrate the difference when the same network is visualized [3] using weights and values assigned to the vertices (left) or simply plotted as a binary (topological) network (right)…
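A minimal sketch of the three levels of description discussed above - binary links, weighted links, and values attached to the nodes - with invented node names and numbers:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: topological (binary) links, weighted links, and node values.
// The node names and the numbers are made up for illustration only.
public class TinyNetwork {
    // node value (e.g., a firm's size in a shareholder network)
    static Map<String, Double> nodeValue = new HashMap<>();
    // weighted adjacency: edge weight (e.g., an ownership percentage)
    static Map<String, Map<String, Double>> weightedAdj = new HashMap<>();

    static void addNode(String n, double value) {
        nodeValue.put(n, value);
        weightedAdj.put(n, new HashMap<>());
    }

    static void addEdge(String from, String to, double weight) {
        weightedAdj.get(from).put(to, weight);
    }

    public static void main(String[] args) {
        addNode("A", 10.0); addNode("B", 2.5); addNode("C", 7.0);
        addEdge("A", "B", 0.6); addEdge("A", "C", 0.1); addEdge("B", "C", 0.3);

        // Topological view: only whether a link exists or not
        weightedAdj.forEach((from, nbrs) ->
            nbrs.keySet().forEach(to -> System.out.println(from + " -> " + to)));

        // Weighted view: links carry their relative strength
        weightedAdj.forEach((from, nbrs) ->
            nbrs.forEach((to, w) -> System.out.printf("%s -(%.1f)-> %s%n", from, w, to)));

        // Node values on top of the weighted structure
        nodeValue.forEach((n, v) -> System.out.printf("value(%s) = %.1f%n", n, v));
    }
}
```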
[1] Strogatz S. H. and Watts D. J., 1998, Collective Dynamics of ‘Small-World’ Networks, Nature, 393, 440–442.
[2] Albert R. and Barabasi A.-L., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
[3] Cuttlefish Adaptive NetWorkbench and Layout
cool links…
think statistics are boring, irrelevant and hard to understand? well, think again.
two examples of visually displaying important information in an amazingly cool way:
territory size shows the proportion of all people living on less than or equal to US$1 in purchasing power parity a day. displays a large collection of world maps, where territories are re-sized on each map according to the subject of interest. sometimes an image says more than a thousand words…
want to see the global evolution of life expectancy vs. income per capita from 1975 to 2003? and additionally display the co2 emission per capita? choose indicators from areas as diverse as internet users per 1′000 people to contraceptive use amongst adult women and watch the animation.
gapminder is a fantastic tool that really makes you think…
work in progress…
Some of the stuff I do all week…
Complex Networks
Visualizing a shareholder network:
The underlying network visualization framework is JUNG, with the Cuttlefish adaptive networkbench and layout algorithm (coming soon). The GUI uses Swing.
Stochastic Time Series
Scaling laws in financial time series:
A Java framework allowing the computation and visualization of statistical properties. The GUI is programmed using SWT.
plugin of the month
The Firefox add-on Gspace allows you to use Gmail as a file server:
This extension allows you to use your Gmail Space (4.1 GB and growing) for file storage. It acts as an online drive, so you can upload files from your hard drive and access them from every Internet-capable system. The interface will make your Gmail account look like an FTP host.
tech dependence…
Because technological advancement is mostly quite gradual, one hardly notices it creeping into one's life. Only if these high-tech commodities were instantly removed would one realize how dependent one has become.
A random list of ‘nonphysical’ things I wouldn’t want to live without anymore:
• everything you ever wanted to know — and much more
• (e.g., news, scholar, maps, webmaster tools, …): basically the internet;-)
• Web 2.0 communities (e.g., …): your virtual social network
• towards the babel fish
• recommendations from the fat tail of the probability distribution
• Web browsers (e.g., Firefox): your window to the world
• Version control systems (e.g., Subversion): get organized
• CMS (e.g., TYPO3): disentangle content from design on your web page and more
• LaTeX typesetting software (btw, this is not a fetish;-): the only sensible and aesthetic way to write scientific documents
• Wikies: the wonderful world of unstructured collaboration
• Blogs: get it out there
• Java programming language: truly platform independent and with nice GUI toolkits (SWT, Swing, GWT); never want to go back to C++ (and don’t even mention C# or .net)
• Eclipse IDE: how much fun can you have while programming?
• MySQL: your very own relational database (the next level: db4o)
• PHP: ok, Ruby is perhaps cooler, but PHP is so easy to work with (e.g., integrating MySQL and web stuff)
• Dynamic DNS (e.g., …): let your home computer be a node of the internet
• Web server (e.g., Apache 2): open the gateway
• CSS: ok, if we have to go with HTML, this helps a lot
• VoIP (e.g., Skype): use your bandwidth
• P2P (e.g., BitTorrent): pool your network
• Video and audio compression (e.g., MPEG, MP3, AAC, …): information theory at its best
• Scientific computing (R, Octave, gnuplot, …): let your computer do the work
• Open source licenses (Creative Commons, Apache, GNU GPL, …): the philosophy!
• Object-oriented programming paradigm: think design patterns
• Rich Text editors: online WYSIWYG editing, no messing around with HTML tags
• SSH network protocol: secure and easy networking
• Linux Shell-Programming (”grep”, “sed”, “awk”, “xargs”, pipes, …): old school Unix from the 70s
• E-mail (e.g., IMAP): oops, nearly forgot that one (which reminds me of something i really, really could do without: spam)
• Graylisting: reduce spam
• Debian (e.g., Kubuntu): the basis for it all
• apt-get package management system: a universe of software at your fingertips
• Compiz Fusion window manager: just to be cool…
It truly makes one wonder, how all this cool stuff can come for free!!!
climate change 2007
Confused about the climate? Not sure what’s happening? Exaggerated fears or impending cataclysm?
A good place to start is a publication by Swiss Re. It is done in a straightforward, down-to-earth, no-bullshit and sane manner. The source to the whole document is given at the bottom.
Executive Summary
The Earth is getting warmer, and it is a widely held view in the scientific community that much of the recent warming is due to human activity. As the Earth warms, the net effect of unabated climate change will ultimately lower incomes and reduce public welfare. Because carbon dioxide (CO₂) emissions build up slowly, mitigation costs rise as time passes and the level of CO₂ in the atmosphere increases. As these costs rise, so too do the benefits of reducing CO₂ emissions, eventually yielding net positive returns. Given how CO₂ builds up and remains in the atmosphere, early mitigation efforts are highly likely to put the global economy on a path to achieving net positive benefits sooner rather than later. Hence, the time to act to reduce these emissions is now.
The climate is what economists call a “public good”: its benefits are available to everyone and one person’s enjoyment and use of it does not affect another’s. Population growth, increased economic activity and the burning of fossil fuels now pose a threat to the climate. The environment is a free resource, vulnerable to overuse, and human activity is now causing it to change. However, no single entity is responsible for it or owns it. This is referred to as the “tragedy of the commons”: everyone uses it free of charge and eventually depletes or damages it. This is why government intervention is necessary to protect our climate.
Climate is global: emissions in one part of the world have global repercussions. This makes an international government response necessary. Clearly, this will not be easy. The Kyoto Protocol for reducing CO₂ emissions has had some success, but was not considered sufficiently fair to be signed by the United States, the country with the highest volume of CO₂ emissions. Other voluntary agreements, such as the Asia-Pacific Partnership on Clean Development and Climate – which was signed by the US – are encouraging, but not binding. Thus, it is essential that governments implement national and international mandatory policies to effectively reduce carbon emissions in order to ensure the well-being of future generations.
The pace, extent and effects of climate change are not known with certainty. In fact, uncertainty complicates much of the discussion about climate change. Not only is the pace of future economic growth uncertain, but also the carbon dioxide and equivalent (CO₂e) emissions associated with economic growth. Furthermore, the global warming caused by a given quantity of CO₂e emissions is also uncertain, as are the costs and impact of temperature increases.
Though uncertainty is a key feature of climate change and its impact on the global economy, this cannot be an excuse for inaction. The distribution and probability of the future outcomes of climate change are heavily weighted towards large losses in global welfare. The likelihood of positive future outcomes is minor and heavily dependent upon an assumed maximum climate change of 2° Celsius above the pre-industrial average. The probability that a “business as usual” scenario – one with no new emission-mitigation policies – will contain global warming at 2° Celsius is generally considered as negligible. Hence, the “precautionary principle” – erring on the safe side in the face of uncertainty – dictates an immediate and vigorous global mitigation strategy for reducing CO₂e emissions.
There are two major types of mitigation strategies for reducing greenhouse gas emissions: a cap-and-trade system and a tax system. The cap-and-trade system establishes a quantity target, or cap, on emissions and allows emission allocations to be traded between companies, industries and countries. A tax on, for example, carbon emissions could also be imposed, forcing companies to internalize the cost of their emissions to the global climate and economy. Over time, quantity targets and carbon taxes would need to become increasingly restrictive as targets fall and taxes rise. Though both systems have their own merits, the cap-and-trade policy has an edge over the carbon tax, given the uncertainty about the costs and benefits of reducing emissions. First, cap-and-trade policies rely on market mechanisms – fluctuating prices for traded emissions – to induce appropriate mitigating strategies, and have proved effective at reducing other types of noxious gases. Second, caps have an economic advantage over taxes when a given level of emissions is required. There is substantial evidence that emissions need to be capped to restrict global warming to 2 °C above preindustrial levels or a little more than 1 °C compared to today. Given that the stabilization of emissions at current levels will most likely result in another degree rise in temperature and that current economic growth is increasing emissions, the precautionary principle supports a cap-and-trade policy. Finally, cap-and-trade policies are more politically feasible and palatable than carbon taxes. They are more widely used and understood and they do not require a tax increase. They can be implemented with as much or as little revenue-generating capacity as desired. They also offer business and consumers a great deal of choice and flexibility. A cap-and-trade policy should be easier to adopt in a wide variety of political environments and countries.
Whichever system – cap-and-trade or carbon tax – is adopted, there are distributional issues that must be addressed. Under a quantity target, allocation permits have value and can be granted to businesses or auctioned. A carbon tax would raise revenues that could be recycled, for example, into research on energy-efficient technologies. Or the revenues could be used to offset inefficient taxes or to reduce the distributional aspects of the carbon tax.
Source: “The economic justification for imposing restraints on carbon emissions”, Swiss Re, Insights, 2007; PDF
scaling laws
Scaling-law relations characterize an immense number of natural processes, prominently in the form of
1. scaling-law distributions,
2. scale-free networks,
3. cumulative relations of stochastic processes.
A scaling law, or power law, is a simple polynomial functional relationship, i.e., f(x) depends on a power of x. Two properties of such laws can easily be shown:
• a logarithmic mapping yields a linear relationship,
• scaling the function's argument x preserves the shape of the function f(x), a property called scale invariance.
See (Sornette, 2006).
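For completeness, both properties follow in one line each from the definition f(x) = a x^k:

```latex
% logarithmic mapping: taking logs turns the power law into a straight line of slope k
\log f(x) = \log a + k \log x
% scale invariance: rescaling the argument x -> c x only changes the prefactor,
% so the exponent k, i.e., the shape of the law, is preserved
f(c x) = a (c x)^k = c^k \, a x^k = c^k f(x)
```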
Scaling-Law Distributions
Scaling-law distributions have been observed in an extraordinarily wide range of natural phenomena: from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences; see (Newman, 2004). It is truly amazing that such diverse topics as
• the size of earthquakes, moon craters, solar flares, computer files, sand particles, wars and price moves in financial markets,
• the number of scientific papers written, citations received by publications, hits on webpages and species in biological taxa,
• the sales of music, books and other commodities,
• the population of cities,
• the income of people,
• the frequency of words used in human languages and of occurrences of personal names,
• the areas burnt in forest fires,
are all described by scaling-law distributions. First used by the economist Pareto in 1897 to describe the observed income distribution of households, this universal law has been illuminated by recent advances in the study of complex systems, which have helped uncover some of the possible mechanisms behind it. However, there is as yet no real understanding of the physical processes driving these systems.
Processes following normal distributions have a characteristic scale given by the mean of the distribution. In contrast, scaling-law distributions lack such a preferred scale. Measurements of scaling-law processes yield values distributed across an enormous dynamic range (sometimes many orders of magnitude), and for any section one looks at, the proportion of small to large events is the same. Historically, the observation of scale-free or self-similar behavior in the changes of cotton prices was the starting point for Mandelbrot’s research leading to the discovery of fractal geometry; see (Mandelbrot, 1963).
It should be noted that although scaling laws imply that small occurrences are extremely common whereas large instances are quite rare, these large events nevertheless occur much more frequently than they would under a normal (or Gaussian) probability distribution. For a Gaussian, events that deviate from the mean by, e.g., 10 standard deviations (called “10-sigma events”) are practically impossible to observe. For scaling-law distributions, extreme events have a small but very real probability of occurring. This fact is summed up by saying that the distribution has a “fat tail” (in the terminology of probability theory and statistics, distributions with fat tails are said to be leptokurtic or to display positive kurtosis), which greatly impacts risk assessment. So although most earthquakes, price moves in financial markets, intensities of solar flares, … will be very small, the possibility that a catastrophic event will happen cannot be neglected.
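The following Monte Carlo sketch contrasts the two cases numerically, using a unit-variance normal distribution and a hypothetical Pareto (scaling-law) distribution with tail exponent 1.5; the threshold of 10 corresponds to a “10-sigma event” in the Gaussian case.

```java
import java.util.Random;

// Monte Carlo comparison of tail probabilities: a unit-variance normal
// distribution versus a hypothetical Pareto distribution with tail exponent
// alpha = 1.5 and minimum value x_min = 1 (illustrative parameter choices).
public class FatTails {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1_000_000;
        double threshold = 10.0;   // "10-sigma" for the unit-variance normal
        double alpha = 1.5, xMin = 1.0;

        long normalExtremes = 0, paretoExtremes = 0;
        for (int i = 0; i < n; i++) {
            if (Math.abs(rng.nextGaussian()) > threshold) normalExtremes++;

            double u = 1.0 - rng.nextDouble();                 // uniform in (0, 1]
            double pareto = xMin * Math.pow(u, -1.0 / alpha);  // inverse-transform sampling
            if (pareto > threshold) paretoExtremes++;
        }
        // Expected: essentially zero normal "10-sigma" events (P ~ 1.5e-23),
        // but roughly 10^(-1.5), i.e. about 3%, of the Pareto draws exceed the threshold.
        System.out.println("normal events beyond threshold: " + normalExtremes);
        System.out.println("pareto events beyond threshold: " + paretoExtremes);
    }
}
```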
Scale-Free Networks
Another modern research field marked by the ubiquitous appearance of scaling-law relations is the study of complex networks. Many different phenomena in the physical (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological (e.g., neural networks, epidemiology, food webs, gene regulation), and social (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) worlds can be understood as network based. In essence, the links and nodes are abstractions describing the system under study via the interactions of the elements comprising it.
In graph theory, the degree of a node (or vertex), k, describes the number of links (or edges) the node has to other nodes. The degree distribution gives the probability distribution of degrees in a network. For scale-free networks, one finds that the probability that a node in the network connects with k other nodes follows a scaling law. Again, this power law is characterized by the existence of highly connected hubs, whereas most nodes have small degrees.
Scale-free networks are
• characterized by high robustness against random failure of nodes, but susceptible to coordinated attacks on the hubs, and
• thought to arise from a dynamical growth process, called preferential attachment, in which new nodes favor linking to existing nodes with high degrees.
It should be noted that another prominent feature of real-world networks, the so-called small-world property, is separate from a scale-free degree distribution, although scale-free networks are also small-world networks; see (Strogatz and Watts, 1998). In small-world networks, although most nodes are not neighbors of one another, most nodes can be reached from every other node by a surprisingly small number of hops or steps.
Most real-world complex networks - such as those listed at the beginning of this section - show both scale-free and small-world characteristics.
Some general references include (Barabasi, 2002), (Albert and Barabasi, 2001), and (Newman, 2003). The emergence of scale-free networks in the preferential attachment model is described in (Albert and Barabasi, 1999). An alternative explanation to preferential attachment, introducing non-topological values (called fitness) assigned to the vertices, is given in (Caldarelli et al., 2002).
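A minimal sketch of the preferential attachment growth process; the seed network, the number of nodes and the two links per new node are arbitrary illustrative choices. The crude degree histogram at the end falls off roughly as a power law: a few highly connected hubs and many low-degree nodes.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Minimal sketch of preferential attachment: each new node attaches m = 2
// links to existing nodes chosen with probability proportional to their
// current degree ("rich get richer").
public class PreferentialAttachment {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int nodes = 10_000, m = 2;

        int[] degree = new int[nodes];
        // every node appears in this list once per link it has, so drawing a
        // uniform element is equivalent to degree-proportional selection
        List<Integer> attachmentList = new ArrayList<>();

        // seed network: a triangle of nodes 0, 1, 2
        int[][] seed = {{0, 1}, {1, 2}, {2, 0}};
        for (int[] e : seed) {
            degree[e[0]]++; degree[e[1]]++;
            attachmentList.add(e[0]); attachmentList.add(e[1]);
        }

        for (int newNode = 3; newNode < nodes; newNode++) {
            Set<Integer> targets = new HashSet<>();
            while (targets.size() < m) {
                targets.add(attachmentList.get(rng.nextInt(attachmentList.size())));
            }
            for (int target : targets) {
                degree[newNode]++; degree[target]++;
                attachmentList.add(newNode); attachmentList.add(target);
            }
        }

        // crude log-binned degree histogram: counts fall off roughly as a power law
        for (int k = 1; k <= 1024; k *= 2) {
            int count = 0;
            for (int d : degree) if (d >= k && d < 2 * k) count++;
            System.out.printf("degree in [%d, %d): %d nodes%n", k, 2 * k, count);
        }
    }
}
```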
Cumulative Scaling-Law Relations
Next to distributions of random variables, scaling laws also appear in collections of random variables, called stochastic processes. Prominent empirical examples are financial time-series, where one finds empirical scaling laws governing the relationship between various observed quantities. See (Guillaume et al., 1997) and (Dacorogna et al., 2001).
Albert R. and Barabasi A.-L., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
Albert R. and Barabasi A.-L., 2001, Statistical Mechanics of Complex Networks,
Barabasi A.-L., 2002, Linked — The New Science of Networks, Perseus Publishing, Cambridge, Massachusetts.
Caldarelli G., Capoccio A., Rios P. D. L., and Munoz M. A., 2002, Scale-free Networks without Growth or Preferential Attachment: Good get Richer,
Dacorogna M. M., Gencay R., Müller U. A., Olsen R. B., and Pictet O. V., 2001, An Introduction to High-Frequency Finance, Academic Press, San Diego, CA.
Guillaume D. M., Dacorogna M. M., Dave R. D., Müller U. A., Olsen R. B., and Pictet O. V., 1997, From the Bird’s Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily Foreign Exchange Markets, Finance and Stochastics, 1, 95–129.
Mandelbrot B. B., 1963, The variation of certain speculative prices, Journal of Business, 36, 394–419.
Newman M. E. J., 2003, The Structure and Function of Complex Networks,
Newman M. E. J., 2004, Power Laws, Pareto Distributions and Zipf ’s Law,
Sornette D., 2006, Critical Phenomena in Natural Sciences, Series in Synergetics. Springer, Berlin, 2nd edition.
Strogatz S. H. and Watts D. J., 1998, Collective Dynamics of ‘Small-World’ Networks, Nature, 393, 440–442.
See also this post: laws of nature.
swarm theory
National Geographic`s July 2007 edition: Swarm Theory
A single ant or bee isn’t smart, but their colonies are. The study of swarm intelligence is providing insights that can help humans manage complex systems.
benford’s law
In 1881 a result was published, based on the observation that the first pages of logarithm books, used at that time to perform calculations, were much more worn than the other pages. The conclusion was that computations involving numbers starting with 1 were performed more often than others: if d denotes the first digit of a number, the probability of its appearance is equal to log(d + 1) − log(d).
The phenomenon was rediscovered in 1938 by the physicist F. Benford, who confirmed the “law” for a large number of random variables drawn from geographical, biological, physical, demographical, economic and sociological data sets. It even holds for randomly compiled numbers from newspaper articles. Specifically, Benford’s law, or the first-digit law, states that a number has first digit 1 with probability 30.1%, 2 with 17.6%, 3 with 12.5%, 4 with 9.7%, 5 with 7.9%, 6 with 6.7%, 7 with 5.8%, 8 with 5.1% and 9 with 4.6%. In general, the leading digit d ∈ [1, …, b−1] in base b ≥ 2 occurs with probability log_b(d + 1) − log_b(d) = log_b(1 + 1/d).
First explanations of this phenomenon, which appears to suspend the notions of probability, focused on its logarithmic nature, which implies a scale-invariant or power-law distribution. If the first digits have a particular distribution, it must be independent of the measuring system, i.e., conversions from one system to another don't affect the distribution. (This requirement that physical quantities are independent of a chosen representation is one of the cornerstones of general relativity, where it is called covariance.) So the common-sense requirement that the dimensions of arbitrary measurement systems shouldn't affect the measured physical quantities is summarized in Benford's law. In addition, the fact that many processes in nature show exponential growth is also captured by the law, which assumes that the logarithms of numbers are uniformly distributed.
So how come one observes random variables following normal and scaling-law distributions? In 1996 the phenomenon was proven mathematically rigorously: if one repeatedly chooses different probability distributions and then randomly draws a number according to each, the resulting list of numbers will obey Benford's law. Hence the law reflects the behavior of distributions of distributions.
Benford’s law has been used to detect fraud in insurance, accounting or expenses data, where people forging numbers tend to distribute their digits uniformly.
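As a small self-contained check, the first digits of the powers of two - a sequence known to obey Benford's law - can be compared against log(1 + 1/d):

```java
import java.math.BigInteger;

// Check of Benford's first-digit law against the powers of two 2^1 ... 2^n,
// a sequence known to follow the law; compare with log10(1 + 1/d).
public class Benford {
    public static void main(String[] args) {
        int n = 5000;
        int[] firstDigitCount = new int[10];

        BigInteger power = BigInteger.ONE;
        for (int i = 1; i <= n; i++) {
            power = power.shiftLeft(1);                       // 2^i
            int firstDigit = power.toString().charAt(0) - '0';
            firstDigitCount[firstDigit]++;
        }

        for (int d = 1; d <= 9; d++) {
            double empirical = (double) firstDigitCount[d] / n;
            double benford = Math.log10(1.0 + 1.0 / d);       // log_10(1 + 1/d)
            System.out.printf("digit %d: observed %.3f, Benford %.3f%n", d, empirical, benford);
        }
    }
}
```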
infinity?
There is an interesting observation, or conjecture, to be made from the Metaphysics Map in the post what can we know?, concerning the nature of infinity.
The Finite
Many observations reveal a finite nature of reality:
• Energy comes in finite parcels (quantum mechanics)
• The knowledge one can have about quanta is bounded (the uncertainty principle)
• Energy is conserved in the universe
• The speed of light has the same constant value for all observers (special relativity)
• The age of the universe is finite
• Information is finite and hence can be coded into a binary language
Newer and more radical theories propose:
• Space comes in finite parcels
• Time comes in finite parcels
• The universe is spatially finite
• The maximum entropy in any given region of space is proportional to the region's surface area and not its volume (this leads to the holographic principle, stating that our three-dimensional universe is a projection of physical processes taking place on a two-dimensional surface surrounding it)
So finiteness appears to be an intrinsic feature of the Outer Reality box of the diagram.
There is in fact a movement in physics subscribing to the finiteness of reality, called Digital Philosophy. Indeed, this finiteness postulate is a prerequisite for an even bolder statement, namely that the universe is one gigantic computer (a Turing-complete cellular automaton), where reality (thought and existence) is equivalent to computation. As mentioned above, the self-organizing, structure-forming evolution of the universe can be seen to produce ever more complex modes of information processing (e.g., storing data in DNA, thoughts, computations, simulations and perhaps, in the near future, quantum computations).
There is also an approach to quantum mechanics focusing on information, stating that an elementary quantum system carries (is?) one bit of information. This can be seen to lead to the notions of quantisation, uncertainty and entanglement.
The Infinite
It should be noted that zero is infinity in disguise: if one lets the denominator of a fraction go to infinity, the result is zero. Historically, zero was developed as a number in India in the early centuries AD and was introduced to the Western world by Arab scholars in the 10th century. As ordinary as zero appears to us today, the great Greek mathematicians never came up with such a concept.
Indeed, infinity is something intimately related to formal thought systems (mathematics). Irrational numbers have an infinite number of digits. There are two measures of infinity: countability and uncountability. The former refers to infinite sequences such as 1, 2, 3, …, whereas for the latter, starting from 1.0 one can't even reach 1.1, because there are uncountably many numbers in the interval between 1.0 and 1.1. In geometry, points and lines are idealizations of dimension zero and one, respectively.
So it appears as though infinity resides only in the Inner Reality box of the diagram.
The Interface
If it should be true that we live in a finite reality with infinity only residing within the mind as a concept, then there should be some problems if one tries to model this finite reality with an infinity-harboring formalism.
Perhaps this is indeed so. In chaos theory, the sensitivity to initial conditions (the butterfly effect) can be viewed as a problem of measuring numbers: a measurement can only have a finite degree of accuracy, whereas the numbers have, in principle, an infinite number of decimal places.
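A standard illustration is the logistic map x -> r x (1 - x) in its chaotic regime (r = 4): two initial conditions that differ only in the tenth decimal place - far beyond any realistic measurement accuracy - diverge completely after a few dozen iterations.

```java
// Sensitivity to initial conditions in the logistic map x -> r*x*(1-x), r = 4.
// The starting values and the 1e-10 offset are arbitrary illustrative choices.
public class ButterflyEffect {
    public static void main(String[] args) {
        double r = 4.0;
        double x = 0.2;
        double y = 0.2 + 1e-10;   // "measurement error" in the 10th decimal place

        for (int i = 1; i <= 60; i++) {
            x = r * x * (1 - x);
            y = r * y * (1 - y);
            if (i % 10 == 0) {
                System.out.printf("step %2d: x=%.6f  y=%.6f  |x-y|=%.2e%n", i, x, y, Math.abs(x - y));
            }
        }
    }
}
```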
In quantum gravity (the, as yet, unsuccessful merger of quantum mechanics and gravity) many of the inherent problems of the formalism could be bypassed when a theory was proposed (string theory) that replaced (zero-dimensional) point particles with one-dimensionally extended objects. Later incarnations, called M-theory, allow for higher-dimensional objects.
In the above mentioned information based view of quantum mechanics, the world appears quantised because the information retrieved by our minds about the world is inevitably quantised.
So the puzzle deepens. Why do we discover the notion of infinity in our minds while all our experiences and observations of nature indicate finiteness?
medical studies
medical studies often contradict each other. results claiming to have “proven” some causal connection are confronted with results claiming to have “disproven” the link, or vice versa. this dilemma affects even reputable scientists publishing in leading medical journals. the topics are diverse:
• high-voltage power supply lines and leukemia [1],
• salt and high blood pressure [1],
• heart diseases and sport [1],
• stress and breast cancer [1],
• smoking and breast cancer [1],
• praying and higher chances of healing illnesses [1],
• the effectiveness of homeopathic remedies and natural medicine,
• vegetarian diets and health,
• low frequency electromagnetic fields and electromagnetic hypersensitivity [2],
basically, this is understood to happen for three reasons:
• i.) the bias towards publishing positive results,
• ii.) incompetence in applying statistics,
• iii.) simple fraud.
publish or perish. in order to guarantee funding and secure the academic status quo, results are selected by their chance of being published.
an independent analysis of the original data used in 100 published studies exposed that roughly half of them showed large discrepancies between the aims originally stated by the researchers and the reported findings, implying that the researchers simply skimmed the data for publishable material [3].
this proves fatal in combination with ii.), as any statistically significant result can (by definition) occur by chance in an arbitrary distribution of measured data. so if you only look long enough for arbitrary results in your data, you are bound to come up with something [1].
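a small simulation makes the point: if 1000 hypothetical studies each test a coin that is in fact fair, and each declares “significance” at the usual p < 0.05 threshold, roughly 50 of them will report a “finding” purely by chance (the sample sizes below are arbitrary).

```java
import java.util.Random;

// simulation of the "look long enough and you will find something" effect:
// 1000 hypothetical studies, each testing a coin that is in fact fair, and
// each declaring "significance" when the outcome deviates by more than
// 1.96 standard errors (the usual p < 0.05 threshold).
public class FalsePositives {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int studies = 1000, flipsPerStudy = 1000;
        int significant = 0;

        for (int s = 0; s < studies; s++) {
            int heads = 0;
            for (int i = 0; i < flipsPerStudy; i++) if (rng.nextBoolean()) heads++;

            double expected = flipsPerStudy / 2.0;
            double stdError = Math.sqrt(flipsPerStudy) / 2.0; // sd of a fair-coin count
            double z = (heads - expected) / stdError;
            if (Math.abs(z) > 1.96) significant++;            // "statistically significant"
        }
        // roughly 5% of these null studies come out "significant" purely by chance
        System.out.println(significant + " of " + studies + " null studies look significant");
    }
}
```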
often, due to budget reasons, the number of test persons in clinical trials is simply too small to allow for statistical relevance. ref. [4] showed, among other things, that the smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
statistical significance - often evaluated by some statistics software package - is taken as proof without considering the plausibility of the result. many statistically significant results turn out to be meaningless coincidences after accounting for the plausibility of the finding [1].
one study showed that one third of frequently cited results fail a later verification [1].
another study documented that roughly 20% of the authors publishing in the magazine “nature” didn’t understand the statistical method they were employing [5].
iii.) a.)
two thirds of the clinical biomedical research in the usa is supported by industry - twice as much as in 1980 [1].
it was shown that in 1000 studies done in 2003, the nature of the funding correlated with the results: 80% of industry-financed studies had positive results, whereas only 50% of independent research reported positive findings.
it could be argued that the industry has a natural propensity to identify effective and lucrative therapies. however, the authors show that many impressive results were only obtained because they were compared with weak alternative drugs or placebos. [6]
iii.) b.)
quoted from
“Andrew Wakefield (born 1956 in the United Kingdom) is a Canadian trained surgeon, best known as the lead author of a controversial 1998 research study, published in the Lancet, which reported bowel symptoms in a selected sample of twelve children with autistic spectrum disorders and other disabilities, and alleged a possible connection with MMR vaccination. Citing safety concerns, in a press conference held in conjunction with the release of the report Dr. Wakefield recommended separating the components of the injections by at least a year. The recommendation, along with widespread media coverage of Wakefield’s claims was responsible for a decrease in immunisation rates in the UK. The section of the paper setting out its conclusions, known in the Lancet as the “interpretation” (see the text below), was subsequently retracted by ten of the paper’s thirteen authors.
In February of 2004, controversy resurfaced when Wakefield was accused of a conflict of interest. The London Sunday Times reported that some of the parents of the 12 children in the Lancet study were recruited via a UK attorney preparing a lawsuit against MMR manufacturers, and that the Royal Free Hospital had received £55,000 from the UK’s Legal Aid Board (now the Legal Services Commission) to pay for the research. Previously, in October 2003, the board had cut off public funding for the litigation against MMR manufacturers. Following an investigation of The Sunday Times allegations by the UK General Medical Council, Wakefield was charged with serious professional misconduct, including dishonesty, due to be heard by a disciplinary board in 2007.
In December of 2006, the Sunday Times further reported that in addition to the money given to the Royal Free Hospital, Wakefield had also been personally paid £400,000 which had not been previously disclosed by the attorneys responsible for the MMR lawsuit.”
wakefield had always only expressed his criticism of the combined triple vaccination, supporting single vaccinations spaced in time. the british tv station channel 4 exposed in 2004 that he had applied for patents for the single vaccines. wakefield dropped his subsequent slander action against the media company only at the beginning of 2007. as mentioned, he now awaits charges of professional misconduct. however, he has left britain and now works for a company in austin, texas. it has been uncovered that other employees of this us company had received payments from the same attorney preparing the original lawsuit. [7]
should we be surprised by all of this? next to the innate tendency of human beings to be incompetent and unscrupulous, there is perhaps another level, that makes this whole endeavor special.
the inability of scientists to conclusively and reproducibly uncover findings concerning human beings is perhaps better appreciated if one considers the nature of the subject under study. life, after all, is an enigma, and the connection linking the mind to matter is elusive at best (i.e., the physical basis of consciousness).
the body's capability to heal itself, i.e., the placebo effect and the resulting need for double-blind studies, is indeed very bizarre. however, there are studies questioning whether the effect exists at all;-)
taken from
(consult also for the corresponding links for the sources cited below)
[1] This article in the magazine issued by the Neue Zürcher Zeitung by Robert Matthews
[2] C. Schierz; Projekt NEMESIS; ETH Zürich; 2000
[3] A. Chan (Center of Statistics in Medicine, Oxford) et. al.; Journal of the American Medical Association; 2004
[4] J. Ioannidis; “Why Most Published Research Findings Are False” ; University of Ioannina; 2005
[5] R. Matthews, E. García-Berthou and C. Alcaraz as reported in this “Nature” article; 2005
[6] C. Gross (Yale University School of Medicine) et. al.; “Scope and Impact of Financial Conflicts of Interest in Biomedical Research “; Journal of the American Medical Association; 2003
[7] H. Kaulen; “Wie ein Impfstoff zu Unrecht in Misskredit gebracht wurde”; Deutsches Ärzteblatt; Jg. 104; Heft 4; 26. Januar 2007
in a nutshell
Science, put simply, can be understood as working on three levels:
• i.) analyzing the nature of the object being considered/observed,
• ii.) developing the formal representation of the object’s features and its dynamics/interactions,
• iii.) devising methods for the empirical validation of the formal representations.
To be precise, level i.) lies more within the realm of philosophy (e.g., epistemology) and metaphysics (i.e., ontology), as notions of origin, existence and reality appear to transcend the objective and rational capabilities of thought. The main problem being:
“Why is there something rather than nothing? For nothingness is simpler and easier than anything.”; [1].
In the history of science the above mentioned formulation made the understanding of at least three different levels of reality possible:
• a.) the fundamental level of the natural world,
• b.) inherently random phenomena,
• c.) complex systems.
While level a.) deals mainly with the quantum realm and cosmological structures, levels b.) and c.) are comprised mostly of biological, social and economic systems.
a.) Fundamental
Many natural sciences focus on a.i.) fundamental, isolated objects and interactions, use a.ii.) mathematical models which are a.iii.) verified (falsified) in experiments that check the predictions of the model - with great success:
“The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it.”; [2].
b.) Random
Often the nature of the object b.i.) being analyzed is in principle unknown. Only statistical evaluations of sets of outcomes of single observations/experiments can be used to estimate b.ii.) the underlying model, and b.iii.) test it against more empirical data. This is often the approach taken in the social sciences, medicine, and business.
c.) Complex
Moving to c.i.) complex, dynamical systems, and c.ii.) employing computer simulations as a template for the dynamical process, unlocks a new level of reality: mainly the complex and interacting world we experience at our macroscopic length scales in the universe. Here two new paradigms emerge:
• the shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers,
• simple rules giving rise to complex behavior: “And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.”; [3].
However, things are not as clear anymore. What is the exact methodology, how does it relate to underlying concepts of ontology and epistemology, and what is the nature of these computations per se? Or, within the formulation given above, i.e., c.iii.): what is the “reality” of these models, i.e., what do the local rules determining the dynamics in the simulation have to say about the reality of the system c.i.) they are trying to emulate?
There are many coincidences that enabled the structured reality we experience on this planet to have evolved: the exact values of fundamental constants (initial conditions), emerging structure-forming and self-organizing processes, the possibility for (organic) matter to store information (after being synthesized in supernovae!), the right conditions on earth for harboring life, the emergent possibility of neural networks to establish consciousness and sentience above a certain threshold, …
Interestingly, there are also many circumstances that allow the observable world to be understood by the human mind:
• the mystery allowing formal thought systems to map to patterns in the real world,
• the development of the technology allowing for the design and realization of microprocessors,
• the bottom-up approach to complexity identifying a micro level of simple interactions of system elements.
So it appears that the human mind is intimately interwoven with the fabric of reality that produced it.
But where is all this leading to? There exists a natural extension to science which fuses the notions from levels a.) to c.), namely
• information and information processing,
• formal mathematical models,
• statistics and randomness.
Notably, this extension comes from an engineering point of view, deals with quantum computers, and comes full circle back to level i.), the question about the nature of reality:
“[It can be shown] that quantum computers can simulate any system that obeys the known laws of physics in a straightforward and efficient way. In fact, the universe is indistinguishable from a quantum computer.”; [4].
At first blush the idea of substituting reality with a computed simulation appears rather ad hoc, but in fact it does have potentially falsifiable notions:
• the discreteness of reality, i.e., the notion that continuity and infinity are not physical,
• the reality of the quantum realm should be contemplated from the point of view of information, i.e., the only relevant reality subatomic quanta manifest is that they register one bit of information: “Information is physical.”; [5].
[1] von Leibniz, G. W., “Principes de la nature et de la grâce”, 1714
[2] Wigner, E. P., “Symmetries and Reflections”, MIT Press, Cambridge, 1967
[3] Wolfram, S., “A New Kind of Science”, Wolfram Media, pg. 19, 2002
[4] Lloyd, S., “Programming the Universe”, Random House, pgs. 53 - 54, 2006
[5] Landauer, R., Nature, 335, 779-784, 1988
See also: “The Mathematical Universe” by M. Tegmark.
Related posts:
See also this post: laws of nature.
what can we know?
Put bluntly, metaphysics asks simple albeit deep questions:
• Why do I exist?
• Why do I die?
• Why does the world exist?
• Where did everything come from?
• What is the nature of reality?
• What is the meaning of existence?
• Is there a creator or omnipotent being?
Although these questions may appear idle and futile, they seem to represent an innate longing for knowledge of the human mind. Indeed, children can and often do pose such questions, only to be faced with resignation or impatience of adults.
To make things simpler and tractable, one can focus on the question “What can we know?”.
When you wake up in the morning, you instantly become aware of your self, i.e., you experience an immaterial inner reality you can feel and probe with your thoughts. Upon opening your eyes, a structured material outer reality appears. These two insurmountable facts are enough to sketch a small metaphysical diagram:
Focusing on the outer reality, or physical universe, there exists an underlying structure-forming and self-organizing process starting with an initial singularity or Big Bang (an extremely low entropy state, i.e., high order, giving rise to the arrow or direction of time). Due to the exact values of physical constants in our universe, this organizing process yields structures eventually giving birth to stars, which, at the end of their life cycle, explode (supernovae), allowing nuclear reactions to fuse heavy elements.
One of these heavy elements brings with it novel bonding possibilities, resulting in a new pattern: organic matter. Within a couple of billion years, the structure forming process gave rise to a plethora of living organisms. Although each organism would die after a short lifespan, the process of life as a whole continued to live in a sustainable equilibrium state and survived a couple of extinction events (some of which eradicated nearly 90% of all species).
The second law of thermodynamics states, that the entropy of the universe is increasing, i.e., the universe is becoming an ever more unordered place. It would seem that the process of life creating stable and ordered structures violates this law. In fact, complex structures spontaneously appear where there is a steady flow of energy from a high temperature input source (the sun) to a low temperature external sink (the earth). So pumping a system with energy leads it to a state far from the thermodynamic equilibrium which is characterized by the emergence of ordered structures.
Viewed from an information processing perspective, the organizing process suddenly experienced a great leap forward. The brains of some organisms had reached a critical mass, allowing for another emergent behavior: consciousness.
The majority of people in industrialized nations take a rational and logical outlook on life. Although one might think this is an inevitable mode of awareness, it is actually a cultural imprinting, as there exist other civilizations putting far less emphasis on rationality.
Perhaps the divide between Western and Eastern thinking illustrates this best. Whereas the former is locked in continuous interaction with the outer world, the latter focuses on the experience of an inner reality. A history of meditation techniques underlines this emphasis on the nonverbal experience of one's self. Thought is either totally avoided, or the mind is focused on repetitive activities, in effect deactivating it.
Recall from fundamental that there are two surprising facts to be found. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. And on the other hand, real complex behavior can be emulated by computer simulations following simple laws (the computers themselves are an example of the technological advances made possible by the successful modeling of nature by formal thought systems).
This conceptual map allows one to categorize a lot of stuff in a concise manner. Also, the interplay between the outer and inner realities becomes visible. However, the above mentioned questions remain unanswered. Indeed, more puzzles appear. So as usual, every advance in understanding just makes the question mark bigger…
Continued here: infinity?
invariant thinking…
Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of general relativity and the standard model (of particle physics).
This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should allow only the most minimal reference to be made to this formal system. I.e., reality likes to be translated into a language that doesn't explicitly depend on its own peculiarities (coordinates, number bases, units, …). This is a pretty obvious idea and allows physical laws to be universal.
But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: I will only allow myself to be coded into a formal framework that makes no reference to itself whatsoever. Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant?
Does this imply “invariance” even with respect to our thinking? How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking?
Taken from this newsgroup message
See also: fundamental
While physics has had amazing success in describing most of the observable universe over the last 300 years, the formalism appears to be restricted to the fundamental workings of nature. Only solid-state physics attempts to deal with collective systems, and only thanks to the magic of symmetry is one able to deduce fundamental analytical solutions.
In order to approach real life complex phenomena, one needs to adopt a more systems oriented focus. This also means that the interactions of entities becomes an integral part of the formalism.
Some ideas should illustrate the situation:
• Most calculations in physics are idealizations and neglect dissipative effects like friction
• Most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear
• The analytical solution for three gravitating bodies in classical mechanics, given their initial positions, masses and velocities, cannot be found in general; the system turns out to be chaotic and can only be simulated in a computer (see the sketch below); yet there are an estimated hundred billion galaxies in the universe
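A minimal numerical sketch of that last point, with invented masses, positions and velocities and units chosen so that G = 1: the three-body problem has no general closed-form solution, but stepping it forward with a simple velocity-Verlet integrator is straightforward.

```java
// Numerical sketch of the gravitational three-body problem in 2D.
// Units with G = 1; masses, positions and velocities are invented initial conditions.
public class ThreeBody {
    static final double G = 1.0;
    static double[] m = {1.0, 1.0, 1.0};
    static double[][] pos = {{-1.0, 0.0}, {1.0, 0.0}, {0.0, 0.5}};
    static double[][] vel = {{0.0, -0.4}, {0.0, 0.4}, {0.3, 0.0}};

    // gravitational acceleration on body i from the other two bodies
    static double[] acceleration(int i) {
        double ax = 0, ay = 0;
        for (int j = 0; j < 3; j++) {
            if (j == i) continue;
            double dx = pos[j][0] - pos[i][0], dy = pos[j][1] - pos[i][1];
            double r = Math.sqrt(dx * dx + dy * dy);
            double f = G * m[j] / (r * r * r);
            ax += f * dx;
            ay += f * dy;
        }
        return new double[]{ax, ay};
    }

    public static void main(String[] args) {
        double dt = 0.001;
        for (int step = 0; step <= 20000; step++) {
            // velocity Verlet: update positions with the old accelerations ...
            double[][] a0 = {acceleration(0), acceleration(1), acceleration(2)};
            for (int i = 0; i < 3; i++) {
                pos[i][0] += vel[i][0] * dt + 0.5 * a0[i][0] * dt * dt;
                pos[i][1] += vel[i][1] * dt + 0.5 * a0[i][1] * dt * dt;
            }
            // ... then update velocities with the average of old and new accelerations
            double[][] a1 = {acceleration(0), acceleration(1), acceleration(2)};
            for (int i = 0; i < 3; i++) {
                vel[i][0] += 0.5 * (a0[i][0] + a1[i][0]) * dt;
                vel[i][1] += 0.5 * (a0[i][1] + a1[i][1]) * dt;
            }
            if (step % 5000 == 0) {
                System.out.printf("t=%.1f  body0=(%.3f, %.3f)%n", step * dt, pos[0][0], pos[0][1]);
            }
        }
    }
}
```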
Systems Thinking
Systems theory is an interdisciplinary field which studies relationships of systems as a whole. The goal is to explain complex systems which consist of a large number of mutually interacting and interwoven parts in terms of those interactions.
A timeline:
• Cybernetics (50s): Study of communication and control, typically involving regulatory feedback, in living organisms and machines
• Catastrophe theory (70s): Phenomena characterized by sudden shifts in behavior arising from small changes in circumstances
• Chaos theory (80s): Describes the behavior of non-linear dynamical systems that under certain conditions exhibit a phenomenon known as chaos (sensitivity to initial conditions, regimes of chaotic and deterministic behavior, fractals, self-similarity)
• Complex adaptive systems (90s): The “new” science of complexity which describes emergence, adaptation and self-organization; employing tools such as agent-based computer simulations
In systems theory one can distinguish between three major hierarchies:
• Suborganic: Fundamental reality, space and time, matter, …
• Organic: Life, evolution, …
• Metaorganic: Consciousness, group dynamical behavior, financial markets, …
However, it is not understood how one can traverse the following chain: bosons and fermions -> atoms -> molecules -> DNA -> cells -> organisms -> brains. I.e., how to understand phenomena like consciousness and life within the context of inanimate matter and fundamental theories.
e.g., systems view
Category Theory
The mathematical theory called category theory is a result of the “unification of mathematics” in the 40s. A category is the most basic structure in mathematics: it consists of a set of objects and a set of morphisms (maps) between them. A functor is a structure-preserving map between categories.
This dynamical systems picture can be linked to the notion of formal systems mentioned above: physical observables are functors, independent of a chosen representation or reference frame, i.e., invariant and covariant.
Object-Oriented Programming
This programming paradigm can be viewed in a systems framework, where the objects are instances of classes (collections of properties and functions) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of the communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys certain rules (encapsulation, inheritance, polymorphism, …).
Some advantages of this integral approach to software development:
• Easier to tackle complex problems
• Allows natural evolution towards complexity and better modeling of the real world
• Reusability of concepts (design patterns) and easy modifications and maintenance of existing code
• Object-oriented design has more in common with natural languages than other (i.e., procedural) approaches
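A toy Java example of these ideas, with made-up class names: the objects keep their internal state encapsulated, are addressed only through the message step(), and polymorphism lets the caller ignore which concrete class it is talking to.

```java
// Toy illustration of encapsulation and polymorphism; class names are invented.
interface Agent {
    void step();   // the "message" every agent understands
}

class Ant implements Agent {
    private double pheromone = 0.0;          // encapsulated internal state
    public void step() { pheromone += 0.1; System.out.println("ant forages, pheromone=" + pheromone); }
}

class Bird implements Agent {
    private double heading = 90.0;
    public void step() { heading += 5.0; System.out.println("bird adjusts heading to " + heading); }
}

public class Simulation {
    public static void main(String[] args) {
        Agent[] swarm = {new Ant(), new Bird(), new Ant()};
        for (Agent a : swarm) a.step();      // same message, different behavior
    }
}
```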
Algorithmic vs. Analytical
Perhaps the shift of focus in this new worldview can be understood best when one considers the paradigm of complex systems theory:
• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior: Emergence, structure-formation, self-organization, adaptive behavior (learning), …
This allows a departure from equation-based descriptions to models of dynamical processes simulated in computers. This is perhaps the second miracle involving the human mind and the understanding of nature: not only does nature work on a fundamental level akin to the formal systems devised by our brains, the hallmark of complexity appears to be coded in simplicity (”simple sets of rules give complexity”), allowing computational machines to emulate its behavior.
complex systems
It is very interesting to note that in this paradigm the focus is on the interactions, i.e., the complexity of the agent can be ignored. That is why the formalism works for chemicals in a reaction, ants in an anthill, humans in social or economic organizations, … In addition, one should also note that simple rules - the epitome of deterministic behavior - can also give rise to chaotic behavior.
The emerging field of network theory (an extension of graph theory, yielding results such as scale-free topologies, small-world phenomena, etc., observed in a stunning variety of complex networks) is also located at this end of the spectrum of formal descriptions of the workings of nature.
Finally, to revisit the analytical approach to reality, note that in the loop quantum gravity approach, space-time is perceived as a causal network arising from graph updating rules (spin networks, which are graphs associated with group theoretic properties), where particles are envisaged as ‘topological defects’ and geometric properties of reality, such as dimensionality, are defined solely in terms of the network’s connectivity pattern.
list of open questions in complexity theory.
What is science?
• Science is the quest to capture the processes of nature in formal mathematical representations
So “math is the blueprint of reality” in the sense that formal systems are the foundation of science.
In a nutshell:
• Natural systems are a subset of reality, i.e., the observable universe
• Guided by thought, observation and measurement natural systems are “encoded” into formal systems
• Using logic (rules of inference) in the formal system, predictions about the natural system can be made (decoding)
• Checking the predictions with the experimental outcome gives the validity of the formal system as a model for the natural system
Physics can be viewed as dealing with the fundamental interactions of inanimate matter.
For a technical overview, go here.
math models
• Mathematical models of reality are independent of their formal representation
This leads to the notions of symmetry and invariance. Basically, this requirement gives rise to nearly all of physics.
Classical Mechanics
Symmetry, understood as the invariance of the equations under temporal and spatial transformations, gives rise to the conservation laws of energy, momentum and angular momentum.
In layman's terms this means that the outcome of an experiment is unchanged by the time and location of the experiment and by the motion of the experimental apparatus. Just common sense…
Mathematics of Symmetry
The intuitive notion of symmetry has been rigorously defined in the mathematical terms of group theory.
Physics of Non-Gravitational Forces
The three non-gravitational forces are described in terms of quantum field theories. These in turn can be expressed as gauge theories, where the parameters of the gauge transformations are local, i.e., differ from point to point in space-time.
The Standard Model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries.
Physics of Gravity
Gravity is the only force that can’t be expressed as a quantum field theory.
Its symmetry principle is called covariance, meaning that in the geometric language of the theory describing gravity (general relativity) the physical content of the equations is unchanged by the choice of the coordinate system used to represent the geometrical entities.
To illustrate, imagine an arrow located in space. It has a length and an orientation. In geometric terms this is a vector; let's call it a. If I want to compute the length of this arrow, I need to choose a coordinate system, which gives me the x-, y- and z-components of the vector, e.g., a = (3, 5, 1). So starting from the origin of my coordinate system (0, 0, 0), if I move 3 units in the x direction (left-right), 5 units in the y direction (forwards-backwards) and 1 unit in the z direction (up-down), I reach the end of my arrow. The problem is now that, depending on the choice of coordinate system - meaning the orientation and the size of the units - the same arrow can look very different: a = (3, 5, 1) = (0, 23.34, -17). However, every time I compute the length of the arrow in meters, I get the same number, independent of the chosen representation.
In general relativity the vectors are generalized to multidimensional equivalents called tensors, and the common-sense requirement that calculations involving tensors do not depend on how I represent them in space-time is covariance.
It is quite amazing, but there is only one more ingredient needed in order to construct one of the most aesthetic and accurate theories in physics. It is called the equivalence principle and states that the gravitational force is equivalent to the forces experienced during acceleration. This may sound trivial, but it has very deep implications.
micro macro math models
Physics of Condensed Matter
This branch of physics, also called solid-state physics, deals with the macroscopic physical properties of matter. It is one of physics' first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory: the complexity of the problems can be reduced using symmetry so that analytical solutions can be found. Technically, the symmetry groups are boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors and quasi-crystals (interestingly, they have fractal properties!). In the superconducting phase, the wave function becomes symmetric.
The Success
It is somewhat of a miracle that the formal systems the human brain discovers/devises find their match in the workings of nature. In fact, there is no reason for this to be the case, other than that it is the way things are.
The following two examples should underline the power of this fact, where new features of reality were discovered solely on the requirements of the mathematical model:
• In order to unify electromagnetism with the weak force (two of the three non-gravitational forces), the theory postulated two new elementary particles: the W and Z bosons. Needless to say, these particles were hitherto unknown, and it took 10 years for technology to advance sufficiently in order to allow their discovery.
• The fusion of quantum mechanics and special relativity led to the Dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. Four years after the formulation of the theory, antimatter was experimentally discovered.
The Future…
Despite this success, modern physics is still far from being a unified, paradox-free formalism describing all of the observable universe. Perhaps the biggest obstacle lies in the last missing step to unification. In a series of successes, forces that appeared to be independent phenomena turned out to be facets of the same formalism: electricity and magnetism were united in the four Maxwell equations; as mentioned above, electromagnetism and the weak force were merged into the electroweak force; and finally, the electroweak and strong force were united in the framework of the standard model of particle physics. These four forces are all expressed as quantum (field) theories. There is only one observable force left: gravity.
The efforts to quantize gravity and devise a unified theory have taken a strange turn in the last 20 years. The problem is still unsolved; however, the mathematical formalisms engineered for this quest - namely string/M-theory and loop quantum gravity - have had a twofold impact:
• A new level in the application of formal systems is reached. Whereas before, physics relied on mathematical branches that were developed independently of any physical application (e.g., differential geometry, group theory), string/M-theory is actually spawning new fields of mathematics (namely in topology).
• These theories tell us very strange things about reality:
• Time does not exist on a fundamental level
• Space and time per se become quantized
• Space has more than three dimensions
• Another breed of fundamental particles is needed: supersymmetric matter
Unfortunately no one knows if these theories are hinting at a greater reality behind the observable world, or if they are "just" math. The main problem is that any kind of experiment to verify the claims appears to be out of reach of our technology…
4dfe73161a5c4e82 | Orbital hybridisation
From Wikipedia, the free encyclopedia
Not to be confused with s-p mixing in Molecular Orbital theory. See Molecular orbital diagram.
Four sp3 orbitals.
Three sp2 orbitals.
In chemistry, hybridisation (or hybridization) is the concept of mixing atomic orbitals into new hybrid orbitals (with different energies, shapes, etc., than the actual orbitals hybridising) suitable for the pairing of electrons to form chemical bonds in valence bond theory. Hybrid orbitals are very useful in the explanation of molecular geometry and atomic bonding properties. Although sometimes taught together with the valence shell electron-pair repulsion (VSEPR) theory, valence bond and hybridisation are in fact not related to the VSEPR model.1
Historical development
Chemist Linus Pauling first developed the hybridisation theory in order to explain the structure of molecules such as methane (CH4).2 This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalising the structures of organic compounds.
Orbitals are a model representation of the behaviour of electrons within molecules. In the case of simple hybridisation, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen. Hybrid orbitals are assumed to be mixtures of these atomic orbitals, superimposed on each other in various proportions. It provides a quantum mechanical insight to Lewis structures. Hybridisation theory finds its use mainly in organic chemistry.
spx and sdx terminology
For atoms forming equivalent hybrids with no lone pairs, there is a correspondence to the number and type of orbitals used. For example, sp3 hybrids are formed from one s and three p orbitals. However, in all other cases, there is no such correspondence. The two bond-forming hybrid orbitals of oxygen in water can be described as sp4, which means that they have 20% s character and 80% p character, but does not imply that they are formed from one s and four p orbitals. As a result, the amount of p-character is not restricted to integer values; i.e., hybridisations like sp2.5 are also readily described. For more information see Variable Hybridization.
An analogous notation is used to describe sdx hybrids. For example, the permanganate ion (MnO4−) has sd3 hybridisation with orbitals that are 25% s and 75% d.
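The percentages above follow from simple normalisation: an sp^x (or sd^x) hybrid has a fraction 1/(1 + x) of s character and x/(1 + x) of p (or d) character. A small illustrative Python check (not part of the article) reproduces the quoted numbers:

```python
def hybrid_character(x):
    """Return the s fraction and the p (or d) fraction of an sp^x or sd^x hybrid."""
    s = 1.0 / (1.0 + x)
    return s, 1.0 - s

for label, x in [("sp3", 3), ("sp4 (O-H bonds in water)", 4),
                 ("sp2.5", 2.5), ("sd3 (permanganate)", 3)]:
    s, other = hybrid_character(x)
    print(f"{label:26s} s: {s:.0%}  p/d: {other:.0%}")
```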
Types of hybridisation
sp3 hybrids
Hybridisation describes the bonding atoms from an atom's point of view. That is, for a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals with the correct symmetry to bond to the 4 hydrogen atoms.
Carbon's ground state configuration is 1s2 2s2 2px1 2py1 or more easily read:
C    1s ↑↓    2s ↑↓    2px ↑    2py ↑    2pz (empty)
The carbon atom can utilize its two singly occupied p-type orbitals (the designations px py or pz are meaningless at this point, as they do not fill in any particular order), to form two covalent bonds with two hydrogen atoms, yielding the "free radical" methylene CH2, the simplest of the carbenes. The carbon atom can also bond to four hydrogen atoms by an excitation of an electron from the doubly occupied 2s orbital to the empty 2p orbital, so that there are four singly occupied orbitals.
C*    1s ↑↓    2s ↑    2px ↑    2py ↑    2pz ↑
As the additional bond energy more than compensates for the excitation, the formation of four C-H bonds is energetically favoured.
Quantum mechanically, the lowest energy is obtained if the four bonds are equivalent, which requires that they be formed from equivalent orbitals on the carbon. To achieve this equivalence, the angular distributions of the orbitals change via a linear combination of the valence-shell s and p wave functions3 (core orbitals are almost never involved in bonding) to form four sp3 hybrids.
C*    1s ↑↓    sp3 ↑    sp3 ↑    sp3 ↑    sp3 ↑
In CH4, four sp3 hybrid orbitals are overlapped by hydrogen's 1s orbital, yielding four σ (sigma) bonds (that is, four single covalent bonds) of the same length and strength.
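The linear combination mentioned above can be made explicit. The sketch below (illustrative only, using the conventional sign choices) writes the four sp3 hybrids as coefficient vectors over the orthonormal basis (s, px, py, pz) and verifies that they are orthonormal and that their p parts point along tetrahedral directions separated by about 109.47°:

```python
import numpy as np
from itertools import combinations

# Rows: coefficients of each sp3 hybrid on the basis (s, px, py, pz).
H = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)

# Orthonormality of the four hybrids: H @ H.T should be the identity matrix.
print("orthonormal:", np.allclose(H @ H.T, np.eye(4)))

# Angle between the spatial (p-part) directions of any two hybrids.
for i, j in combinations(range(4), 2):
    u, v = H[i, 1:], H[j, 1:]
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    print(f"hybrids {i}-{j}: {np.degrees(np.arccos(cos)):.2f} degrees")  # 109.47 each
```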
sp2 hybrids
Ethene structure
Other carbon based compounds and other molecules may be explained in a similar way as methane. For example, ethene (C2H4) has a double bond between the carbons.
For this molecule, carbon will sp2 hybridise, because one π (pi) bond is required for the double bond between the carbons, and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals:
C*    1s ↑↓    sp2 ↑    sp2 ↑    sp2 ↑    2p ↑
forming a total of three sp2 orbitals with one p orbital remaining. In ethylene (ethene) the two carbon atoms form a σ bond by overlapping two sp2 orbitals and each carbon atom forms two covalent bonds with hydrogen by s–sp2 overlap all with 120° angles. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. The hydrogen–carbon bonds are all of equal strength and length, which agrees with experimental data.
sp hybrids
A schematic presentation of hybrid orbitals sp
C*    1s ↑↓    sp ↑    sp ↑    2p ↑    2p ↑
In this model, the 2s orbital mixes with only one of the three p orbitals resulting in two sp orbitals and two remaining unchanged p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
Hybridisation and molecule shape
Hybridisation helps to explain molecule shape:
Classification: Main group | Transition metal4
AX2
• Main group: Linear (180°), sp hybridisation, e.g., CO2
• Transition metal: Bent (90°), sd hybridisation, e.g., VO2+
AX4
• Transition metal: Tetrahedral (109.5°), sd3 hybridisation, e.g., MnO4−
AX5 -
AX6 -
Main group compounds with lone pairs
For main group compounds with lone electron pairs, the s orbital lone pair (analogous to s-p mixing in molecular orbital theory) can be hybridised to a certain extent with the bond pairs7 to maximize energetic stability according to its Walsh diagram. This rationalisation is applied to explain deviations in ideal bond angles (i.e. only p orbitals used for bonding), most commonly in second and third period elements.
• Trigonal pyramidal (AX3E1)
• s-orbital can be hybridised with the three p-orbital bonds to give bond angles greater than 90°.
• E.g., NH3
• Bent (AX2E1-2)
• s-orbital lone pair can be hybridised with the two p-orbital bonds to give bond angles greater than 90°. The out-of-plane p-orbital can either be a lone pair or pi bond. If it is a lone pair, it results in inequivalent lone pairs contrary to the common picture depicted by VSEPR theory (see below).
• E.g., SO2, H2O
• Monocoordinate (AX1E1-3)
• s-orbital lone pair can be hybridised with the p-orbital bond. The two out-of-line p-orbitals can either be lone pairs or pi bonds. The p-orbital lone pairs are inequivalent from the s-rich lone pair.
• E.g., CO, SO, HF
Hybridisation of hypervalent molecules
Traditional description
In general chemistry courses and mainstream textbooks, hybridisation is often presented for main group AX5 and above, as well as for transition metal complexes, using the hybridisation scheme first proposed by Pauling.
Classification: Main group | Transition metal
AX2 -
• Transition metal: Linear (180°), sp hybridisation, e.g., Ag(NH3)2+
AX3 -
AX4 -
AX6
• Transition metal: Octahedral (90°), d2sp3 hybridisation, e.g., Mo(CO)6
AX9 -
However, such a scheme is now superseded as more recent calculations based on molecular orbital theory have shown that in main-group molecules the d component is insignificant, while in transition metal complexes the p component is insignificant (see below).
Resonance description
As shown by computational chemistry, hypervalent molecules can only be stable given strongly polar (and weakened) bonds with electronegative ligands such as fluorine or oxygen to reduce the valence electron occupancy of the central atom to below 8[8] (or 12 for transition metals). This requires an explanation that invokes sigma resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. As a guideline, all resonance structures have to obey the octet rule for main group compounds and the dodectet (12) rule for transition metal complexes.
Classification Main group Transition metal
AX2 - Linear (180°)
AX3 - Trigonal planar (120°)
AX4 - Square planar (90°)
AX5 Trigonal bipyramidal (90°, 120°) Trigonal bipyramidal (90°, 120°)
• Fractional hybridisation (s and d orbitals)
• E.g., Fe(CO)5
AX6 Octahedral (90°) Octahedral (90°)
AX7 Pentagonal bipyramidal (90°, 72°) Pentagonal bipyramidal (90°, 72°)
• Fractional hybridisation (s and three d orbitals)
• E.g., V(CN)74−
AX8 Square antiprismatic Square antiprismatic
• Fractional hybridisation (s and three p orbitals)
• E.g., IF8
• Fractional hybridisation (s and four d orbitals)
• E.g., Re(CN)83−
AX9 - Tricapped trigonal prismatic
• Fractional hybridisation (s and five d orbitals)
• E.g., ReH92−
Main group compounds with lone pairs
For hypervalent main group compounds with lone electron pairs, the bonding scheme can be split into two components: the "resonant bonding" component and the "regular bonding" component. The "regular bonding" component has the same description (see above), while the "resonant bonding" component consists of resonating bonds utilizing p orbitals. The table below shows how each shape is related to the two components and their respective descriptions.
Regular bonding component (marked in red)
Bent Monocoordinate -
Resonant bonding component Linear axis Seesaw (AX4E1) (90°, 180°, >90°) T-shaped (AX3E2) (90°, 180°) Linear (AX2E3) (180°)
Square planar equator - Square pyramidal (AX5E1) (90°, 90°) Square planar (AX4E2) (90°)
Pentagonal planar equator - Pentagonal pyramidal (AX6E1) (90°, 72°) Pentagonal planar (AX5E2) (72°)
Clarifying misconceptions
VSEPR electron domains and hybrid orbitals are different
The simplistic picture of hybridisation taught in conjunction with VSEPR theory does not agree with high-level theoretical calculations7 despite its widespread usage in many textbooks. For example, following the guidelines of VSEPR, the hybridization of the oxygen in water is described with two equivalent lone electron-pairs.9 However, molecular orbital calculations give orbitals that reflect the C2v symmetry of the molecule.10 One of the two lone pairs is in a pure p-type orbital, with its electron density perpendicular to the H–O–H framework.11 The other lone pair is in an approximately sp0.8 orbital that is in the same plane as the H–O–H bonding.11 Photoelectron spectra confirm the presence of two different energies for the nonbonded electrons.12
Non-inclusion of d orbitals in main group compounds
In 1990, Magnusson published a seminal work definitively excluding the role of d-orbital hybridization in bonding in hypervalent compounds of second-row elements. This had long been a point of contention and confusion in describing these molecules using molecular orbital theory. Part of the confusion here originates from the fact that one must include d-functions in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result), and the contribution of the d-function to the molecular wavefunction is large. These facts were historically interpreted to mean that d-orbitals must be involved in bonding. However, Magnusson concludes in his work that d-orbital involvement is not implicated in hypervalency.13
Non-inclusion of p orbitals in transition metal complexes
Similarly, p orbitals have long been thought to be utilized by transition metal centers in bonding with ligands, hence the 18-electron description; however, recent molecular orbital calculations have found that such p orbital participation in bonding is insignificant,[14][15] even though the contribution of the p-function to the molecular wavefunction is calculated to be somewhat larger than that of the d-function in main group compounds.
Hybridization theory vs. Molecular Orbital theory
Hybridisation theory is an integral part of organic chemistry and is in general discussed together with molecular orbital theory in advanced organic chemistry textbooks, although for different reasons. One textbook notes that for drawing reaction mechanisms a classical bonding picture with two atoms sharing two electrons is sometimes needed.16 It also comments that predicting bond angles in methane with MO theory is not straightforward. Another textbook treats hybridisation theory when explaining bonding in alkenes17 and a third18 uses MO theory to explain bonding in hydrogen but hybridisation theory for methane.
Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules with a closed electron shell in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is therefore equivalent to the delocalized orbital description for explaining the ground state total energy and electron density, as well as the molecular geometry which corresponds to the minimum value of the total energy.
There is no such equivalence, however, for ionized or excited states with open electron shells. Hybrid orbitals cannot therefore be used to interpret photoelectron spectra, which measure the energies of ionized states, identified with delocalized orbital energies using Koopmans' theorem. Nor can they be used to interpret UV-visible spectra, which correspond to electronic transitions between delocalized orbitals. From a pedagogical perspective, the hybridisation approach tends to over-emphasize localisation of bonding electrons and does not embrace molecular symmetry as effectively as MO theory does.
See also
1. ^ Gillespie, R.J. (2004), "Teaching molecular geometry with the VSEPR model", Journal of Chemical Education 81 (3): 298–304, Bibcode:2004JChEd..81..298G, doi:10.1021/ed081p298
2. ^ Pauling, L. (1931), "The nature of the chemical bond. Application of results obtained from the quantum mechanics and from a theory of paramagnetic susceptibility to the structure of molecules", Journal of the American Chemical Society 53 (4): 1367–1400, doi:10.1021/ja01355a027
4. ^ Weinhold, Frank; Landis, Clark R. (2005). Valency and bonding: A Natural Bond Orbital Donor-Acceptor Perspective. Cambridge: Cambridge University Press. pp. 381–383. ISBN 978-0-521-83128-4.
6. ^ a b King, R. Bruce (2000). "Atomic orbitals, symmetry, and coordination polyhedra". Coordination Chemistry Reviews 197: 141–168.
7. ^ a b Weinhold, Frank. "Rabbit Ears Hybrids, VSEPR Sterics, and Other Orbital Absurdities". University of Wisconsin. Retrieved 2012-11-11.
8. ^ David L. Cooper , Terry P. Cunningham , Joseph Gerratt , Peter B. Karadakov , Mario Raimondi (1994). "Chemical Bonding to Hypercoordinate Second-Row Atoms: d Orbital Participation versus Democracy". Journal of the American Chemical Society 116 (10): 4414–4426. doi:10.1021/ja00089a033.
9. ^ Petrucci R.H., Harwood W.S. and Herring F.G. "General Chemistry. Principles and Modern Applications" (Prentice-Hall 8th edn 2002) p. 441
10. ^ Levine I.N. “Quantum chemistry” (4th edn, Prentice-Hall) p. 470–2
11. ^ a b Laing, Michael J. Chem. Educ. (1987) 64, 124–128 "No rabbit ears on water. The structure of the water molecule: What should we tell the students?"
12. ^ Levine p. 475
13. ^ E. Magnusson. Hypercoordinate molecules of second-row elements: d functions or d orbitals? J. Am. Chem. Soc. 1990, 112, 7940-7951. doi:10.1021/ja00178a014
14. ^ C. R. Landis, F. Weinhold (2007). "Valence and extra-valence orbitals in main group and transition metal bonding". Journal of Computational Chemistry 28 (1): 198–203. doi:10.1002/jcc.20492.
15. ^ O’Donnell, Mark (2012). "Investigating P-Orbital Character In Transition Metal-to-Ligand Bonding". Brunswick, ME: Bowdoin College. Retrieved 2012-09-16.
2ce3495d19cfae05 | Cells 2014, 3(1), 1–35; doi:10.3390/cells3010001. Article: Macroscopic Quantum-Type Potentials in Theoretical Systems Biology. Laurent Nottale, CNRS, LUTH, Paris Observatory and Paris-Diderot University, Meudon Cedex 92195, France; E-Mail: laurent.nottale@obspm.fr; Tel.: +33-145-077-403. Received: 27 July 2013 / Revised: 18 November 2013 / Accepted: 28 November 2013 / Published: 30 December 2013. © 2014 by the author; licensee MDPI, Basel, Switzerland.
Keywords: systems biology; relativity; fractals
The theory of scale relativity and fractal space-time accounts for a possibly nondifferentiable geometry of the space-time continuum, based on an extension of the principle of relativity to scale transformations of the reference system. Its framework was revealed to be particularly well adapted to a new theoretical approach of systems biology [1,2,3].
This theory was initially built with the goal of re-founding quantum mechanics on prime principles [4,5,6]. The success of this enterprise [7,8] has been completed by obtaining new results: in particular, a generalization of standard quantum mechanics at high energy to new forms of scale laws [9], and the discovery of the possibility of macroscopic quantum-type behavior under certain conditions [10], which may well be achieved in living systems.
This new “macroquantum” mechanics (or “mesoquantum” at, e.g., the cell scale) no longer rests on the microscopic Planck constant ℏ. The parameter, which replaces ℏ is specific to the system under consideration, emerges from self-organization of this system and can now be macroscopic or mesoscopic. This theory is specifically adapted to the description of multi-scale systems capable of spontaneous self-organization and structuration. Two privileged domains of applications are, therefore, astrophysics [6,8,10,11,12,13] and biophysics [1,2,8,14,15].
In this contribution dedicated to applications in biology, after a short reminder of the theory and of its methods and mathematical tools, we develop some aspects which may be relevant to its explicit use for effective biophysical problems. A special emphasis is placed on the concept of macroquantum potential energy. Scale relativity methods are relevant because they provide new mathematical tools to deal with scale-dependent fractal systems, like equations in scale space and scale-dependent derivatives in physical space. This approach is also very appropriate for the study of biological systems because it links micro-scale fractal structures with organized form at the level of an organism.
For more information the interested reader may consult the two detailed papers [1,2] and references therein.
Brief Reminder of the Theory
The theory of scale relativity consists of introducing, in an explicit way, the scale of measurement (or of observation) ε in the (bio-)physical description. These scale variables can be identified, in a theoretical framework, to the differential elements ε = dX, and, in an experimental or observational framework, to the resolution of the measurement apparatus.
The coordinates can now be explicit functions of these variables, X = X(dX) (we omit the indices for simplicity of writing, but the coordinates are in general vectors while the resolution variables are tensors [8], Chapter 3.6). In case of divergence of these functions toward small scales, they are fractal coordinates. The various quantities which describe the system under consideration become themselves fractal functions, F = F[X(dX), dX]. In the simplified case when the fractality of the system is but a consequence of that of space, there is no proper dependence of F in function of dX, and we have merely F = F[X(dX)].
The description of such an explicitly scale-dependent system needs three levels instead of two. Usually, one makes a transformation of coordinates X → X + dX, then one looks for the effect of this infinitesimal transformation on the system properties, F → F + dF. This leads to writing differential equations in terms of space-time coordinates.
However, in the new situation, since the coordinates are now scale dependent, one should first state the laws of scale transformation, ε → ε′, then their consequences on the coordinates, X(ε) → X′(ε′), and finally on the various (bio-)physical quantities F[X(ε)] → F′[X′(ε′)]. One of the main methods of the scale relativity theory consists of describing these scale transformations using differential equations playing in scale space (i.e., the space of the scale variables {ε}). In other words, one considers infinitesimal scale transformations, ln(ε/λ) → ln(ε/λ) + dln(ε/λ), rather than the discrete iterated transformations that have been most often used in the study of fractal objects [16,17,18].
The motion equations in scale relativity are therefore obtained in the framework of a double partial differential calculus acting both in space-time (positions and instants) and in scale space (resolutions), basing oneself on the constraints imposed by the double principle of relativity, of motion and of scale.
Laws of Scale Transformation
The simplest possible scale differential equation which determines the length of a fractal curve (i.e., a fractal coordinate) reads
$$\frac{\partial \mathcal{L}}{\partial \ln \varepsilon} = a + b\,\mathcal{L}$$
where $\partial/\partial \ln \varepsilon$ is the dilation operator [8,9]. Its solution combines a self-similar fractal power-law behavior and a scale-independent contribution:
$$\mathcal{L}(\varepsilon) = \mathcal{L}_0 \left\{ 1 + \left(\frac{\lambda}{\varepsilon}\right)^{\tau_F} \right\}$$
where λ is an integration constant and where τF = −b = DF − 1. One easily verifies that the fractal part of this expression agrees with the principle of relativity applied to scales. Indeed, under a transformation ε → ε′, the fractal part transforms as $\mathcal{L}' = \mathcal{L}\,(\varepsilon/\varepsilon')^{\tau_F}$, and therefore it depends only on the ratio between scales and not on the individual scales themselves.
This result indicates that, in a general way, fractal functions are the sum of a differentiable part and of a non-differentiable (fractal) part, and that a spontaneous transition is expected to occur between these two behaviors.
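As a quick numerical cross-check (not part of the paper; the parameter values are arbitrary), the following Python sketch verifies that ℒ(ε) = ℒ₀{1 + (λ/ε)^τF} indeed solves the scale differential equation above, with b = −τF and a = τF ℒ₀:

```python
import numpy as np

L0, lam, tau_F = 1.0, 1.0, 0.5           # arbitrary illustrative values
a, b = tau_F * L0, -tau_F                # coefficients expected from the solution

def L(eps):
    return L0 * (1.0 + (lam / eps) ** tau_F)

eps = np.logspace(-3, 2, 400)
x = np.log(eps)                          # the equation is written in terms of ln(eps)
dL_dlneps = np.gradient(L(eps), x)       # numerical dL/d ln(eps)

residual = dL_dlneps - (a + b * L(eps))
print("max residual:", np.max(np.abs(residual)))   # ~0 up to finite-difference error
```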
On the basis of this elementary solution, generalized scale laws can be naturally obtained by now considering second order differential equations in scale space. This is reminiscent of the jump from the law of inertial motion, dX/dt = V = cst to the fundamental law of dynamics d2X/dt2 = F as concerns motion. The same evolution can be suggested for scale laws: one can jump from scale invariance—possibly broken beyond some transition scale—described by first order differential scale equations, to a “scale dynamics”, involving “scale forces” and second order differential equations.
Many of these generalizations may be relevant in biology, in particular:
log-periodic corrections to power laws: $\mathcal{L}(\varepsilon) = a\,\varepsilon^{\nu}\,[1 + b\cos(\omega \ln \varepsilon)]$, which is a solution of a second-order differential wave equation in scales.
law of “scale dynamics” involving a constant “scale acceleration”: $\tau_F = \frac{1}{G}\,\ln\left(\frac{\lambda_0}{\varepsilon}\right), \qquad \ln\left(\frac{\mathcal{L}}{\mathcal{L}_0}\right) = \frac{1}{2G}\,\ln^2\left(\frac{\lambda_0}{\varepsilon}\right)$. This law may be the manifestation of a constant “scale force”, which describes the difference with the free self-similar case (in analogy with Newton's dynamics of motion). In this case the fractal dimension is no longer constant, but varies in a linear way in terms of the logarithm of resolution. Many manifestations of such a behavior have been identified in human and physical geography [19,20].
law of “scale dynamics” involving a scale harmonic oscillator: $\ln\frac{\mathcal{L}}{\mathcal{L}_0} = \tau_0\,\sqrt{\ln^2(\lambda_0/\varepsilon) - \ln^2(\lambda_0/\lambda_1)}$. For ε ≪ λ0 it gives the standard scale-invariant case $\mathcal{L} = \mathcal{L}_0\,(\lambda_0/\varepsilon)^{\tau_0}$, i.e., constant fractal dimension DF = 1 + τ0. But its intermediate-scale behavior is particularly interesting, since, owing to the form of the mathematical solution, resolutions larger than a scale λ1 are no longer possible. This new kind of transition therefore separates small scales from large scales, i.e., an “interior” (scales smaller than λ1) from an “exterior” (scales larger than λ1). It is characterized by an effective fractal dimension that becomes formally infinite. This behavior may prove to be particularly interesting for applications to biology, as we shall see in Section 6.
laws of special scale relativity [9]: $\ln\frac{\mathcal{L}(\varepsilon)}{\mathcal{L}_0} = \frac{\tau_0\,\ln(\lambda_0/\varepsilon)}{\sqrt{1 - \ln^2(\lambda_0/\varepsilon)/\ln^2(\lambda_0/\lambda_H)}}, \qquad \tau_F(\varepsilon) = \frac{\tau_0}{\sqrt{1 - \ln^2(\lambda_0/\varepsilon)/\ln^2(\lambda_0/\lambda_H)}}$. This case may not be fully relevant in biology, but we recall it here because it is one of the most profound manifestations of scale relativity. Here the length (i.e., the fractal coordinate) and the ‘djinn’ (variable fractal dimension minus topological dimension) τF = DF − 1 have become the components of a vector in scale space. In this new law of scale transformation, a limiting scale appears, λH, which is impassable and invariant under dilations and contractions, independently of the reference scale λ0. We have identified this invariant scale to the Planck length $l_P = \sqrt{\hbar G/c^3}$ toward small scales, and to the cosmic length $\mathbb{L} = 1/\sqrt{\Lambda}$ (where Λ is the cosmological constant) toward large scales [6,8,9].
Many other scale laws can be constructed as expressions of Euler-Lagrange equations in scale space, which give the general form expected for these laws [8], Chapter 4.
Laws of Motion
The laws of motion in scale relativity are obtained by writing the fundamental equation of dynamics (which is equivalent to a geodesic equation in the absence of an exterior field) in a fractal space. The non-differentiability and the fractality of coordinates implies at least three consequences [6,8]:
The number of possible paths is infinite. The description therefore naturally becomes non-deterministic and probabilistic. These virtual paths are identified with the geodesics of the fractal space. The ensemble of these paths constitutes a fluid of geodesics, which is therefore characterized by a velocity field.
Each of these paths is itself fractal. The velocity field is therefore a fractal function, explicitly dependent on resolutions and divergent when the scale interval tends to zero (this divergence is the manifestation of non-differentiability).
Moreover, the non-differentiability also implies a two-valuedness of this fractal function, (V+, V). Indeed, two definitions of the velocity field now exist, which are no longer invariant under a transformation |dt| → −|dt| in the non-differentiable case.
These three properties of motion in a fractal space lead to describing the geodesic velocity field in terms of a complex fractal function $\mathcal{V} = (V_+ + V_-)/2 - i\,(V_+ - V_-)/2$. The (+) and (−) velocity fields can themselves be decomposed in terms of a differentiable part υ± and of a fractal (divergent) fluctuation of zero mean w±, i.e., $V_\pm = v_\pm + w_\pm$, and therefore the same is true of the full complex velocity field, $\mathcal{V} = V(x, y, z, t) + W(x, y, z, t, dt)$.
Jumping to elementary displacements along these geodesics, this reads $dX_\pm = d_\pm x + d\xi_\pm$, with (in the case of a critical fractal dimension DF = 2 for the geodesics)
$$d_\pm x = v_\pm\, dt, \qquad d\xi_\pm = \zeta_\pm\,\sqrt{2\mathcal{D}}\,|dt|^{1/2}$$
This case is particularly relevant since it corresponds to a Markov-like situation of loss of information from one point to the following, without correlation nor anti-correlation. Here ζ± represents a dimensionless stochastic variable such that ⟨ζ±⟩ = 0 and ⟨ζ±²⟩ = 1. The parameter $\mathcal{D}$ characterizes the amplitude of the fractal fluctuations.
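A minimal stochastic sketch of these elementary displacements (illustrative only; the drift υ, the amplitude 𝒟 and the time step are arbitrary choices) draws dX = υ dt + ζ √(2𝒟) |dt|^{1/2} and checks that the fractal fluctuation dominates the mean displacement at small dt, as expected for geodesics of fractal dimension 2:

```python
import numpy as np

rng = np.random.default_rng(0)
D, v, dt, n = 1.0, 0.5, 1e-4, 100_000     # illustrative parameters

zeta = rng.standard_normal(n)             # <zeta> = 0, <zeta^2> = 1
dxi = zeta * np.sqrt(2 * D) * abs(dt) ** 0.5
dX = v * dt + dxi                         # elementary displacement along a geodesic
X = np.cumsum(dX)                         # one realisation of a fractal path

print("drift per step  :", v * dt)
print("rms fluctuation :", dxi.std())                       # ~ sqrt(2 D dt) >> v dt
print("<dX^2>/(2 D dt) :", (dX ** 2).mean() / (2 * D * dt)) # ~ 1
```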
These various effects can be combined under the construction of a total derivative operator [6]:
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + \mathcal{V}\cdot\nabla - i\mathcal{D}\,\Delta$$
The fundamental equation of dynamics becomes, in terms of this operator,
$$m\,\frac{\hat{d}}{dt}\mathcal{V} = -\nabla\phi$$
In the absence of an exterior field ϕ, this is a geodesic equation (i.e., a free inertial Galilean-type equation).
The next step consists of making a change of variable in which one connects the velocity field $\mathcal{V} = V - iU$ to a function ψ according to the relation
$$\mathcal{V} = -i\,\frac{S_0}{m}\,\nabla\ln\psi$$
The parameter S0 is a constant for the system considered (it identifies to the Planck constant ℏ in standard quantum mechanics). Thanks to this change of variable, the equation of motion can be integrated under the form of a Schrödinger equation [6,8] generalized to a constant different from ℏ,
$$\mathcal{D}^2\,\Delta\psi + i\mathcal{D}\,\frac{\partial\psi}{\partial t} - \frac{\phi}{2m}\,\psi = 0$$
where the two parameters introduced above, S0 and $\mathcal{D}$, are linked by the relation $S_0 = 2m\mathcal{D}$. In the case of standard quantum mechanics, S0 = ℏ, so that $\mathcal{D}$ is a generalization of the Compton length (up to the constant c) and Equation (13) is a generalization of the Compton relation
$$\lambda_C = \frac{2\mathcal{D}}{c} = \frac{\hbar}{mc}$$
We obtain the same result by using the full velocity field including the fractal fluctuations of zero mean [8]. This implies the possible existence of fractal solutions for quantum mechanical equations [21,22].
By setting finally $\psi = \sqrt{P}\times e^{i\theta}$, with $V = 2\mathcal{D}\,\nabla\theta$, one can show (see [7,8] and next section) that P = |ψ|² gives the number density of virtual geodesics. This function becomes naturally a density of probability, or a density of matter or radiation, according to the various conditions of an actual experiment (one particle, many particles or a radiation flow). The function ψ, being solution of the Schrödinger equation and subjected to the Born postulate and to the Compton relation, owns therefore most of the properties of a wave function.
Conversely, the density ρ and the velocity field V of a fluid in potential motion can be combined in terms of a complex function $\psi = \sqrt{\rho}\times e^{i\theta}$, which may become a wave function solution of a Schrödinger equation under some conditions, in particular in the presence of a quantum-type potential (see next section).
Multiple Representations
After this brief summary of the theory (see more details in [8]), let us now consider some of its aspects that may be particularly relevant to applications in biology. One of them is the multiplicity of equivalent representations of the same equations. Usually, classical deterministic equations, quantum equations, stochastic equations, fluid mechanics equations, etc. correspond to different systems and even to different physical laws. But in the scale relativity framework, they are unified as being different representations of the same fundamental equation (the geodesic equation of relativity), subjected to various changes of variable. This is a particularly useful tool in biophysics, which makes often use of diffusion equations of the Fokker-Planck type or of fluid mechanics equations.
Geodesic Representation
The first representation, which can be considered as the root representation, is the geodesic one. The two-valuedness of the velocity field is expressed in this case in terms of the complex velocity field $\mathcal{V} = V - iU$. It implements what makes the essence of the principle of relativity, i.e., the equation of motion must express the fact that any motion should disappear in the proper system of coordinates:
$$\mathcal{V} = 0$$
By deriving this equation with respect to time, it takes the form of a free inertial equation devoid of any force:
$$\frac{\hat{d}}{dt}\mathcal{V} = 0$$
where the “covariant” derivative operator $\hat{d}/dt$ includes the terms which account for the effects of the geometry of space-(time). In the case of a fractal space, it reads, as we have seen,
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + \mathcal{V}\cdot\nabla - i\mathcal{D}\,\Delta$$
Quantum-Type Representation
We have recalled in the previous section how a wave function ψ can be introduced from the velocity field of geodesics:
$$\mathcal{V} = -2i\mathcal{D}\,\nabla\ln\psi$$
This means that the doubling of the velocity field issued from non-differentiability is expressed in this case in terms of the modulus and the phase of this wave function. This allows integration of the equation of motion in the form of a Schrödinger equation,
$$\mathcal{D}^2\,\Delta\psi + i\mathcal{D}\,\frac{\partial\psi}{\partial t} - \frac{\phi}{2m}\,\psi = 0$$
By making explicit the modulus and the phase of the wave function, $\psi = \sqrt{P}\times e^{i\theta}$, where the phase is related to the classical velocity field by the relation $V = 2\mathcal{D}\,\nabla\theta$, one can give this equation the form of hydrodynamics equations including a quantum potential. Moreover, it has been recently shown that this transformation is reversible, i.e., by adding a quantum-like potential energy to a classical fluid, it becomes described by a Schrödinger equation and therefore acquires some quantum-type properties [8,23].
Fluid Representation with Macroquantum Potential
It is also possible, as we shall now see, to go directly from the geodesic representation to the fluid representation without writing the Schrödinger equation.
To this purpose, let us express the complex velocity field in terms of the classical (real) velocity field V and of the number density of geodesics P_N, which is equivalent, as we have seen above, to a probability density P:
$$\mathcal{V} = V - i\mathcal{D}\,\nabla\ln P$$
The quantum covariant derivative operator thus reads
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + V\cdot\nabla - i\mathcal{D}\,(\nabla\ln P\cdot\nabla + \Delta)$$
The fundamental equation of dynamics becomes (introducing also an exterior scalar potential ϕ):
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla - i\mathcal{D}\,(\nabla\ln P\cdot\nabla + \Delta)\right)(V - i\mathcal{D}\,\nabla\ln P) = -\frac{\nabla\phi}{m}$$
The imaginary part of this equation,
$$\mathcal{D}\left\{(\nabla\ln P\cdot\nabla + \Delta)V + \left(\frac{\partial}{\partial t} + V\cdot\nabla\right)\nabla\ln P\right\} = 0$$
takes, after some calculations, the following form
$$\nabla\left\{\frac{1}{P}\left(\frac{\partial P}{\partial t} + \mathrm{div}(PV)\right)\right\} = 0$$
and it can finally be integrated in terms of a continuity equation
$$\frac{\partial P}{\partial t} + \mathrm{div}(PV) = 0$$
The real part,
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = -\frac{\nabla\phi}{m} + \mathcal{D}^2\,(\nabla\ln P\cdot\nabla + \Delta)\,\nabla\ln P$$
takes the form of an Euler equation,
$$m\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = -\nabla\phi + 2m\mathcal{D}^2\,\nabla\left(\frac{\Delta\sqrt{P}}{\sqrt{P}}\right)$$
and it therefore describes a fluid subjected to an additional quantum-type potential
$$Q = -2m\mathcal{D}^2\,\frac{\Delta\sqrt{P}}{\sqrt{P}}$$
It is remarkable that we have obtained this result directly, without passing through a quantum-type representation using a wave function nor through a Schrödinger equation.
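To make the geometric potential Q concrete, the short sketch below (illustrative only; m = 𝒟 = 1 and the Gaussian density are arbitrary choices) evaluates Q = −2m𝒟² Δ√P/√P by finite differences in one dimension and compares it with the closed form obtained for a Gaussian, which is an inverted parabola, i.e., a harmonic-like well:

```python
import numpy as np

m, D, sigma = 1.0, 1.0, 1.0                   # illustrative parameters
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]

P = np.exp(-x**2 / (2 * sigma**2))            # unnormalised Gaussian density
A = np.sqrt(P)

lap_A = np.gradient(np.gradient(A, dx), dx)   # finite-difference Laplacian of sqrt(P)
Q_num = -2 * m * D**2 * lap_A / A

# Closed form for a Gaussian: Q = m D^2 / sigma^2 - m D^2 x^2 / (2 sigma^4)
Q_ana = m * D**2 / sigma**2 - m * D**2 * x**2 / (2 * sigma**4)

interior = slice(50, -50)                     # avoid one-sided derivatives at the edges
print("max deviation:", np.max(np.abs(Q_num[interior] - Q_ana[interior])))  # small
```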
The additional “fractal” potential is obtained here as a mere manifestation of the fractal geometry of space, in analogy with Newton's potential emerging as a manifestation of the curved geometry of space-time in Einstein's relativistic theory of gravitation. We have suggested ([8] and references therein) that this geometric energy could contribute to the effects which have been attributed in astrophysics to a missing “dark matter” (knowing that all attempts to directly observe this missing mass have so far failed). Another suggestion, relevant to biology, is that such a potential energy could play an important role in the self-organization and in the morphogenesis of living systems [2,24].
Coupled Two-Fluids
Another equivalent possible representation consists of separating the real and imaginary parts of the complex velocity field, $\mathcal{V} = V - iU$. One obtains in this case a system of equations that describe the velocity fields of two fluids strongly coupled together,
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = (U\cdot\nabla + \mathcal{D}\,\Delta)\,U - \nabla\left(\frac{\phi}{m}\right)$$
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)U = -(U\cdot\nabla + \mathcal{D}\,\Delta)\,V$$
This representation may be useful in, e.g., numerical simulations of scale relativity/quantum processes [25].
Diffusion-Type Representation
The fundamental two-valuedness which is a consequence of non-differentiability has been initially described in terms of two mean velocity fields υ₊ and υ₋, which transform one into the other by the reflexion |dt| ↔ −|dt|. It is therefore possible to write the equations of motion directly in terms of these two velocity fields. The representation obtained in this way implements the diffusive character of a fractal space and is therefore particularly interesting for biophysical applications. Indeed, one obtains the standard Fokker-Planck equation for the velocity υ₊, as for a classical stochastic process:
$$\frac{\partial P}{\partial t} + \mathrm{div}(P v_+) = \mathcal{D}\,\Delta P$$
where the parameter $\mathcal{D}$ plays the role of a diffusion coefficient. On the contrary, the equation obtained for the velocity field υ₋ does not correspond to any classical process:
$$\frac{\partial P}{\partial t} + \mathrm{div}(P v_-) = -\mathcal{D}\,\Delta P$$
This equation is derived from the geodesic equation on the basis of non-differentiability, but it cannot be set as a founding equation in the framework of a standard diffusion process as was proposed by Nelson [26], since it becomes self-contradictory with the backward Kolmogorov equation generated by such a classical process [10,27,28] and [8] (p. 384).
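For readers who want to see the υ₊ representation at work numerically, here is a self-contained sketch (not from the paper; the drift field, the value of 𝒟 and the grid are arbitrary choices) that integrates the forward Fokker-Planck equation on a 1D grid and compares the result with a histogram of the corresponding Langevin particles dx = υ₊ dt + √(2𝒟) dW:

```python
import numpy as np

D, dt, steps = 0.1, 1e-3, 2000
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]

def v_plus(x):                        # arbitrary confining drift field
    return -x

# Grid solution of dP/dt + d(P v_plus)/dx = D d^2P/dx^2 (explicit scheme).
P = np.exp(-x**2); P /= P.sum() * dx
for _ in range(steps):
    flux = P * v_plus(x) - D * np.gradient(P, dx)
    P = P - dt * np.gradient(flux, dx)

# Langevin particles with the same drift and diffusion coefficient.
rng = np.random.default_rng(1)
X = rng.normal(0.0, np.sqrt(0.5), 200_000)
for _ in range(steps):
    X += v_plus(X) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(X.size)

hist, _ = np.histogram(X, bins=x.size - 1, range=(x[0], x[-1]), density=True)
# The two densities agree up to discretisation and sampling noise.
print("max |P_grid - P_particles|:", np.max(np.abs(P[:-1] - hist)))
```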
A New Form of Quantum-Type Potential
However, one may remark that the previous representation is not fully coherent, since it involves three quantities P, υ₊ and υ₋ instead of the two expected from the velocity doubling. Therefore it should be possible to obtain a system of equations involving only the probability density P and one of the velocity fields, here υ₊. To this purpose, one remarks that υ₋ is given in terms of these two quantities by the relation:
$$v_- = v_+ - 2\mathcal{D}\,\nabla\ln P$$
We also recall that
$$V = v_+ - \mathcal{D}\,\nabla\ln P$$
The energy equation now reads
$$E = \frac{1}{2}\,m V^2 + Q + \phi = \frac{1}{2}\,m\,(v_+ - \mathcal{D}\,\nabla\ln P)^2 + Q + \phi$$
where the macroquantum potential can be written
$$Q = -2m\mathcal{D}^2\,\frac{\Delta\sqrt{P}}{\sqrt{P}} = -m\mathcal{D}^2\left\{\Delta\ln P + \frac{1}{2}\,(\nabla\ln P)^2\right\}$$
One of the terms of this “fractal potential” is therefore compensated while another term appears, so that we obtain:
$$E = \frac{1}{2}\,m v_+^2 + \phi - m\mathcal{D}\,v_+\cdot\nabla\ln P - m\mathcal{D}^2\,\Delta\ln P$$
We finally obtain a new representation in terms of a Fokker-Planck equation, which contains the diffusive term $\mathcal{D}\,\Delta P$ in addition to the continuity equation obtained in the case of the fluid representation (V, P), and an energy equation which includes a new form of quantum potential:
$$\frac{\partial P}{\partial t} + \mathrm{div}(P v_+) = \mathcal{D}\,\Delta P, \qquad E = \frac{1}{2}\,m v_+^2 + \phi + Q_+$$
where the new quantum-type potential reads
$$Q_+ = -m\mathcal{D}\,(v_+\cdot\nabla\ln P + \mathcal{D}\,\Delta\ln P)$$
It now depends not only on the probability density P, but also on the velocity field υ₊.
This derivation is once again reversible. This means that a classical diffusive system described by a standard Fokker-Planck equation which would be subjected to such a generalized quantum-type potential would be spontaneously transformed into a quantum-like system described by a Schrödinger Equation (19) acting on a wave function $\psi = \sqrt{P}\times e^{i\theta}$ where $V = 2\mathcal{D}\,\nabla\theta$. Thanks to Equation (35), this wave function is defined in terms of P and υ₊ as
$$\psi = P^{(1-i)/2}\times e^{i\theta_+}$$
where $v_+ = 2\mathcal{D}\,\nabla\theta_+$.
Such a system, although it is initially diffusive, would therefore acquire some quantum-type properties, but evidently not all of them: the behaviors of coherence, inseparability, indistinguishability or entanglement are specific of a combination of quantum laws and elementarity [29] and cannot be recovered in such a context.
This is nevertheless a remarkable result, which means that a partial reversal of diffusion and a transformation of a classical diffusive system into a quantum-type self-organized one should be possible by applying a quantum-like force to this system. This is possible in an actual experiment consisting of a retro-active loop involving continuous measurements, not only of the density [23] but also of the velocity field υ+, followed by a real time application on the system of a classical force FQ+ = −∇Q+ simulating the new macroquantum force [30].
One may also wonder whether living systems, which already work in terms of such a feedback loop (involving sensors, then cognitive processes, then actuators) could have naturally included such kinds of quantum-like potentials in their operation through the selection/evolution process, simply because it provides an enormous evolutionary advantage due to its self-organization and morphogenesis negentropic capabilities [2] and ([8], Chapter 14).
Quantum Potential Reversal
One of the recently obtained results which may be particularly relevant to the understanding of living systems concerns the reversal of the quantum-type potential. What happens when the potential energy keeps exactly the same form, as given by $\Delta\sqrt{P}/\sqrt{P}$ for a given distribution P(x, y, z), while its sign is reversed? In other words, to what kind of process does the equation
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = -\frac{\nabla\phi}{m} - 2\mathcal{D}^2\,\nabla\left(\frac{\Delta\sqrt{P}}{\sqrt{P}}\right)$$
correspond?
We have shown [2,8] that such an Euler equation, when it is combined with a continuity equation, can no longer be integrated under the form of a generalized Schrödinger equation. This process is therefore no longer self-organizing. On the contrary, this is a classical diffusive process, characterized by an entropy increase proportional to time.
Indeed, let us start from a Fokker-Planck equation
$$\frac{\partial P}{\partial t} + \mathrm{div}(P v) = D\,\Delta P$$
which describes a classical diffusion process with diffusion coefficient D. Then make the change of variable
$$V = v - D\,\nabla\ln P$$
One finds after some calculations that V and P are now solutions of a continuity equation
$$\frac{\partial P}{\partial t} + \mathrm{div}(P V) = 0$$
and of an Euler equation which reads
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = -2D^2\,\nabla\left(\frac{\Delta\sqrt{P}}{\sqrt{P}}\right)$$
In other words, we have obtained a hydrodynamical description of a standard diffusion process in terms of a “diffusion potential” which is exactly the reverse of the macroquantum potential.
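This sign reversal can be checked directly on the textbook example of a spreading Gaussian solution of the heat equation ∂P/∂t = DΔP (an illustrative sketch, not from the paper): with υ = 0, the field V = −D∇ln P must satisfy the Euler equation above, with the "diffusion potential" carrying the reversed sign.

```python
import numpy as np

D, s = 0.2, 1.0                       # diffusion coefficient; variance s = s0 + 2 D t
x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]

P = np.exp(-x**2 / (2 * s)) / np.sqrt(2 * np.pi * s)    # heat-kernel solution at "time" s
V = -D * np.gradient(np.log(P), dx)                     # = D x / s  (since v = 0)

# Left-hand side: dV/dt + V dV/dx, with dV/dt = -2 D^2 x / s^2 (because ds/dt = 2 D).
lhs = -2 * D**2 * x / s**2 + V * np.gradient(V, dx)

# Right-hand side: -2 D^2 d/dx( Laplacian(sqrt(P)) / sqrt(P) ), the reversed potential term.
A = np.sqrt(P)
rhs = -2 * D**2 * np.gradient(np.gradient(np.gradient(A, dx), dx) / A, dx)

interior = slice(200, -200)
print("max residual:", np.max(np.abs(lhs[interior] - rhs[interior])))  # ~0 up to finite differences
```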
We have suggested that this behavior may be relevant for the understanding of cancer [2,8] (see also [31] about the relationship between fractal geometry and tumors), since a mere change of sign of the additional potential leads to dramatic consequences: the self-organizing, morphogenetic and structuring character of the system is instantaneously changed to a diffusive, anti-structuring disorganization.
Quantum Potentials in High-Temperature Superconductivity
Ginzburg-Landau Non-Linear Schrödinger Equation
The phenomenon of superconductivity is one of the most fascinating of physics. It lies at the heart of a large part of modern physics. Indeed, besides its proper interest for the understanding of condensed matter, it has been used as model for the construction of the electroweak theory through the Higgs field and of other theories in particle physics and in other sciences.
Moreover, superconductivity (SC) has led physicists to deep insights about the nature of matter. It has shown that the ancient view of matter as something “solid”, in other words “material”, was incorrect. The question: “is it possible to walk through walls” is now asked in a different way. Nowadays we know that it is not a property of matter by itself which provides it qualities such as solidity or ability to be crossed, but its interactions.
A first relation of SC with the scale relativity approach can be found in its phenomenological Ginzburg-Landau equation. Indeed, one can recover such a non-linear Schrödinger equation simply by adding a quantum-like potential energy to a standard fluid including a pressure term [23].
Consider indeed an Euler equation with a pressure term and a quantum potential term:
$$\left(\frac{\partial}{\partial t} + V\cdot\nabla\right)V = -\nabla\phi - \frac{\nabla p}{\rho} + 2\mathcal{D}^2\,\nabla\left(\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}}\right)$$
When ∇p/ρ = ∇w is itself a gradient, which is the case of an isentropic fluid and, more generally, of every case when there is a state equation which links p and ρ, its combination with the continuity equation can still be integrated in terms of a Schrödinger-type equation [10],
$$\mathcal{D}^2\,\Delta\psi + i\mathcal{D}\,\frac{\partial\psi}{\partial t} - \frac{\phi + w}{2}\,\psi = 0$$
In the sound approximation, the link between pressure and density writes $p - p_0 = c_s^2\,(\rho - \rho_0)$, where c_s is the sound speed in the fluid, so that $\nabla p/\rho = c_s^2\,\nabla\ln\rho$. Moreover, when $\rho - \rho_0 \ll \rho_0$, one may use the additional approximation $c_s^2\,\nabla\ln\rho \approx (c_s^2/\rho_0)\,\nabla\rho$, and the equation obtained takes the form of the Ginzburg-Landau equation of superconductivity [32],
$$\mathcal{D}^2\,\Delta\psi + i\mathcal{D}\,\frac{\partial\psi}{\partial t} - \beta\,|\psi|^2\,\psi = \frac{1}{2}\,\phi\,\psi$$
with $\beta = c_s^2/2\rho_0$. In the highly compressible case, the dominant pressure term is rather of the form $p \propto \rho^2$, so that $\nabla p/\rho \propto \nabla\rho = \nabla|\psi|^2$, and one still obtains a non-linear Schrödinger equation of the same kind [33].
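As a computational aside (a minimal sketch, not from the paper; the grid, the values of 𝒟 and β, and the harmonic external potential are arbitrary choices), the non-linear Schrödinger/Ginzburg-Landau equation above, rewritten as i𝒟 ∂ψ/∂t = −𝒟²Δψ + (β|ψ|² + φ/2)ψ, can be integrated with a standard split-step Fourier scheme:

```python
import numpy as np

Dc, beta, dt, steps = 1.0, 0.5, 1e-3, 1000     # illustrative parameters
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

phi = 0.5 * x**2                               # arbitrary external potential
psi = np.exp(-x**2).astype(complex)            # arbitrary initial condition

# Kinetic step in Fourier space: i Dc dpsi/dt = Dc^2 k^2 psi  =>  multiply by exp(-i Dc k^2 dt).
kinetic = np.exp(-1j * Dc * k**2 * dt)

for _ in range(steps):
    # Nonlinear + potential step in real space.
    psi *= np.exp(-1j * (beta * np.abs(psi)**2 + phi / 2) * dt / Dc)
    # Kinetic step in Fourier space.
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))

# Both sub-steps are unitary, so the norm of psi is conserved.
print("norm (conserved):", np.sum(np.abs(psi)**2) * (L / N))
```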
The intervention of pressure is highly probable in living systems, so that such an equation is expected to be relevant in theoretical systems biology. Laboratory experiments aiming at implementing this transformation of a classical fluid into a macroscopic quantum-type fluid are presently under development [30,34].
A Quantum Potential as Origin of Cooper Pairs in HTS?
Another important question concerning SC is that of the microscopic theory which gives rise to such a macroscopic phenomenological behavior.
In superconducting materials, the binding of electrons in Cooper pairs transforms the electronic gas from a fermionic to a bosonic quantum fluid. The interaction of this fluid with the atoms of the SC material becomes so small that the conducting electrons no longer "see" the material. The SC electrons become almost free, all resistance is abolished and one passes from simple conduction to superconduction.
In normal superconductors, the pairing of electrons is a result of their interaction with phonons (see, e.g., [35]). But since 1985, a new form of superconductivity has been discovered which has been named "high temperature superconductivity" (HTS) because the critical temperature, which was of the order of a few kelvins for normal SC, has reached up to 135 K. However, though it has been shown that HTS is still due to the formation of Cooper pairs, the origin of the force that pairs the electrons can no longer be phonons and still remains unknown. Actually, it can be proved that any attractive force between the electrons, however small it may be, would produce their Cooper pairing [36].
Therefore the problem of HTS can be traced back to that of identifying the force that links the electrons. We suggest that this force actually derives from a quantum potential.
Most HTS are copper oxide compounds in which superconductivity arises when they are doped either by extra charges or, more often, by 'holes' (positive charge carriers). Moreover, a systematic electronic inhomogeneity has been reported at the microscopic level, in particular in compounds like Bi2Sr2CaCu2O8+x [37], the local density of states (LDOS) showing 'hills' and 'valleys' of size ∼30 Angstroms, strongly correlated with the SC gap. Actually, the minima of the LDOS modulations preferentially occur at the dopant defects [38]. The regions with sharp coherence peaks, usually associated with strong superconductivity, are found to occur between the dopant defect clusters, near which the SC coherence peaks are suppressed.
Basing ourselves on these observations, we have suggested that, at least in this type of compound, the electrons can be trapped in the quantum potential well created by these electronic modulations.
Let us give here a summary of this new proposal. We denote by ψn the wave function of doping charges which have diffused from the initial site of dopant defects, and by ψs the wave function of the fraction of carriers which will be tied in Cooper pairs (only 19%–23% of the total doping induced charge joins the superfluid near optimum doping).
We set ψn = ψs + ψd, where ψd is the wave function of the fraction of charges which do not participate in the superconductivity.
The doping-induced charges constitute a quantum fluid which is expected to be the solution of a Schrödinger equation (here of standard QM, i.e., written in terms of the microscopic Planck constant ℏ)
$$\frac{\hbar^2}{2m}\,\Delta\psi_n + i\hbar\,\frac{\partial\psi_n}{\partial t} = \phi\,\psi_n$$
where ϕ is a possible external scalar potential, and where we have neglected the magnetic effects as a first step.
Let us separate the two contributions ψs and ψd in this equation. We obtain:
$$\frac{\hbar^2}{2m}\,\Delta\psi_s + i\hbar\,\frac{\partial\psi_s}{\partial t} - \phi\,\psi_s = -\left(\frac{\hbar^2}{2m}\,\Delta\psi_d + i\hbar\,\frac{\partial\psi_d}{\partial t} - \phi\,\psi_d\right)$$
We can now introduce explicitly the probability densities n and the phases θ of the wave functions $\psi_s = \sqrt{n_s}\times e^{i\theta_s}$ and $\psi_d = \sqrt{n_d}\times e^{i\theta_d}$. The velocity fields of the (s) and (d) quantum fluids are given by Vs = (ℏ/m)∇θs and Vd = (ℏ/m)∇θd. As we have seen above, a Schrödinger equation can be put into the form of fluid mechanics-like equations, its imaginary part becoming a continuity equation and the derivative of its real part becoming an Euler equation with a quantum potential. Therefore the above equation can be written as:
$$\frac{\partial V_s}{\partial t} + V_s\cdot\nabla V_s = -\frac{\nabla\phi}{m} - \frac{\nabla Q_s}{m} - \left(\frac{\partial V_d}{\partial t} + V_d\cdot\nabla V_d + \frac{\nabla Q_d}{m}\right)$$
$$\frac{\partial n_s}{\partial t} + \mathrm{div}(n_s V_s) = -\frac{\partial n_d}{\partial t} - \mathrm{div}(n_d V_d)$$
But the (d) part of the quantum fluid, which is not involved in the superconductivity, remains essentially static, so that Vd = 0 and ∂nd/∂t = 0. Therefore we obtain for the quantum fluid (s) a new system of fluid equations:
$$\frac{\partial V_s}{\partial t} + V_s\cdot\nabla V_s = -\frac{\nabla\phi}{m} - \frac{\nabla Q_s}{m} - \frac{\nabla Q_d}{m}, \qquad \frac{\partial n_s}{\partial t} + \mathrm{div}(n_s V_s) = 0$$
which can be re-integrated under the form of a Schrödinger equation
$$\frac{\hbar^2}{2m}\,\Delta\psi_s + i\hbar\,\frac{\partial\psi_s}{\partial t} - (\phi + Q_d)\,\psi_s = 0$$
It therefore describes the motion of the (s) electrons, represented by their wave function ψs, in a potential well given by the exterior potential ϕ, but also by an interior quantum potential Qd which depends only on the local fluctuations of the density nd of charges,
$$Q_d = -\frac{\hbar^2}{2m}\,\frac{\Delta\sqrt{n_d}}{\sqrt{n_d}}$$
Even if in its details this rough model is probably incomplete, we hope that this proposal, according to which the quantum potential created by the dopants provides the attractive force needed to link electrons into Cooper pairs, is globally correct, at least for some of the existing HT superconductors.
Many (up to now) poorly understood features of cuprate HTS can be explained by this model. For example, the quantum potential well involves bound states in which two electrons can be trapped with zero total spin and momentum. One can show that the optimal configuration for obtaining bound states is with 4 dopant defects (oxygen atoms), which bring 8 additional charges. One therefore expects a ratio ns/nn = 2/(8 + 2) = 0.2 at optimal doping. This is precisely the observed value [39], for which, to our knowledge, no explanation existed up to now.
The characteristic size of LDOS wells of ∼30 Angstroms is also easily recovered in this context: the optimal doping being p = 0.155 ≈ 1/6.5, the 8 to 10 charges present in the potential well correspond to a surface (8–10) × 6.5 = 52–65 = (7.2–8.1)² in units of d_CuO = 3.9 Angstroms, i.e., 28–32 Angstroms as observed experimentally.
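The back-of-the-envelope numbers quoted in this paragraph can be reproduced in a few lines (a plain arithmetic restatement of the text, nothing more):

```python
sites_per_charge = 6.5          # optimal doping p = 0.155 ~ 1/6.5
d_CuO = 3.9                     # CuO lattice unit, in Angstroms

print("n_s / n_n:", 2 / (8 + 2))            # 0.2, the observed superfluid fraction
for n_charges in (8, 10):                   # charges present in one potential well
    area = n_charges * sites_per_charge     # well area in units of d_CuO^2
    side = area ** 0.5 * d_CuO              # well size in Angstroms
    print(f"{n_charges} charges -> {area:.0f} sites -> {side:.0f} Angstroms")
```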
In this context, the high critical temperature superconductivity would be a geometric multiscale effect. In normal SC, the various elements which permit the superconductivity, Cooper pairing of electrons, formation of a quantum bosonic fluid and coherence of this fluid are simultaneous. In HTS, under the quantum potential hypothesis, these elements would be partly disconnected and related to different structures at different scales (in relation to the connectivity of the potential wells), achieving a multi-scale fractal structure [40].
If confirmed, this would be a nice application of the concept of quantum potentials [41], here in the context of standard microscopic quantum mechanics.
Scale Relativity in Non-differentiable Velocity-Space
Analogy between Turbulence and Living Systems
Living systems are well known to exhibit fractal structures from very small scales up to the organism size and even to the size of the collective entities (e.g., a forest made of trees). Therefore it is relevant to assess and quantify these properties with sophisticated models.
Some advanced fractal and multifractal models have been developed in the field of turbulence because fractals are the basic fundamental feature of chaotic fluid dynamics [42]. They have been described since the famous law of Kolmogorov, known as K41 [43]. In the atmosphere, scale laws are observed from micrometers up to thousands of kilometers. Turbulence can be described as flow of energy injected at large scale that cascades into smaller and smaller structures. This process redirects the energy into all directions and it is ultimately dissipated into heat at the smallest scale.
There is therefore a strong analogy with living systems. An investigation of turbulence versus living systems is particularly interesting as there are a number of common points:
Dissipation: both turbulent flows and living systems are dissipative.
Non-isolated: existence of source and sink of energy.
Out of equilibrium.
Existence of stationary structures. Individual “particles” enter and go out in a very complex way, while the overall structure grows (growth of living systems, development of turbulence) then remains stable on a long time scale.
Fundamentally multi-scale and multi-fractal structuring.
Injection of energy at an extreme scale with dissipation at the other (the direction of the multiplicative cascade is reversed in living systems compared to laboratory turbulence). etc.
Application of Scale Relativity to Turbulence
In a recent work, L. de Montera has suggested an original application of the scale relativity theory to the yet unsolved problem of turbulence in fluid mechanics [44]. He has remarked that the Kolmogorov scaling of velocity increments in a Lagrangian description (where one follows an element of fluid, for example thanks to a seeded micro particle [45]), δ υ | δ t | 1 / 2was exactly similar to the fractal fluctuation Equation (8) which is at the basis of the scale relativity description.
The difference is that the coordinates remain differentiable, while in this new context the velocity becomes non-differentiable, so that accelerations $a = \delta v/\delta t \propto |\delta t|^{-1/2}$ become scale-divergent. Although this power-law divergence is clearly cut off at the dissipative Kolmogorov small scale, it is nevertheless well supported by experimental data, since accelerations of up to 1500 times the acceleration of gravity have been measured in turbulent flows [45,46].
De Montera's suggestion therefore amounts to applying the scale relativity method after one additional order of differentiation of the equations. The need for such a shift had already been noted in the framework of stochastic models of turbulence [47,48].
Let us consider here some possible implications of this new, very interesting, proposal.
The necessary conditions which underlie the construction of the scale relativity covariant derivative are very clearly fulfilled for turbulence (now in velocity space):
The chaotic motion of fluid particles implies an infinity of possible paths.
Each of the paths (realizations of which are achieved by test particles of size <100 μm in a Lagrangian approach [46]) is of fractal dimension D_F = 2 in velocity space, at least in the K41 regime (Equation (59)).
The two-valuedness of acceleration is manifested in turbulence data. As remarked by Falkovich et al. [49], the usual statistical tools of description of turbulence (correlation function, second order structure function, etc.) are reversible, while turbulence, being a dissipative process, is fundamentally irreversible. The two-valuedness of the derivative is just a way to account for the symmetry breaking under the time-scale reflection δt → −δt. Among the various ways to describe this doubling [50], one of them is particularly adapted to comparison with turbulence data. It consists of remarking that the calculation of a derivative involves a Taylor expansion
$$\frac{dX}{dt} = \frac{X(t+dt) - X(t)}{dt} = \frac{\left(X(t) + X'(t)\,dt + \tfrac{1}{2}X''(t)\,dt^2 + \cdots\right) - X(t)}{dt}$$
so that one obtains
$$\frac{dX}{dt} = X'(t) + \tfrac{1}{2}X''(t)\,dt + \cdots$$
For a standard non-fractal function, the contribution $\tfrac{1}{2}X''(t)\,dt$ and all the following terms of higher order vanish when dt → 0, so that one recovers the usual result dX/dt = X′(t). But for a fractal function whose second derivative is scale divergent as X″(t) ∝ 1/dt, the second order term can no longer be neglected and must contribute to the definition of the derivative ([8], Section 3.1). Therefore one may write
$$\frac{d_+X}{dt} = X'(t) + \tfrac{1}{2}X''(t)\,|dt|, \qquad \frac{d_-X}{dt} = X'(t) - \tfrac{1}{2}X''(t)\,|dt|$$
and then
$$\frac{\hat{d}X}{dt} = \frac{d_+ + d_-}{2\,dt}X - i\,\frac{d_+ - d_-}{2\,dt}X = X'(t) - \frac{i}{2}X''(t)\,|dt|$$
Lagrangian measurements of turbulence data [51,52] confirm this expectation (see also the numerical sketch after this list). One finds that the acceleration a = v′ and its increments da = v″ dt are indeed of the same numerical order: in these data, the dispersions are respectively σ_a = 280 m/s² vs σ_da = 220 m/s². This fundamental result fully supports the acceleration two-valuedness on an experimental basis.
The dynamics is Newtonian: the equation of dynamics in velocity space is the time derivative of the Navier-Stokes equation, i.e., $da/dt = \dot{F}$. Langevin-type friction terms may occur in this equation, but they do not change the nature of the dynamics. They simply add a non-linear contribution in the final Schrödinger equation.
The range of scales is large enough for a K41 regime to be established: in von Kármán fully developed laboratory turbulence experiments, the ratio between the small dissipative scale and the large (energy injection) scale is larger than 1000, and a K41 regime is actually observed [51].
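As a purely illustrative complement to the two-valuedness condition above, the following sketch builds a synthetic Brownian-like velocity signal with the K41 scaling $dv \propto |dt|^{1/2}$; the time step, sample length and amplitude are arbitrary choices and are not values from the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Lagrangian velocity signal with K41-like scaling: dv ~ |dt|^(1/2)
# (fractal dimension 2 in velocity space). All parameters are illustrative only.
dt = 1e-3
n = 200_000
dv = rng.normal(0.0, np.sqrt(dt), n)
v = np.cumsum(dv)

# Forward and backward finite-difference accelerations at the same instants:
a_plus = (v[2:] - v[1:-1]) / dt      # (d+ v)/dt
a_minus = (v[1:-1] - v[:-2]) / dt    # (d- v)/dt

# Their difference does not vanish as dt becomes small: the derivative is two-valued.
print("sigma(a+)       =", round(a_plus.std(), 1))
print("sigma(a+ - a-)  =", round((a_plus - a_minus).std(), 1))

# The acceleration and its one-step increment are of the same numerical order,
# as reported for the Lagrangian turbulence data quoted above.
a = a_plus
da = np.diff(a)
print("sigma_a / sigma_da =", round(a.std() / da.std(), 2))
```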
The application of the scale relativity method is therefore fully supported experimentally in this case. Velocity increments dV can be decomposed into two terms, a classical differentiable one and a fractal fluctuation,
$$dV = dv + \zeta\,\sqrt{2 D_v\, dt}$$
where ⟨ζ⟩ = 0 and ⟨ζ²⟩ = 1. One recognizes here the K41 scaling in dt^{1/2}. One then introduces a complex acceleration field $\mathcal{A} = a - i\,da/2$ and a total 'covariant' derivative
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + \mathcal{A}\cdot\nabla_v - i\,D_v\,\Delta_v$$
and writes a super-dynamics equation
$$\frac{\hat{d}}{dt}\mathcal{A} = \dot{F}$$
A wave function ψ acting in velocity space can be constructed from the acceleration field, $\mathcal{A} = -2 i D_v \nabla_v \ln\psi$, and the super-dynamics equation can then be integrated under the form of a Schrödinger equation including possible non-linear terms (NLT),
$$D_v^2\,\Delta_v\psi + i\,D_v\,\frac{\partial\psi}{\partial t} = \frac{\phi}{2}\,\psi + \mathrm{NLT}$$
where ϕ is a potential (in velocity space) from which the force, or part of this force, derives.
By coming back to a fluid representation, but now in terms of the fluid of potential paths, using as variables P(v) = |ψ|² and a(v) (which derives from the phase of the wave function), this equation becomes equivalent to the combination of a Navier-Stokes-like equation written in velocity space and a continuity equation,
$$\frac{da}{dt} = \dot{F} + 2 D_v^2\, \nabla_v\!\left(\frac{\Delta_v\sqrt{P}}{\sqrt{P}}\right), \qquad \frac{\partial P}{\partial t} + \mathrm{div}_v(P\,a) = 0$$
Therefore we have recovered the same equation from which we started (the time derivative of the Navier-Stokes equation), but a new term has emerged, namely a quantum-type force which is the gradient of a quantum-type potential in velocity space. One can now re-integrate this equation, and one thus obtains the initial Navier-Stokes equation (in the incompressible case ρ = 1 and with a viscosity coefficient ν),
$$\left(\frac{\partial}{\partial t} + v\cdot\nabla\right) v = -\nabla p + \nu\,\Delta v + 2 D_v^2 \int_0^t \nabla_v\!\left(\frac{\Delta_v\sqrt{P}}{\sqrt{P}}\right) dt$$
but with an additional term which manifests the fractality of the flow in velocity space. The value of D_v is directly given, in the K41 regime, by the parameter which commands the whole process, the energy dissipation rate per unit mass ε, through $2 D_v = C_0\,\varepsilon$, where C₀ is Kolmogorov's numerical constant (whose estimates vary from 4 to 9). Concerning the two transitions, at the small (dissipative) scale and at the large (energy injection) scale, one could include them in a scale-varying D_v, but a better solution consists of keeping D_v constant and then including the transitions afterwards in a global way.
The intervention of such a missing term in developed turbulence is quite possible and is even supported by experimental data. Indeed, precise experimental measurements of one of the numerical constants which characterize the universal scaling of turbulent flows, $a_0 = \nu^{1/2}\,\varepsilon^{-3/2}\,\sigma_a^2$, have given constant values around a₀ = 6 in the developed turbulence domain R_λ ≥ 500 [46]. However, at the same time, direct numerical simulations (DNS) of the Navier-Stokes equations under the same conditions [53,54,55] have systematically given values around a₀ = 4, smaller by a factor of about 2/3.
Let us derive the scale relativity prediction of this constant. We indeed expect an additional contribution with respect to the DNS values, since these simulations use the standard Navier-Stokes equations and do not include the new quantum potential.
The considered experiments are von Kármán-type flows. The turbulence is generated in a flow of water between counter-rotating disks (with the same, opposite rotational velocities) in a cylindrical container [46]. For such experiments the Lagrangian velocity distribution is given, with a good approximation, by a Gaussian distribution [51,52] centered on v = 0 (in the laboratory reference system). We can therefore easily calculate the velocity quantum potential. We find for these specific experiments
$$Q_v = -D_v^2\,\frac{v^2 - 6\sigma_v^2}{2\sigma_v^4}$$
where σ_v² is the velocity variance. Therefore the quantum-like force reads $F_{Q_v} = -\nabla_v Q_v = (D_v^2/\sigma_v^4)\,v$, and the additional term in the Navier-Stokes equations finally reads, in a reference system whose origin is the center of the cylinder,
$$F_{Q_x} = \int_0^t F_{Q_v}\,dt = \frac{D_v^2}{\sigma_v^4}\,x$$
which is just a repulsive harmonic oscillator force. We therefore expect a new geometric contribution to the acceleration variance:
$$\sigma_a^2 = (\sigma_a)_{cl}^2 + \frac{D_v^4}{\sigma_v^8}\,\sigma_x^2$$
Now the parameter D_v = C₀ε/2 can also, in the K41 regime, be written as a function of σ_v and of the Lagrangian integral time scale T_L as $D_v = \sigma_v^2/T_L$, while we can take σ_x ≈ L, the Lagrangian length scale, and we obtain the simple expression
$$\frac{(\sigma_a)_{cl}^2}{\sigma_a^2} = 1 - \frac{L^2}{\sigma_a^2\,T_L^4}$$
This ratio (the l.h.s. of this relation) has been observed to be ≈2/3 by Voth et al. [46] (taking the DNS values for (σ_a)_cl and the experimental ones for σ_a). The experimental values of L, σ_a and T_L (fitted from the published data) for the same experiments [46] yield values of the r.h.s. that are also around 2/3, a very satisfactory agreement between the theoretical expectation and the experimental result.
For example, in one of the experiments, with R_λ = 690, Voth et al. have measured σ_a = 87 m/s² and L = 0.071 m [46], while the fitted Lagrangian time scale is found to be T_L = 39 ms, so that $L/(\sigma_a T_L^2) = 0.54$ and its square is ≈1/3. For the same experiment, (a₀)_DNS = 4.5 while (a₀)_exp = 6.2, so that $(1 - (a_0)_{DNS}/(a_0)_{exp})^{1/2} = 0.52$, very close to the theoretical expectation from the scale relativity correction (0.54).
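These numbers can be checked directly; the sketch below uses only the values quoted in the paragraph above.

```python
import math

# Values quoted above for the R_lambda = 690 von Karman experiment [46].
sigma_a = 87.0      # m/s^2
L = 0.071           # m (Lagrangian length scale)
T_L = 0.039         # s (fitted Lagrangian integral time scale)
a0_dns = 4.5
a0_exp = 6.2

lhs = L / (sigma_a * T_L**2)                  # ~0.54
print("L / (sigma_a * T_L^2)     =", round(lhs, 2))
print("its square                =", round(lhs**2, 2))   # ~1/3

rhs = math.sqrt(1 - a0_dns / a0_exp)          # ~0.52
print("sqrt(1 - a0_DNS / a0_exp) =", round(rhs, 2))
```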
Although this is not yet a definitive proof of a quantum-like regime in velocity space for developed turbulence (which we shall seek in a finer analysis of turbulence data), this agreement is nevertheless a very encouraging result in favor of de Montera's proposal [44].
Let us now give some explicit examples of applications of the scale relativity theory in the life sciences, with special emphasis on cases where the cell scale is directly or indirectly concerned.
Actually this theory, through its generalized scale laws and its motion laws that take the form of macroscopic quantum-type laws, allows one to obtain naturally, from purely theoretical arguments, some functions, characteristics and fundamental processes which are generally considered as specific to living systems. We shall briefly consider the following ones (in a non-exhaustive way): confinement, morphogenesis, spontaneous organization, link to the environment, "quantization", duplication, branching, (log-periodic) evolution, and multi-scale integration (see [1,2,8,14,15] for more details).
Quantization of Structures
Living systems are often characterized by properties of "quantization" and discretization at a very fundamental level. We mean here that they are organized in terms of specific structures having characteristic sizes that are defined in a limited range of scales. The example of cells, which can be considered as a kind of "biological quantum", is the clearest, but this is also true of the cell nucleus, of organs, and of organisms themselves for a given species.
This kind of property is naturally expected from the scale relativity approach. Indeed, the three conditions under which the fundamental equation of dynamics is transformed into a Schrödinger equation (an infinite or very large number of potential paths, fractality of these paths, and infinitesimal irreversibility) could reasonably be achieved, at least as approximations, in many biological systems.
Such a Schrödinger equation yields stationary and stable solutions only for some discretized values of the parameters (energy, momentum, angular momentum, etc.). This remains true of the macroscopic one obtained in scale relativity. These quantized solutions, yielding probability density functions, are solutions of the time-independent equation, written for these particular values of the parameters, in particular of the energy. Now these probability densities define characteristic structures, for example in terms of peaks of probability (see Figure 1). Therefore this property can be viewed as a natural tendency for such a system to structure itself, and this in a "quantized" way [10].
Example of quantized structures: solutions of a Schrödinger equation for an harmonic oscillator potential.
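A minimal numerical illustration of this quantization is the one-dimensional harmonic oscillator in dimensionless units, whose probability densities show a single peak at the fundamental level and n + 1 peaks at level n; numpy is assumed to be available.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def ho_density(n, x):
    """|psi_n(x)|^2 of the 1D harmonic oscillator, in dimensionless units."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    Hn = hermval(x, coeffs)                                   # physicists' Hermite H_n
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    psi = norm * Hn * np.exp(-x**2 / 2.0)
    return psi**2

x = np.linspace(-4.0, 4.0, 801)
for n in (0, 1, 2):
    p = ho_density(n, x)
    n_peaks = int(np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))
    print(f"level n = {n}: {n_peaks} probability peak(s)")
```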
These structures and their type of quantization appear in dependence on the various limit conditions (in time and space) and environmental conditions (presence of forces and fields). This is a very appealing result for biology, since it is clear that not all possible shapes are achieved in nature, but only those corresponding to definite organization bauplans, and that these bauplans appear in relation with the environmental conditions. This is manifest in particular in the punctuated selection and evolution of species. Some examples will be given in what follows.
Confinement and Cell Wall
As we have seen in the theoretical part of this paper (Section 2.1), one can obtain usual fractal or multifractal scale laws as solutions of first order differential equations acting in the scale space. But these laws can also be generalized to a “scale dynamics” involving second order differential equations. As we have already remarked, this is similar to the passage from inertial laws to Newton's laws of dynamics as concerns motion. Pushing further the analogy, the deviation from a constant fractal dimension (corresponding to scale invariance) can be attributed to the action of a “scale force”.
A particularly interesting application to biology is the case when this force is given by an harmonic oscillator. Indeed, harmonic oscillators appear in a very common way, since they describe the way a system evolves after having been removed from its equilibrium position. But here, the "position" is a scale, which means that, in the case of an attractive oscillator, the system will change its scale in a periodic way. This may yield a model of breath/lung dilation and contraction. An interesting feature of such models is that the scale variable is logarithmic, so that the dilation/contraction remains symmetrical only for small deviations from equilibrium, while it becomes dissymmetrical for larger ones, as observed in actual situations (see the sketch below).
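A minimal sketch of this effect, assuming a purely sinusoidal oscillation of the logarithmic scale variable with arbitrary amplitude and equilibrium scale, shows how the dilation/contraction asymmetry grows with the amplitude.

```python
import numpy as np

# Harmonic oscillation of the *logarithmic* scale variable (illustrative parameters).
# x = ln(lambda / lambda_eq) oscillates symmetrically, but the scale itself,
# lambda = lambda_eq * exp(x), dilates more than it contracts at large amplitude.
lambda_eq = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 1000)

for amplitude in (0.1, 1.0):
    x = amplitude * np.sin(t)
    lam = lambda_eq * np.exp(x)
    dilation = lam.max() - lambda_eq
    contraction = lambda_eq - lam.min()
    print(f"amplitude {amplitude}: dilation {dilation:.3f}, contraction {contraction:.3f}")
```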
In the case of a repulsive oscillator, one obtains a three-domain system, characterized by an inner and an outer fractal dimension which may be different, separated by a zone at intermediate scales where the fractal dimension diverges (see Figure 2). When one identifies the scale variable with a distance to a center, this describes a system in which a clear separation between an inner and an outer region has emerged, which is one of the properties of the first prokaryotic cell.
A model of cell wall. The figure gives the value of the fractal dimension which is solution of a second order scale differential equation involving a repulsive harmonic oscillator in scale space. One finds a constant fractal dimension in the inner region, a diverging dimension in an intermediate region which may represent a “wall”, then another constant dimension (possibly non-fractal) in the outer region.
Moreover, the zone where the fractal dimension rapidly increases (up to divergence in the mathematical model) corresponds to an increased ‘thickness’ of the material and it can therefore be interpreted as the description of a ‘membrane’. It is indeed the very nature of biological systems to have not only a well-defined size and a well-defined separation between interior and exterior, but also systematically an interface between them, such as membranes or walls. This is already true of the simplest prokaryote living cells. Therefore this result suggests that there could be a connection between the existence of a scale field (for example a pressure), the confinement of the cellular material and the appearance of a limiting membrane or wall [1,8].
This is reminiscent of eukaryotic cellular division which involves both a dissolution of the nucleus membrane and a deconfinement of the nucleus material, transforming, before the division, an eukaryote into a prokaryote-like cell. This could be a key toward a better understanding of the first major evolutionary leap after the appearance of cells, namely the emergence of eukaryotes.
The Schrödinger equation, which is the form taken by the equation of dynamics after integration in scale relativity, can be viewed as a fundamental equation of morphogenesis. It has not yet been considered as such, because its only domain of application was, up to now, the microscopic domain of molecules, atoms and elementary particles, in which the available information was mainly about energy and momentum.
However, scale relativity extends the potential domain of application of Schrödinger-like equations to every system in which the three conditions (1) an infinite or very large number of trajectories; (2) fractal dimension of the individual trajectories; (3) local irreversibility, are fulfilled. Macroscopic Schrödinger equations can be constructed, which are no longer based on Planck's constant ℏ, but on constants that are specific to each system (and may emerge from its self-organization). In addition, systems which can be described by hydrodynamics equations including a quantum-like potential also come under the generalized macroscopic Schrödinger approach.
The three above conditions seem to be particularly well adapted to the description of living systems. Let us give a simple example of such an application.
In living systems, morphologies are acquired through growth processes. One can attempt to describe such a growth in terms of an infinite family of virtual, fractal and locally irreversible, fluid-like trajectories. Their equation can therefore be written under the form of a fractal geodesic equation, then it can be integrated as a Schrödinger equation or, equivalently, in terms of hydrodynamics-type energy and continuity equations including a quantum-like potential. This last description therefore shares some common points with recent very encouraging works in embryogenesis which describe the embryo growth by visco-elastic fluid mechanics equations [56,57]. The addition of a quantum potential to these equations would give them a Schrödinger form, and therefore would allow the emergence of quantized solutions. This could be an interesting advantage for taking into account the organization of living systems in terms of well defined bauplans [58] and the punctuated evolution of species whose evolutive leaps go from one organization plan to another [59].
Let us take a more detailed example of morphogenesis. If one looks for solutions describing a growth from a center, one finds that this problem is formally identical to the problem of the formation of planetary nebulae, and, from the quantum point of view, to the problem of particle scattering, e.g., on an atom. The solutions correspond to the case of the outgoing spherical probability wave.
Depending on the potential, on the boundary conditions and on the symmetry conditions, a large family of solutions can be obtained. Considering here only the simplest ones, i.e., a central potential and spherical symmetry, the probability density distribution of the various possible values of the angles is given in this case by the spherical harmonics,
$$P(\theta, \varphi) = |Y_{lm}(\theta, \varphi)|^2$$
These functions show peaks of probability for some quantized angles, depending on the quantized values of the square of the angular momentum L² (measured by the quantum number l) and of its projection L_z on the z axis (measured by the quantum number m).
Finally the "most probable" morphology is obtained by "sending" matter along the angles of maximal probability. The biological constraints lead one to switch to cylindrical symmetry. This yields in the simplest case a periodic quantization of the angle θ (measured by an additional quantum number k), which gives rise to a separation of discretized "petals". Moreover there is a discrete symmetry breaking along the z axis linked to orientation (separation of "up" and "down" due to gravity, growth from a stem). The solutions obtained in this way show floral "tulip"-like shapes (see Figure 3 and Figure 4 and [2,15,24]).
Morphogenesis of a ‘flower’-like structure, solution of a Schrödinger equation that describes a growth process from a center (l = 5, m = 0). The ‘petals’, ‘sepals’ and ‘stamen’ are traced along angles of maximal probability density. A constant force of ‘tension’ has been added, involving an additional curvature of ‘petals’, and a quantization of the angle θ that gives an integer number of ‘petals’ (here, k = 5).
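The angles of maximal probability used to trace such a figure can be located numerically; the sketch below does so for the quantum numbers quoted in the caption above (l = 5, m = 0), assuming scipy is available for the spherical harmonics.

```python
import numpy as np
from scipy.special import sph_harm

# Probability density P(theta) = |Y_lm(theta, phi)|^2 for l = 5, m = 0
# (the quantum numbers used for the flower-like morphology of Figure 3).
l, m = 5, 0
theta = np.linspace(0.0, np.pi, 2001)              # polar angle
# scipy's convention: sph_harm(m, l, azimuthal_angle, polar_angle)
P = np.abs(sph_harm(m, l, 0.0, theta))**2

# Interior angles of maximal probability: matter is preferentially 'sent' along
# these directions in the growth model described above. (The poles theta = 0 and
# pi also carry maxima, but sit on the boundary of the interval and are not listed.)
is_max = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
peaks_deg = np.degrees(theta[1:-1][is_max])
print("interior probability peaks (degrees):", np.round(peaks_deg, 1))
```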
Another very interesting feature of quantum-type systems (in the present context of their possible application to biology) is their behavior under a change of energy. Indeed, while the fundamental level solution of a stationary Schrödinger equation describes a single structure, the first excited solution is usually double.
Therefore, the passage from the fundamental ("vacuum") level to the first excited level provides us with a (rough) model of duplication/cellular division (see Figure 5, Figure 6 and Figure 7). The quantization of the solutions implies that, in case of an energy increase, the system will not grow in size, but will instead be led to jump from a single structure to a binary structure, with no stable intermediate step between the two stationary solutions n = 0 and n = 1, since the energy of the stationary solutions is itself quantized. Moreover, if one comes back to the scale relativity level of description in terms of individual paths (whose velocity field constitutes the wave function, while the equation of dynamics becomes a Schrödinger equation), one finds that from each point of the initial one-body structure there exist trajectories that go to the two final structures. In this framework, duplication is expected to be linked to a discretized and precisely fixed jump in energy.
Steps in the “opening” of the flower-like structure of Figure 3. The various shapes are all solutions of a Schrödinger equation derived from the scale relativity equation of dynamics written for a growth process coming from a center. The opening here is just a result of the balance between the action of gravity and the inner force of tension.
Model of duplication. The stationary solutions of the Schrödinger equation in a 3D box can take only discretized morphologies in correspondence with quantized values of the energy. An increase of energy results in a jump from a single structure to a binary structure. No stable solution can exist between the two structures.
Steps of duplication. Stationary solutions of the Schrödinger equation can take only discretized morphologies in correspondence with quantized values of the energy. The successive figures (from top left to bottom right) give different steps of the division process, obtained as solutions of the time-dependent Schrödinger equation in an harmonic oscillator potential, which jump from the fundamental level (top left) to the first excited level (bottom right). These extreme solutions are stable (stationary solutions of the time-independent Schrödinger equation), while the intermediate solutions are transitory. Therefore it is seen that the system spontaneously jumps from the one-structure to the two-structure morphology.
Model of branching and bifurcation. Successive solutions of the time-dependent 2D Schrödinger equation in an harmonic oscillator potential are plotted as isodensities. The energy varies from the fundamental level (n = 0) to the first excited level (n = 1), and, as a consequence, the system jumps from a one-structure to a two-structure morphology.
It is clear that, at this stage, such a model is extremely far from describing the complexity of a true cellular division, which it does not intend to do. Its interest is to be a generic and general model for a spontaneous duplication process of quantized structures, linked to energy jumps. Indeed, the jump from one to two probability peaks when going from the fundamental level to the first excited level is found in many different situations, of which the harmonic oscillator and the 3D box cases are only examples. Moreover, this property of spontaneous duplication is expected to be conserved under more elaborate versions of the description, provided the asymptotic small-scale behavior remains of constant fractal dimension DF ≈ 2, such as, e.g., in cell wall-like models based on a locally increasing effective fractal dimension.
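A crude numerical illustration of this one-to-two transition can be obtained by mixing the n = 0 and n = 1 harmonic oscillator states with a mixing angle that merely stands in for the time evolution; it is only a sketch of the peak structure, not the full time-dependent solution shown in Figure 6.

```python
import math
import numpy as np

# Densities interpolating between the fundamental (one peak) and first excited
# (two peaks) harmonic oscillator states; alpha is an illustrative mixing angle.
x = np.linspace(-4.0, 4.0, 801)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                         # ground state
psi1 = np.pi**-0.25 * math.sqrt(2.0) * x * np.exp(-x**2 / 2)    # first excited state

for alpha in (0.0, 0.25 * np.pi, 0.5 * np.pi):
    psi = np.cos(alpha) * psi0 + np.sin(alpha) * psi1
    p = psi**2
    idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
    print(f"alpha = {alpha:.2f} rad: peaks at x =", np.round(x[idx], 2),
          "with heights", np.round(p[idx], 3))
```

As the mixing angle grows, a second peak appears and strengthens until, for the pure first excited state, the two peaks are of equal height.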
Bifurcation, Branching Process
Such a model can also be applied to a first rough description of a branching process (Figure 7), e.g., in the case of a tree growth when the previous structure remains instead of disappearing as in cell duplication.
Such a model is still clearly too rough to claim that it truly describes biological systems. It is just intended to describe a general, spontaneous functionality. But note that it may be improved and complexified by combining with it and integrating various other functions and processes generated by the scale relativity approach. For example, one may apply the duplication or branching process to a system whose underlying scale laws include (i) a model of membrane—or cell wall—through a fractal dimension that becomes variable with the distance to a center; (ii) a model of multiple hierarchical levels of organization depending on ‘complexergy’ (see below).
Origin of Life: A New Approach
A fundamentally new feature of the scale relativity approach, as concerns the question of the origin of life, is that the Schrödinger form taken by the geodesic equation can be interpreted as a general tendency for the systems to which it applies to make structures, i.e., to naturally lead to self-organization and negentropy. In the framework of a classical deterministic approach, the question of the formation of a system is always posed in terms of initial conditions. In the new framework, the general existence of stationary solutions allows structures to be formed whatever the initial conditions, in correspondence with the field, the symmetries and the boundary conditions (which become the environmental conditions in biology), and as a function of the values of the various conservative quantities that characterize the system.
Such an approach could allow one to ask the question of the origin of life in a renewed way. The emergence of life may be seen as an analog of the "vacuum" (lowest energy) solutions in a quantum-type description, i.e., of the passage from a non-structured medium to the simplest, fundamental-level structures. In astrophysics and cosmology, the problem amounts to understanding the appearance, from the action of gravitation alone, of structures out of a highly homogeneous and non-structured medium. In the standard approach to this problem a large quantity of postulated and unobserved dark matter is needed to form structures, and even with this help the result is unsatisfactory. In the scale relativity framework, we have suggested that an underlying fractal geometry of space involves a Schrödinger form for the equation of motion, leading both to a natural tendency to form structures and to the emergence of an additional potential energy which may explain the effects usually attributed to a missing mass [8,10].
The problem of the origin of life, although clearly far more difficult and complex, shows common features with this question of structure formation in cosmology. In both cases one needs to understand the appearance of new structures, functions, properties, etc., from a medium which does not yet show such structures and functions. In other words, one needs a theory of emergence. We hope that scale relativity is a good candidate for such a theory, since it possesses the two required properties: (i) for problems of origin, it gives the conditions under which a weakly structuring or destructuring (e.g., diffusive) classical system may become quantum-like and therefore structuring; (ii) for problems of evolution, it makes use of the spontaneous self-organizing property of the quantum-like theory.
We have therefore tentatively suggested a new way to tackle the question of the origin of life (and, in parallel, of the present functioning of the intracellular medium) [8,60]. The prebiotic medium on the primordial Earth is expected to have become chaotic. As a consequence, on time scales long with respect to the chaos time (the horizon of predictability), the conditions which underlie the transformation of the motion equation into a Schrödinger-type equation become fulfilled (complete information loss on angles, position and time, leading to a fractal dimension 2 behavior on a range of scales reaching a ratio of at least 10⁴ to 10⁵, see [8], Chapter 10). Since the chemical structures of the prebiotic medium have their lowest scales at the atomic size, this means that, under such a scenario, one expects the first organized units to have appeared at a scale of about 10 μm, which is indeed a typical scale for the first observed prokaryotic cells (see Figure 8).
Schematic illustration of a model of hierarchical organization based on a Schrödinger equation acting in scale space. The fundamental mode corresponds to only one level of hierarchy, while the first and second excited modes describe respectively two, then three embedded hierarchical structures.
The spontaneous transformation of a classical, possibly diffusive mechanics into a quantum-like mechanics, with the diffusion coefficient becoming the quantum self-organization parameter, would have immediate dramatic consequences: quantization of energy and of energy exchanges, and therefore of information; appearance of shapes and quantization of these shapes (the cells can be considered as the "quanta" of life); spontaneous duplication and branching properties (see the following sections); etc. Moreover, due to the existence of a vacuum energy in a quantum-type mechanics (i.e., of a non-vanishing minimal energy for a given system), we expect the primordial structures to appear at a given non-zero energy, without any intermediate step.
In such a framework, the fundamental equation would be the equation of molecular fractal geodesics, which could be transformed into a Schrödinger equation for wave functions ψ. This equation describes a universal tendency to make structures in terms of a probability density P for the chemical products (constructed from the distribution of geodesics), given by the squared modulus of the wave function $\psi = \sqrt{P} \times e^{i\theta}$. Each molecule being subjected to this probability (which therefore plays the role of a potentiality), it is proportional to the concentration c for a large number of molecules, P ∝ c.
Finally, the Schrödinger equation may in its turn be transformed into a continuity and Euler hydrodynamic-like system (for the classical velocity V and the probability P) with a macro-quantum potential depending on the concentration when P ∝ c,
$$Q = -2 D^2\, \frac{\Delta\sqrt{c}}{\sqrt{c}}$$
This hydrodynamics-like system also implicitly contains as a sub-part a standard diffusion Fokker-Planck equation with diffusion coefficient D for the velocity v₊ (see Section 3). It is therefore possible to generalize the standard classical approach of biochemistry, which often makes use of fluid equations, with or without diffusion terms (see, e.g., [61,62]).
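A minimal sketch of how this macro-quantum potential can be evaluated from a concentration field is given below, for an illustrative one-dimensional Gaussian profile with arbitrary D and σ; the analytic expression for this particular profile is used as a cross-check.

```python
import numpy as np

# Macro-quantum potential Q = -2 D^2 (Laplacian of sqrt(c)) / sqrt(c), computed here
# for an illustrative 1D Gaussian concentration profile; D and sigma are arbitrary.
D = 1.0
sigma = 1.0
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
c = np.exp(-x**2 / (2 * sigma**2))          # concentration profile (arbitrary units)

sqrt_c = np.sqrt(c)
lap_sqrt_c = np.gradient(np.gradient(sqrt_c, dx), dx)   # 1D Laplacian by finite differences
Q = -2 * D**2 * lap_sqrt_c / sqrt_c

# For a Gaussian profile, the analytic result is Q = -2 D^2 (x^2/(4 sigma^4) - 1/(2 sigma^2)),
# i.e., an inverted harmonic potential plus a constant.
Q_exact = -2 * D**2 * (x**2 / (4 * sigma**4) - 1 / (2 * sigma**2))
print("max |numerical - analytic| (away from the edges):",
      float(np.max(np.abs(Q - Q_exact)[50:-50])))
```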
From the point of view of this third representation, the spontaneous transformation of a classical system into a quantum-like system, through the action of fractality and irreversibility on small time scales, manifests itself by the appearance of a quantum-type potential energy in addition to the standard classical energy balance. We have therefore suggested searching for whether biological systems are characterized by such an additional potential energy [2]. This missing energy would be given by the above relation (Equation (79)) in terms of concentrations, and could be identified by performing a complete energy balance of biological systems and then comparing it with the classically expected one.
However, we have also shown that the opposite of a quantum potential is a diffusion potential (Section 3.7). Therefore, in case of a simple reversal of the sign of this potential energy, the self-organization properties of this quantum-like behavior would be immediately turned, not only into a weakly organized classical system, but even into an increasing-entropy, diffusing and disorganized system. We have tentatively suggested [2,8] that such a view may provide a renewed approach to the understanding of tumors, which are characterized, among many other features, by both energy affinity and morphological disorganization [63,64].
Nature of First Evolutionary Leaps
Another application of the scale relativity theory consists of applying it in the scale space itself. In this case, one obtains a Schrödinger equation acting in this space, and thus yielding peaks of probability for the scale values themselves. This yields a rough but already predictive model of the emergence of the cell structure and of the value of its typical scales.
Indeed, the three first events of species evolution are the appearance of prokaryotic cells (about 3.5 Gyr in the past), then of eukaryotic cells (about 1.7 Gyr), then of the first multicellular organisms (about 1 Gyr). These three events correspond to three successive steps of organizational hierarchy.
Indeed, at the fundamental ("vacuum") level, one can expect the formation of a structure characterized by one length-scale (Figure 8). This particular scale is given by the peak of the probability density. This problem is similar to that of a quantum particle in a box (but now it is a "box" in the space of scales), with the logarithms of the minimum scale λ_m and maximum scale λ_M playing the roles of the walls of the box. The fundamental level solution is well known: it is a sine curve whose peak of probability lies in the middle of the box and which vanishes on its walls. Since the "position" variable is here the logarithm of scales, this means that the fundamental level solution has a peak at the scale $\sqrt{\lambda_m \times \lambda_M}$.
What are the minimal and maximal possible scales? From a universal viewpoint, the extremal scales in nature are the Planck length l_P in the microscopic domain and the cosmic scale $L = \Lambda^{-1/2}$ given by the cosmological constant Λ in the macroscopic domain [8]. From the predicted and now observed value of the cosmological constant, one finds $L/l_P = 5.3 \times 10^{60}$, so that the mid-scale of the universe is at $2.3 \times 10^{30}\, l_P \approx 40\ \mu\mathrm{m}$.
Now, in a purely biological context, one would rather choose the minimal and maximal scales characterizing living systems. These are the atomic scale toward the small scales (0.5 Angstroms) and the scale of the largest animals, such as whales (about 10–30 m), toward the large scales. It is remarkable that these values yield the same result for the peak of probability of the first structures of life, λ ≈ 40 μm. This value is indeed a typical scale of living cells, in particular of the first prokaryotic cells which appeared more than three Gyr ago on Earth. Moreover, these first prokaryotic cells are, as described in this simple model, characterized by having only one hierarchical level of organization (they are monocellular and have no nucleus, see Figure 8).
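The geometric-mean estimates quoted above are easily reproduced; the sketch below uses the ratio L/l_P = 5.3 × 10⁶⁰ given in the text, the standard value of the Planck length (an input assumed here, not stated in the text), and the biological bounds of the present paragraph.

```python
import math

# Geometric-mean ("middle of the box in log-scale") estimates quoted above.
l_P = 1.6e-35                      # m, standard Planck length (assumed value)
L = 5.3e60 * l_P                   # cosmological-constant scale, using the quoted ratio
print("universal mid-scale:", f"{math.sqrt(l_P * L):.2g}", "m")   # ~4e-5 m, i.e., ~40 micrometers

# Biological bounds: atomic scale (0.5 Angstrom) to the largest animals (10-30 m).
atomic = 0.5e-10                   # m
for organism in (10.0, 30.0):      # m
    print("biological mid-scale:", f"{math.sqrt(atomic * organism):.2g}", "m")
```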
The second level describes a system with two levels of organization, in agreement with the second step of evolution leading to eukaryotes about 1.7 Gyr ago (second event in Figure 8). One expects (in this very simplified model) that the scale of nuclei be smaller than the scale of prokaryotes, itself smaller than the scale of eukaryotes: this is indeed what is observed.
The following expected major evolutionary leap is a three-organization-level system, in agreement with the appearance of multicellular forms (animals, plants and fungi) about 1 Gyr ago (third event in Figure 8). It is also expected that the multicellular stage can be built only from eukaryotes, in agreement with the fact that the cells of multicellular organisms do have nuclei. More generally, it is noticeable that evolved organisms keep, inside their internal structure, the organization levels of the preceding stages.
The following major leaps correspond to more complicated structures, then possibly to more complex functions (supporting structures such as exoskeletons, tetrapody, homeothermy, viviparity), but they are still characterized by fundamental changes in the number of organization levels. We also recall that a log-periodic acceleration has been found for the dates of these events [8,14,15,65,66], in agreement with the solutions of a “scale wave equation” (Equation (3)).
The first steps in the above model are based on spherical symmetry, but this symmetry is naturally broken at scales larger than 40 μm, since this is also the scale beyond which the gravitational force becomes larger than the van der Waals force. One therefore expects the evolutionary leaps that follow the appearance of multicellular systems to lead to more complicated structures (such as those of the Precambrian-Cambrian radiation), which can no longer be described by a single scale variable. This increase of complexity can be dealt with by extending the model to more general symmetries, boundary conditions and constraints.
Systems Biology and Multiscale Integration
We hope that the scale relativity tools and methods will also be useful in the development of a systems biology framework [1,2,15]. In particular, such an approach would be in agreement with Noble's "biological relativity" [67], according to which there is no privileged scale in living systems [68].
Now, one of the challenges of systems biology is the problem of multiscale integration [69]. The scale relativity theory allows one to make new proposals for solving this problem. Indeed, its equations naturally yield solutions that describe multiscale structures, which are therefore spontaneously integrated. Let us illustrate this ability of scale relativity by giving a simple general example of the way the theory can be used to describe multiscale structuring.
The first step consists in defining the elements of the description, which represent the smallest scale considered at the studied level. For example, at the cell level, these elementary structures could be intracellular "organelles".
The second step amounts to writing, for these elementary "objects", an equation of dynamics which accounts for the fractality and irreversibility of their motion. As we have seen, such a motion equation written in a fractal space can be integrated under the form of a macroscopic Schrödinger-type equation. This equation would no longer be based on the microscopic Planck constant, but on a macroscopic constant specific to the system under consideration (this constant can be related, e.g., to a diffusion coefficient). Its solutions are wave functions whose squared modulus gives the probability density of the distribution of the initial "points" or elements.
Actually, the solutions of such a Schrödinger-like equation are naturally multiscaled. They describe, in terms of peaks of probability density, a structuring of the "elementary" objects from which we started (e.g., organelle-like objects structuring at the larger-scale, cell-like level). As we have previously seen, while the vacuum state (lowest energy) usually describes one object (a single "cell"), excited states describe multiple objects (a "tissue-like" level), each often separated by zones of null density, therefore corresponding to infinite quantum potentials, which may represent "walls" (Figure 9). An increase of energy spontaneously generates a division of the single structure into a binary one, allowing one to obtain models of the growth of a "tissue" from a single "cell".
A simple two-complementary-fluid model (describing, e.g., hydrophilic/hydrophobic behavior) can easily be obtained, one of the fluids showing probability peaks in the "cells" (and zero probability in the "walls") while the other peaks in the "walls" and has vanishing probability in the "cells". This three-level multi-scale structure results from a general theorem of quantum mechanics (which remains true for the macroscopic Schrödinger regime considered here), according to which, for a one-dimensional discrete spectrum, the wave function corresponding to the (n + 1)th eigenvalue vanishes n times [70]. A relevant feature for biological applications is also that these multi-scale structures, described in terms of stationary solutions of a Schrödinger-like equation, depend not on initial conditions, as in a classical deterministic approach, but on the environmental conditions (potential and boundary conditions).
Multiscale integration in scale relativity. Elementary objects—at a given level of description (left figure)—are organized in terms of a finite structure described by a probability density distribution (second figure from the left). By increasing the energy, this structure spontaneously duplicates (third figure). New increases of energy lead to new duplications (fourth figure), then to a “tissue”-like organization (fifth figure—the scale of the figures is not conserved).
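The node theorem invoked above can be illustrated directly with the one-dimensional box eigenfunctions; this is a minimal sketch, not tied to any specific biological system.

```python
import numpy as np

# Node-counting check for the one-dimensional box: the (n+1)-th eigenfunction
# psi_n(x) = sqrt(2) sin((n+1) pi x) on [0, 1] vanishes exactly n times inside the box.
x = np.linspace(0.0, 1.0, 10001)[1:-1]     # open interval: exclude the walls

for n in range(4):
    psi = np.sqrt(2.0) * np.sin((n + 1) * np.pi * x)
    interior_nodes = int(np.sum(psi[:-1] * psi[1:] < 0))   # sign changes = zeros
    print(f"level n = {n}: {interior_nodes} interior node(s),"
          f" {interior_nodes + 1} probability peak(s)")
```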
Moreover, this scale relativity model involves not only the resulting structures themselves but also the way the system may jump from a two-level to a three-level hierarchical organization. Indeed, the solution of the time-dependent Schrödinger equation describes a spontaneous duplication when the energy of the system jumps from its fundamental state to the first excited state (see Section 6.4 and Figure 5 and Figure 9).
One may even obtain solutions of the same equation organized on more than three levels, since it is known that fractal solutions of the Schrödinger equation do exist [8,21,22]. An example of such a fractal solution for the Schrödinger equation in a two-dimensional box is given in Figure 10.
Fractal multiscale solutions of the Schrödinger equation. Left figure: one-dimensional solution in a box, in terms of the position x for a given value of the time t. This solution reads $\psi(x,t) = \frac{1}{\pi} \sum_{n=-N}^{N} (-1)^n (n + 1/2)^{-1} \exp\{i\pi[2x(n + 1/2) - t(n + 1/2)^2]\}$, with N → ∞ [21]. Finite resolution approximations of this solution can be constructed by taking finite values of N. Here the probability density |ψ|² is drawn for N = 100 and t = 0.551. Right figure: fractal multiscale solution in a two-dimensional box. It is constructed as the product ψ(x)ψ(y) of the one-dimensional solution given in the left figure.
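The series quoted in this caption, as reconstructed here, can be evaluated directly for finite N; the sketch below computes the probability density for the case drawn in the left panel (N = 100, t = 0.551).

```python
import numpy as np

def psi_box_fractal(x, t, N=100):
    """Finite-N approximation of the fractal box solution quoted in the caption above:
    psi(x, t) = (1/pi) * sum_{n=-N}^{N} (-1)^n (n + 1/2)^(-1)
                * exp{ i pi [ 2 x (n + 1/2) - t (n + 1/2)^2 ] }."""
    n = np.arange(-N, N + 1)
    half = n + 0.5
    coeff = (-1.0) ** n / half
    phase = np.exp(1j * np.pi * (2.0 * np.outer(x, half) - t * half**2))
    return (phase @ coeff) / np.pi

x = np.linspace(0.0, 1.0, 2001)
density = np.abs(psi_box_fractal(x, t=0.551, N=100))**2   # the case of Figure 10 (left)
print("grid points:", x.size, " max |psi|^2 ~", round(float(density.max()), 3))
```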
Note that the resulting structures are not only qualitative, but also quantitative, since the relative sizes of the various embedded levels can be derived from the theoretical description. Finally, such a "tissue" of individual "cells" can be inserted in a growth equation which will itself take a Schrödinger form. Its solutions yield a new, larger level of organization, such as the flower-like structure of Figure 3. Then the matching conditions between the small-scale and large-scale solutions (wave functions) allow one to connect the constants of these two equations, and therefore the quantitative scales of their solutions.
The theory of scale relativity, thanks to its accounting for the fractal geometry of a system at a profound level, is particularly adapted to the construction and development of a theoretical biology. In its framework, the description of living systems is no longer strictly deterministic. It supports the use of statistical and probabilistic tools in biology, for example as concerns the expression of genes [71,72].
However, it also suggests to go beyond ordinary probabilities, since the description tool becomes a quantum-like (macroscopic) wave function, which is the solution of a generalized Schrödinger equation. This involves a probability density such that P = |ψ|2, but also phases which are built from the velocity field of potential trajectories and yield possible interferences.
Such a Schrödinger (or non-linear Schrödinger) form of motion equations can be obtained in at least two ways. One way is through the fractality of the biological medium, which is now validated at several scales of living systems, for example in cell walls [73]. Another way is through the emergence of macroscopic quantum-type potentials, which could be an advantageous character acquired from evolution and selection.
In this framework, one therefore expects a fundamentally wave-like, and often quantized, character of numerous processes implemented in living systems. In the present contribution, we have concentrated on the theoretical aspects of the scale relativity approach, and we have then given some examples of applications to biological processes and functions.
Several properties that are considered to be specific to biological systems, such as self-organization, morphogenesis, ability of duplicating, reproducing and branching, confinement, multi-scale structuration and integration are naturally obtained in such an approach [1,2]. The implementation of this type of process in new technological devices involving intelligent feedback loops and quantum-type potentials could also lead to the emergence of a new form of ‘artificial life’.
The author gratefully thanks Philip Turner for his careful reading of the manuscript and for his useful comments.
Conflicts of Interest
The author declares no conflict of interest.
References

1. Auffray, Ch.; Nottale, L. Scale relativity theory and integrative systems biology: 1. Founding principles and scale laws. Prog. Biophys. Mol. Biol. 2008, 97, 79–114.
2. Nottale, L.; Auffray, C. Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics. Prog. Biophys. Mol. Biol. 2008, 97, 115–157.
3. Noble, D. Claude Bernard, the first systems biologist, and the future of physiology. Exp. Physiol. 2008, 93, 16–26.
4. Nottale, L.; Schneider, J. Fractals and non-standard analysis. J. Math. Phys. 1984, 25, 1296–1300.
5. Nottale, L. Fractals and the quantum theory of space-time. Int. J. Mod. Phys. A 1989, 4, 5047–5117.
6. Nottale, L. Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity; World Scientific: Singapore, 1993.
7. Nottale, L.; Célérier, M.N. Derivation of the postulates of quantum mechanics from the first principles of scale relativity. J. Phys. A Math. Theor. 2007, 40, 14471–14498.
8. Nottale, L. Scale Relativity and Fractal Space-Time: A New Approach to Unifying Relativity and Quantum Mechanics; Imperial College Press: London, UK, 2011.
9. Nottale, L. The theory of scale relativity. Int. J. Mod. Phys. A 1992, 7, 4899–4936.
10. Nottale, L. Scale relativity and quantization of the Universe. I. Theoretical framework. Astron. Astrophys. 1997, 327, 867–889.
11. Nottale, L. Scale-relativity and quantization of extrasolar planetary systems. Astron. Astrophys. Lett. 1996, 315, L9–L12.
12. Nottale, L.; Schumacher, G.; Gay, J. Scale relativity and quantization of the solar system. Astron. Astrophys. 1997, 322, 1018–1025.
13. Nottale, L.; Schumacher, G.; Lefèvre, E.T. Scale relativity and quantization of exoplanet orbital semi-major axes. Astron. Astrophys. 2000, 361, 379–387.
14. Nottale, L.; Chaline, J.; Grou, P. Les arbres de l'évolution: Univers, Vie, Sociétés; Hachette: Paris, France, 2000; p. 379.
15. Nottale, L.; Chaline, J.; Grou, P. Des Fleurs Pour Schrödinger: La Relativité d'échelle et Ses Applications; Ellipses: Paris, France, 2009; p. 421.
16. Mandelbrot, B. Les Objets Fractals; Flammarion: Paris, France, 1975.
17. Mandelbrot, B. The Fractal Geometry of Nature; Freeman: San Francisco, CA, USA, 1982.
18. Barnsley, M. Fractals Everywhere; Academic Press Inc.: San Diego, CA, USA, 1988.
19. Forriez, M.; Martin, P.; Nottale, L. Lois d'échelle et transitions fractal-non fractal en géographie. L'Espace Géographique 2010, 2, 97.
20. Nottale, L.; Martin, P.; Forriez, M. Analyse en relativité d'échelle du bassin versant du Gardon (Gard, France). Rev. Int. Geomat. 2012, 22, 103–134.
21. Berry, M.V. Quantum fractals in boxes. J. Phys. A Math. Gen. 1996, 29, 6617–6629.
22. Hall, M.J.W. Incompleteness of trajectory-based interpretations of quantum mechanics. J. Phys. A Math. Gen. 2004, 37, 9549.
23. Nottale, L. Generalized quantum potentials. J. Phys. A Math. Theor. 2009, 42, 275306.
24. Nottale, L. Relativité d'échelle et morphogenèse. Revue de Synthèse 2001, 122, 93–116.
25. Hermann, R. Numerical simulation of a quantum particle in a box. J. Phys. A Math. Gen. 1997, 30, 3967.
26. Nelson, E. Derivation of the Schrödinger equation from Newtonian mechanics. Phys. Rev. 1966, 150, 1079–1085.
27. Grabert, H.; Hänggi, P.; Talkner, P. Is quantum mechanics equivalent to a classical stochastic process? Phys. Rev. A 1979, 19, 2440–2445.
28. Wang, M.S.; Liang, W.K. Comment on "Repeated measurements in stochastic mechanics". Phys. Rev. D 1993, 48, 1875–1877.
29. Weisskopf, V. La révolution des quanta; Hachette: Paris, France, 1989.
30. Nottale, L.; Lehner, Th. Numerical simulation of a macro-quantum experiment: oscillating wave packet. Int. J. Mod. Phys. C 2012, 23, 1250035.
31. Waliszewski, P.; Molski, M.; Konarski, J. On the relationship between fractal geometry of space and time in which a system of interacting cells exists and dynamics of gene expression. Acta Biochim. Pol. 2001, 48, 209–220.
32. Lifchitz, E.; Pitayevski, L. Statistical Physics Part 2; Pergamon Press: Oxford, UK, 1980.
33. Nore, C.; Brachet, M.E.; Cerda, E.; Tirapegui, E. Scattering of first sound by superfluid vortices. Phys. Rev. Lett. 1994, 72, 2593–2595.
34. Nottale, L. Quantum-like gravity waves and vortices in a classical fluid. 2009, arXiv:0901.1270. Available online: http://arxiv.org/pdf/0901.1270v1.pdf (accessed on 1 January 2013).
35. De Gennes, G. Superconductivity of Metals and Alloys; Addison-Wesley: New York, NY, USA, 1989.
36. Landau, L.; Lifchitz, E. Statistical Physics Part 1; Pergamon Press: Oxford, UK, 1980.
37. Pan, S.H.; O'Neal, J.P.; Badzey, R.L.; Chamon, C.; Ding, H.; Engelbrecht, J.R.; Wang, Z.; Eisaki, H.; Uchida, S.; Gupta, A.K. Microscopic electronic inhomogeneity in the high-Tc superconductor Bi2Sr2CaCu2O8+x. Nature 2001, 413, 282–285.
38. McElroy, K.; Lee, J.; Slezak, J.A.; Lee, D.-H.; Eisaki, H.; Uchida, S.; Davis, J.C. Atomic-scale sources and mechanism of nanoscale electronic disorder in Bi2Sr2CaCu2O8+x. Science 2005, 309, 1048–1052.
39. Tanner, D.B.; Liu, H.L.; Quijada, M.A.; Zibold, A.M.; Berger, H.; Kelley, R.J.; Onellion, M.; Chou, F.C.; Johnston, D.C.; Rice, J.P. Superfluid and normal fluid density in high-Tc superconductors. Physica B 1998, 244, 1–8.
40. Fratini, M.; Poccia, N.; Ricci, A.; Campi, G.; Burghammer, M.; Aeppli, G.; Bianconi, A. Scale-free structural organization of oxygen interstitials in La2CuO4+y. Nature 2010, 466, 841–844.
41. Bohm, D. Quantum Theory; Constable and Company Ltd.: London, UK, 1954.
42. Frisch, U. Turbulence: The Legacy of A.N. Kolmogorov; Cambridge University Press: Cambridge, UK, 1995.
43. Kolmogorov, A.N. Structure of turbulence in an incompressible liquid for very large Reynolds numbers. Proc. Acad. Sci. URSS, Geochem. Sect. 1941, 30, 299–303.
44. De Montera, L. A theory of turbulence based on scale relativity. 2013, arXiv:1303.3266. Available online: http://arxiv.org/pdf/1303.3266v1.pdf (accessed on 1 April 2013).
45. La Porta, A.; Voth, G.A.; Crawford, A.M.; Alexander, J.; Bodenschatz, E. Fluid particle accelerations in fully developed turbulence. Nature 2001, 409, 1017–1019.
46. Voth, G.A.; La Porta, A.; Crawford, A.M.; Alexander, J.; Bodenschatz, E. Measurement of particle accelerations in fully developed turbulence. J. Fluid Mech. 2002, 469, 121–160.
47. Beck, C. Superstatistical turbulence models. 2005, arXiv:physics/0506123. Available online: http://arxiv.org/pdf/physics/0506123.pdf (accessed on 1 January 2006).
48. Sawford, B.L. Reynolds number effects in Lagrangian stochastic models of dispersion. Phys. Fluids A 1991, 3, 1577–1586.
49. Falkovich, G.; Xu, H.; Pumir, A.; Bodenschatz, E.; Biferale, L.; Boffetta, G.; Lanotte, A.S.; Toschi, F. (International Collaboration for Turbulence Research). On Lagrangian single-particle statistics. Phys. Fluids 2012, 24, 055102.
50. Nottale, L.; Célérier, M.N. Emergence of complex and spinor wave functions in scale relativity. I. Nature of scale variables. J. Math. Phys. 2013, 54, 112102. arXiv:1211.0490. Available online: http://arxiv.org/pdf/1211.0490.pdf (accessed on 1 January 2013).
51. Mordant, N.; Metz, P.; Michel, O.; Pinton, J.F. Measurement of Lagrangian velocity in fully developed turbulence. Phys. Rev. Lett. 2001, 87, 214501.
52. Mordant, N. Mesure lagrangienne en turbulence: mise en œuvre et analyse. Ph.D. Thesis, Ecole Normale Supérieure de Lyon, Lyon, France, 2001.
53. Gotoh, T.; Fukayama, D. Pressure spectrum in homogeneous turbulence. Phys. Rev. Lett. 2001, 86, 3775–3778.
54. Ishihara, T.; Kaneda, Y.; Yokokawa, M.; Itakura, K.; Uno, A. Small-scale statistics in high-resolution direct numerical simulation of turbulence: Reynolds number dependence of one-point velocity gradient statistics. J. Fluid Mech. 2007, 592, 335–366.
55. Vedula, P.; Yeung, P.K. Similarity scaling of acceleration and pressure statistics in numerical simulations of turbulence. Phys. Fluids 1999, 11, 1208–1220.
56. Fleury, V. Clarifying tetrapod embryogenesis, a physicist's point of view. Eur. Phys. J. Appl. Phys. 2009, 45, 30101.
57. Le Noble, F.; Fleury, V.; Pries, A.; Corvol, P.; Eichmann, A.; Reneman, R.S. Control of arterial branching morphogenesis in embryogenesis: go with the flow. Cardiovasc. Res. 2005, 65, 619–628.
58. Devillers, C.; Chaline, J. Evolution: An Evolving Theory; Springer Verlag: New York, NY, USA, 1993; p. 251.
59. Gould, S.J.; Eldredge, N. Punctuated equilibria: the tempo and mode of evolution reconsidered. Paleobiology 1977, 3, 115–151.
60. Nottale, L. The theory of scale relativity: non-differentiable geometry and fractal space-time. Am. Inst. Phys. Conf. Proc. 2004, 718, 68.
61. Noble, D. Modeling the heart: from genes to cells to the whole organ. Science 2002, 295, 1678–1682.
62. Smith, N.P.; Hunter, P.J.; Paterson, D.J. The Cardiac Physiome: at the heart of coupling models to measurement. Exp. Physiol. 2009, 94, 469–471.
63. Hanahan, D.; Weinberg, R.A. The hallmarks of cancer. Cell 2000, 100, 57–70.
64. Kroemer, G.; Pouyssegur, J. Tumor cell metabolism: Cancer's Achilles' heel. Cancer Cell 2008, 13, 472–482.
65. Chaline, J.; Nottale, L.; Grou, P. Is the evolutionary tree a fractal structure? C. R. Acad. Sci. Paris 1999, 328, 717–726.
66. Cash, R.; Chaline, J.; Nottale, L.; Grou, P. Human development and log-periodic laws. C. R. Biol. 2002, 325, 585–590.
67. Noble, D. The Music of Life: Biology Beyond the Genome; Oxford University Press: Oxford, UK, 2006.
68. Kohl, P.; Noble, D. Systems biology and the virtual physiological human. Mol. Syst. Biol. 2009, 5, 292.
69. Gavaghan, D.; Garny, A.; Maini, P.K.; Kohl, P. Mathematical models in physiology. Phil. Trans. R. Soc. A 2006, 364, 1099–1106.
70. Landau, L.; Lifchitz, E. Quantum Mechanics; Mir: Moscow, Russia, 1967.
71. Laforge, B.; Gueza, D.; Martinez, M.; Kupiec, J.J. Modeling embryogenesis and cancer: an approach based on an equilibrium between the autostabilization of stochastic gene expression and the interdependence of cells for proliferation. Prog. Biophys. Mol. Biol. 2005, 89, 93–120.
72. Kupiec, J.J. L'origine de l'individu; Fayard, Le Temps des Sciences: Paris, France, 2008.
73. Turner, P.; Kowalczyk, M.; Reynolds, A. New insights into the micro-fibril architecture of the wood cell wall. In COST Action E54 Book; COST Office: Brussels, Belgium, 2011.
Faster-than-light (also superluminal or FTL) communication and travel refer to the propagation of information or matter faster than the speed of light. Under the special theory of relativity, a particle (that has rest mass) with subluminal velocity needs infinite energy to accelerate to the speed of light, although special relativity does not forbid the existence of particles that travel faster than light at all times (tachyons).
On the other hand, what some physicists refer to as "apparent" or "effective" FTL[1][2][3][4] depends on the hypothesis that unusually distorted regions of spacetime might permit matter to reach distant locations in less time than light could in normal or undistorted spacetime. Although according to current theories matter is still required to travel subluminally with respect to the locally distorted spacetime region, apparent FTL is not excluded by general relativity.
Examples of FTL proposals are the Alcubierre drive and the traversable wormhole, although their physical plausibility is uncertain.
FTL travel of non-information
In the context of this article, FTL is the transmission of information or matter faster than c, a constant equal to the speed of light in a vacuum, which is 299,792,458 m/s (by definition) or about 186,282.4 miles per second. This is not quite the same as traveling faster than light, since:
• Some processes propagate faster than c, but cannot carry information (see examples in the sections immediately following).
In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity.
Daily sky motion
For an Earthbound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the Solar System, is about 4 light-years away.[5] In a geostatic view, Proxima Centauri has a speed many times greater than c, since the rim speed of an object moving in a circle is the product of the radius and the angular speed.[5] It is also possible in a geostatic view for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because the distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU.[6] The circumference of a circle with a radius of 1000 AU is greater than one light-day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame.
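The order of magnitude of this apparent speed is easily computed; the sketch below assumes one revolution per sidereal day and the 4 light-year distance quoted above.

```python
import math

# Apparent (geostatic-frame) speed of Proxima Centauri due to the daily sky rotation:
# rim speed = radius x angular speed. Purely kinematic; no information is carried.
light_year = 9.4607e15                    # m
c = 299_792_458.0                         # m/s
radius = 4.0 * light_year                 # ~4 light-years, as quoted above
omega = 2.0 * math.pi / 86_164.0          # one revolution per sidereal day, rad/s

v = radius * omega
print("apparent speed:", f"{v:.3g}", "m/s =", round(v / c), "times c")
```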
Light spots and shadows
If a laser is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than c.[7] Similarly, a shadow projected onto a distant object can be made to move across the object faster than c.[7] In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light.[7][8][9]
Apparent FTL propagation of static field effects
Main article: Static field
Since there is no "retardation" (or aberration) of the apparent position of the source of a gravitational or electric static field when the source moves with constant velocity, the static field "effect" may seem at first glance to be "transmitted" faster than the speed of light. However, uniform motion of the static source may be removed with a change in reference frame, causing the direction of the static field to change immediately, at all distances. This is not a change of position which "propagates", and thus this change cannot be used to transmit information from the source. No information or matter can be FTL-transmitted or propagated from source to receiver/observer by an electromagnetic field.
Closing speeds
The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.
Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light.
Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the right formula for computing such relative velocity.
It is instructive to compute the relative velocity of particles moving at v and -v in accelerator frame, which corresponds to the closing speed of 2v > c. Expressing the speeds in units of c, β = v/c:
\beta_{rel} = { \beta + \beta \over 1 + \beta ^2 } = { 2\beta \over 1 + \beta^2 } \leq 1.
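For comparison, a small sketch in Python contrasts the closing speed seen in the accelerator frame with the relativistically composed speed of one particle in the rest frame of the other, using the formula above; the value β = 0.99 is just an illustrative choice:

# Relativistic velocity composition vs. naive closing speed (speeds in units of c).
def combine(beta1, beta2):
    """Relativistic composition of two collinear speeds."""
    return (beta1 + beta2) / (1 + beta1 * beta2)

beta = 0.99
print("closing speed in the accelerator frame:", 2 * beta, "c")            # may exceed 1
print("speed of one particle in the other's rest frame:", combine(beta, beta), "c")  # always < 1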
Proper speeds
If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller.
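A minimal sketch of how proper velocity grows without bound while the ordinary velocity stays below c (the sample speeds are arbitrary illustrative values):

# Proper velocity = gamma * v, i.e. distance in the Earth frame per unit of the
# traveller's own (proper) time. It has no upper bound.
import math

def proper_velocity(beta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * beta            # in units of c

for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta} c  ->  proper velocity = {proper_velocity(beta):.2f} c")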
How far can one travel from the Earth?
Since one cannot travel faster than light, one might conclude that a human can never travel further from the Earth than 40 light-years if the traveler is active between the ages of 20 and 60. A traveler would then never be able to reach more than the very few star systems which exist within the limit of 20-40 light-years from the Earth. This is a mistaken conclusion: because of time dilation, the traveler can travel thousands of light-years during their 40 active years. If the spaceship accelerates at a constant 1 g (in its own changing frame of reference), it will, after 354 days, reach speeds a little under the speed of light (for an observer on Earth), and time dilation will increase their lifespan to thousands of Earth years, seen from the reference system of the Solar System, but the traveler's subjective lifespan will not thereby change. If the traveler returns to the Earth, they will land thousands of years into the future. Their speed will not be seen as higher than the speed of light by observers on Earth, and the traveler will not measure their speed as being higher than the speed of light, but will see a length contraction of the universe in their direction of travel. And as the traveler turns around to return, the Earth will seem to experience much more time than the traveler does. So, although their (ordinary) speed cannot exceed c, their proper velocity (distance as seen by Earth divided by their proper, i.e. subjective, time) can be much greater than c. This is seen in statistical studies of muons traveling much further than c times their half-life (at rest), if traveling close to c.[10]
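The standard constant-proper-acceleration ("relativistic rocket") relations make this concrete. The sketch below assumes a = 1 g and a handful of illustrative proper times; it uses the textbook formulas t = (c/a) sinh(aτ/c), x = (c²/a)(cosh(aτ/c) − 1) and v = c tanh(aτ/c), and ignores cosmological expansion:

# Constant proper acceleration from rest: Earth-frame time, distance and speed
# after a given proper time tau. Illustrative values only.
import math

c = 299_792_458.0
g = 9.81
year = 365.25 * 86400.0
ly = c * year

for tau_years in (1, 5, 10, 20):
    phi = g * (tau_years * year) / c              # rapidity
    t_earth = (c / g) * math.sinh(phi) / year     # Earth-frame elapsed time, years
    x = (c**2 / g) * (math.cosh(phi) - 1) / ly    # distance covered, light-years
    beta = math.tanh(phi)                         # speed in units of c
    print(f"tau = {tau_years:>2} yr: earth time = {t_earth:12.1f} yr, "
          f"distance = {x:14.1f} ly, v = {beta:.9f} c")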
Phase velocities above c
The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies.[11] However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information.[12] Thus a phase velocity above c does not imply the propagation of signals with a velocity above c.[13]
Group velocities above c
The group velocity of a wave (e.g., a light beam) may also exceed c in some circumstances.[14] In such cases, which typically at the same time involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above c. However, even this situation does not imply the propagation of signals with a velocity above c,[15] even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, basically because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind (distortion), the pulse maximum is effectively shifted forward in time, while the information on the pulse does not come faster than c without this effect.[16]
Universal expansion
(Image caption) History of the universe: gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).[17][18][19]
The expansion of the universe causes distant galaxies to recede from us faster than the speed of light, if proper distance and cosmological time are used to calculate the speeds of these galaxies. However, in general relativity, velocity is a local notion, so velocity calculated using comoving coordinates does not have any simple relation to velocity calculated locally.[20] (See comoving distance for a discussion of different notions of 'velocity' in cosmology.) Rules that apply to relative velocities in special relativity, such as the rule that relative velocities cannot increase past the speed of light, do not apply to relative velocities in comoving coordinates, which are often described in terms of the "expansion of space" between galaxies. This expansion rate is thought to have peaked during the inflationary epoch, which is believed to have occurred in a tiny fraction of a second after the Big Bang (models suggest the period would have been from around 10^−36 seconds after the Big Bang to around 10^−33 seconds), when the universe may have rapidly expanded by a factor of around 10^20 to 10^30.[21]
There are many galaxies visible in telescopes with redshifts of 1.4 or higher. All of these are currently receding from us at speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.[22][23]
"Our effective particle horizon is the cosmic microwave background (CMB), at redshift z ∼ 1100, because we cannot see beyond the surface of last scattering. Although the last scattering surface is not at any fixed comoving coordinate, the current recession velocity of the points from which the CMB was emitted is 3.2c. At the time of emission their speed was 58.1c, assuming (ΩM,ΩΛ) = (0.3,0.7). Thus we routinely observe objects that are receding faster than the speed of light and the Hubble sphere is not a horizon." [[24]]
However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future,[25] because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving distance#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.[23]
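The relation between redshift and today's recession velocity can be estimated numerically. The sketch below assumes a flat ΛCDM model with H0 = 70 km/s/Mpc, Ωm = 0.3 and ΩΛ = 0.7 (the parameters quoted in the passage above); it is an illustration, not a fit to data:

# Recession velocity today for a galaxy at redshift z in a flat LambdaCDM model.
import math
from scipy.integrate import quad

c = 299_792.458            # km/s
H0 = 70.0                  # km/s/Mpc (assumed)
Om, OL = 0.3, 0.7          # assumed density parameters

def E(z):
    return math.sqrt(Om * (1 + z)**3 + OL)

def recession_velocity(z):
    # Comoving distance (equal to today's proper distance in a flat model),
    # then v = H0 * D.
    D, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    D *= c / H0            # Mpc
    return H0 * D          # km/s

for z in (1.0, 1.4, 3.0, 1100.0):
    v = recession_velocity(z)
    print(f"z = {z:>6}: recession velocity today = {v:9.0f} km/s = {v / c:.2f} c")

With these parameters the crossover to superluminal recession falls near z ≈ 1.4, and the points that emitted the CMB (z ≈ 1100) come out at roughly 3c, consistent with the quotation above.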
Astronomical observations
Apparent superluminal motion is observed in many radio galaxies, blazars, quasars and recently also in microquasars. The effect was predicted by Martin Rees before it was observed, and can be explained as an optical illusion caused by the object partly moving in the direction of the observer,[26] when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Interestingly, corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light.[27] Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds.
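The optical illusion follows from simple geometry: a blob moving at true speed βc at angle θ to the line of sight has apparent transverse speed β_app = β sin θ / (1 − β cos θ), which exceeds 1 for β close to 1 and small θ. A minimal sketch with illustrative numbers:

# Apparent transverse speed of a relativistic jet component (in units of c).
import math

def beta_apparent(beta, theta_deg):
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))

for beta, theta in ((0.9, 20), (0.99, 10), (0.999, 5)):
    print(f"beta = {beta}, theta = {theta} deg -> apparent speed = "
          f"{beta_apparent(beta, theta):.1f} c")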
Quantum mechanics
Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behaviour doesn't violate local causality or allow FTL it follows that neither does the additional effect of wavefunction collapse, whether real or apparent.
The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in a vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction.[28] However, it was shown in 2011 that a single photon may not travel faster than c.[29] In quantum mechanics, virtual particles may travel faster than light, and this phenomenon is related to the fact that static field effects (which are mediated by virtual particles in quantum terms) may travel faster than light (see section on static fields above). However, macroscopically these fluctuations average out, so that photons do travel in straight lines over long (i.e., non-quantum) distances, and they do travel at the speed of light on average. Therefore, this does not imply the possibility of superluminal information transmission.
There have been various reports in the popular press of experiments on faster-than-light transmission in optics—most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light.[citation needed] However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information. There has sometimes been confusion concerning the latter point. Additionally, a channel that permits such propagation cannot be laid out faster than the speed of light.[citation needed]
Hartman effect
Main article: Hartman effect
The Hartman effect is the tunnelling effect through a barrier where the tunnelling time tends to a constant for large barriers.[30] This was first described by Thomas Hartman in 1962.[31] The barrier could, for instance, be the gap between two prisms: when the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a nonzero probability that the photon will tunnel across the gap rather than follow the refracted path. For large gaps between the prisms the tunnelling time approaches a constant and thus the photons appear to have crossed with a superluminal speed.[32]
However, an analysis by Herbert G. Winful from the University of Michigan suggests that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate".[33] The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.
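The saturation itself is easy to reproduce numerically from the textbook transmission amplitude for a rectangular barrier. In the sketch below the particle mass, barrier height and energy are illustrative assumptions (an electron and a 1 eV barrier), not parameters of the experiments discussed; the group delay is ħ times the energy derivative of the transmission phase:

# Tunnelling group delay through a rectangular barrier, showing Hartman saturation:
# for thick barriers the delay stops growing with barrier width.
import numpy as np

hbar = 1.0545718e-34
m = 9.109e-31            # electron mass, kg (illustrative particle)
eV = 1.602176634e-19
V0 = 1.0 * eV            # barrier height (assumed)
E0 = 0.4 * eV            # particle energy, E0 < V0 (assumed)

def barrier_phase(E, L):
    # Phase of the transmission amplitude with the trivial free-propagation factor
    # exp(-i k L) removed; its energy derivative gives the tunnelling group delay.
    k = np.sqrt(2 * m * E) / hbar
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    denom = (np.cosh(kappa * L)
             + 1j * (kappa**2 - k**2) / (2 * k * kappa) * np.sinh(kappa * L))
    return -np.angle(denom)

def group_delay(E, L, dE=1e-7 * eV):
    return hbar * (barrier_phase(E + dE, L) - barrier_phase(E - dE, L)) / (2 * dE)

for L_nm in (0.2, 0.5, 1.0, 2.0, 4.0):
    tau = group_delay(E0, L_nm * 1e-9)
    print(f"barrier width {L_nm:>3} nm -> tunnelling group delay {tau * 1e15:.3f} fs")

The printed delays level off as the width grows, which is the numerical content of the Hartman effect; as Winful stresses, this saturated delay should not be read as a transit velocity.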
Casimir effect
Main article: Casimir effect
In physics, the Casimir effect or Casimir-Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
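For ideal, perfectly conducting parallel plates the attractive pressure is π²ħc/(240 d⁴), which makes the rapid fall-off with separation d easy to see in a short sketch (the plate separations are illustrative values):

# Casimir pressure between two ideal parallel plates separated by d.
import math

hbar = 1.054571817e-34
c = 299_792_458.0

def casimir_pressure(d):
    return math.pi**2 * hbar * c / (240.0 * d**4)   # magnitude, pascals

for d_nm in (100, 500, 1000):
    d = d_nm * 1e-9
    print(f"d = {d_nm:>4} nm -> attractive pressure ~ {casimir_pressure(d):.3e} Pa")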
EPR paradox
Main article: EPR paradox
The EPR paradox refers to a famous thought experiment of Einstein, Podolsky and Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the measurement of the state of one of the quantum systems of an entangled pair apparently instantaneously forces the other system (which may be distant) to be measured in the complementary state. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to.
An experiment performed in 1997 by Nicolas Gisin at the University of Geneva has demonstrated non-local quantum correlations between particles separated by over 10 kilometers.[34] But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved; see no-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues in Geneva, Switzerland has determined that in any hypothetical non-local hidden-variables theory the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light.[35]
Delayed choice quantum eraser
The delayed choice quantum eraser (an experiment of Marlan Scully) is a version of the EPR paradox in which the presence or absence of interference after the passage of a photon through a double-slit apparatus depends on the conditions of observation of a second photon entangled with the first. The characteristic feature of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon,[36] which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference or not. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured; this ensures that an experimenter watching only the photons going through the slit does not obtain information about the other photons in an FTL or backwards-in-time manner.[37][38]
FTL communication
Special relativity raises several related obstacles to faster-than-light travel and communication:
• The relativistic momentum of a massive particle would increase with speed in such a way that at the speed of light an object would have infinite momentum.
• Either way, such acceleration requires infinite energy.
• Some observers with sub-light relative motion will disagree about which of any two events separated by a space-like interval occurs first.[39] In other words, any faster-than-light travel will be seen as traveling backwards in time in some other, equally valid, frames of reference,[40] unless one assumes the speculative hypothesis of possible Lorentz violations at a presently unobserved scale (for instance the Planck scale).[citation needed] Therefore any theory which permits "true" FTL also has to cope with time travel and all its associated paradoxes,[41] or else to assume Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale).
• In special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c.[42] In general relativity no coordinate system on a large region of curved spacetime is "inertial", so it is permissible to use a global coordinate system where objects travel faster than c, but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" and the local speed of light will be c in this frame,[43] with massive objects moving through this local neighborhood always having a speed less than c in the local inertial frame.
Faster light (Casimir vacuum and quantum tunnelling)
Einstein's equations of special relativity postulate that the speed of light in a vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light.
The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases.[44] When vacuum energy is lowered, light itself has been predicted to go faster than the standard value c. This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: a photon traveling between two plates that are 1 micrometer apart would have its speed increased by only about one part in 10^36.[45] Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis[46] argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates' rest frame would define a "preferred frame" for FTL signalling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all.[47]
The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Cologne, claim to have violated relativity experimentally by transmitting photons faster than the speed of light.[32] They say they have conducted an experiment in which microwave photons—relatively low energy packets of light—travelled "instantaneously" between a pair of prisms that had been moved up to 3 ft (1 m) apart. Their experiment involved an optical phenomenon known as "evanescent modes", and they claim that since evanescent modes have an imaginary wave number, they represent a "mathematical analogy" to quantum tunnelling.[32] Nimtz has also claimed that "evanescent modes are not fully describable by the Maxwell equations and quantum mechanics have to be taken into consideration."[48] Other scientists such as Herbert G. Winful and Robert Helling have argued that in fact there is nothing quantum-mechanical about Nimtz's experiments, and that the results can be fully predicted by the equations of classical electromagnetism (Maxwell's equations).[49][50]
Herbert G. Winful argues that the train analogy is a variant of the "reshaping argument" for superluminal tunneling velocities, but he goes on to say that this argument is not actually supported by experiment or simulations, which actually show that the transmitted pulse has the same length and shape as the incident pulse.[49] Instead, Winful argues that the group delay in tunneling is not actually the transit time for the pulse (whose spatial length must be greater than the barrier length in order for its spectrum to be narrow enough to allow tunneling), but is instead the lifetime of the energy stored in a standing wave which forms inside the barrier. Since the stored energy in the barrier is less than the energy stored in a barrier-free region of the same length due to destructive interference, the group delay for the energy to escape the barrier region is shorter than it would be in free space, which according to Winful is the explanation for apparently superluminal tunneling.[52][53]
A number of authors have published papers disputing Nimtz's claim that Einstein causality is violated by his experiments, and there are many other papers in the literature discussing why quantum tunneling is not thought to violate causality.[54]
It was later claimed by the Keller group in Switzerland that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500-600 attoseconds (an attosecond is one quintillionth (10−18) of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy.[55] Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects.[52][53][56]
Give up (absolute) relativity
Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo. One consequence of this theory is a variable speed of light, where photon speed would vary with energy, and some zero-mass particles might possibly travel faster than c.[citation needed] However, even if this theory is accurate, it is still very unclear whether it would allow information to be communicated, and appears not in any case to allow massive particles to exceed c.
Space-time distortion
Heim theory
In 1977, a paper on Heim theory theorized that it may be possible to travel faster than light by using magnetic fields to enter a higher-dimensional space.[59]
MiHsC/Quantised inertia
A theory has been proposed that modifies inertia (MiHsC, or quantised inertia) by assuming it is due to Unruh radiation subject to a Hubble-scale Casimir effect. MiHsC predicts a minimum possible acceleration even at light speed, implying that this speed can be exceeded.[60]
Lorentz symmetry violation
The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension.[61][62][63] This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments[64] and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons.[65] The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance[66] shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field;[67] however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light may be the ultimate constituents of matter.
In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized,[68][69] existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension.[65] Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale.
Another recent proposal (see EPR paradox above), resulting from the analysis of an EPR communication set-up, is based on removing the effective retarded-time terms in the Lorentz transform to yield a preferred absolute reference frame.[70][71] This frame cannot be used to do physics (i.e., to compute the influence of light-speed-limited signals), but it provides an objective, absolute frame that all observers could agree upon if superluminal communication were possible. If this sounds indulgent, note that it allows simultaneity, absolute space and time, and a deterministic universe (along with decoherence theory), whereas the status quo permits time travel/causality paradoxes, subjectivity in the measurement process, and multiple universes.
Superfluid theories of physical vacuum
Main article: Superfluid vacuum
In this approach the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, whereas Lorentz symmetry is not an exact symmetry of nature but rather an approximate description valid only for small fluctuations of the superfluid background.[72] Within this framework a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as a small-amplitude collective excitation mode,[73] whereas relativistic elementary particles can be described by particle-like modes in the limit of low momenta.[74] The important fact is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy; also, faster-than-light propagation is possible without requiring moving objects to have imaginary mass.[75][76]
Time of flight of neutrinos
MINOS experiment
Main article: MINOS
In 2007 the MINOS collaboration reported flight-time measurements of 3 GeV neutrinos indicating a speed exceeding that of light at 1.8-sigma significance.[77] However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light.[78] After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light. Further measurements are planned.[79]
OPERA neutrino anomaly
On September 22, 2011, a paper[80] from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland to the Gran Sasso National Laboratory in Italy, traveling faster than light by a factor of 2.48×10−5 (approximately 1 in 40,000), a statistic with 6.0-sigma significance.[81] On 18 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results.[82][83] However, scientists were skeptical about the results of these experiments, the significance of which was disputed.[84] In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel time from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light.[85] Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast.[86]
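As a rough consistency check, the quoted fractional excess over the 730 km baseline corresponds to an arrival roughly 60 nanoseconds earlier than light; a back-of-the-envelope sketch:

# Convert the reported fractional speed excess into an early-arrival time.
c = 299_792_458.0
baseline = 730e3                      # metres, CERN -> Gran Sasso (approximate)
fractional_excess = 2.48e-5           # (v - c) / c as reported

t_light = baseline / c                # flight time at the speed of light
early = t_light * fractional_excess   # time gained at the reported excess speed
print(f"flight time at c: {t_light * 1e3:.3f} ms, early arrival: {early * 1e9:.0f} ns")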
Tachyons
Main article: Tachyon
In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which always moves faster than light. The hypothetical elementary particles with this property are called tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.[87][88]
Various theorists have suggested that the neutrino might have a tachyonic nature,[89][90][91][92][93] while others have disputed the possibility.[94]
General relativity
General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer.[citation needed][clarification needed] However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer.[citation needed][clarification needed] One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time and their gravity fields would be immense. To counteract the unstable nature, and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy.
General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay.[citation needed] In string theory, Eric G. Gimon and Petr Hořava have argued[95] that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passed through every point, no complete curves exist on the interior region bounded by the tube.
Variable speed of light
In physics, the speed of light in a vacuum is assumed to be a constant. However, hypotheses exist which assert that the speed of light is not a constant. The interpretation of such a statement is as follows.
The speed of light is a dimensional quantity and so, as has been emphasized in this context by João Magueijo, it cannot be measured.[96] Measurable quantities in physics are, without exception, dimensionless, although they are often constructed as ratios of dimensional quantities. For example, when the height of a mountain is measured, what is really measured is the ratio of its height to the length of a meter stick. The conventional SI system of units is based on seven basic dimensional quantities, namely distance, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity.[97] These units are defined to be independent and so cannot be described in terms of each other. As an alternative to using a particular system of units, one can reduce all measurements to dimensionless quantities expressed in terms of ratios between the quantities being measured and various fundamental constants such as Newton's constant, the speed of light and Planck's constant; physicists can define at least 26 dimensionless constants which can be expressed in terms of these sorts of ratios and which are currently thought to be independent of one another.[98] By manipulating the basic dimensional constants one can also construct the Planck time, Planck length and Planck energy which make a good system of units for expressing dimensional measurements, known as Planck units.
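As an illustration of the last point, the Planck length, time and energy follow directly from ħ, G and c; the sketch below uses rounded CODATA-style values for the constants:

# Planck units constructed from the dimensional constants hbar, G and c.
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 299_792_458.0        # m/s

l_P = math.sqrt(hbar * G / c**3)
t_P = math.sqrt(hbar * G / c**5)
E_P = math.sqrt(hbar * c**5 / G)

print(f"Planck length: {l_P:.3e} m")
print(f"Planck time:   {t_P:.3e} s")
print(f"Planck energy: {E_P:.3e} J ({E_P / 1.602e-19 / 1e9:.2e} GeV)")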
See also
Science fiction
1. ^ Gonzalez-Diaz, P. F. (2000). "Warp drive space-time" (PDF). Physical Review D 62 (4): 044005. arXiv:gr-qc/9907026. Bibcode:2000PhRvD..62d4005G. doi:10.1103/PhysRevD.62.044005.
2. ^ Loup, F.; Waite, D.; Halerewicz, E. Jr. (2001). "Reduced total energy requirements for a modified Alcubierre warp drive spacetime". arXiv:gr-qc/0107097.
3. ^ Visser, M.; Bassett, B.; Liberati, S. (2000). "Superluminal censorship". Nuclear Physics B: Proceedings Supplement 88: 267–270. arXiv:gr-qc/9810026. Bibcode:2000NuPhS..88..267V. doi:10.1016/S0920-5632(00)00782-9.
4. ^ Visser, M.; Bassett, B.; Liberati, S. (1999). "Perturbative superluminal censorship and the null energy condition". AIP Conference Proceedings 493: 301–305. arXiv:gr-qc/9908023. doi:10.1063/1.1301601. ISBN 1-56396-905-X.
5. ^ a b See Salters Horners Advanced Physics A2 Student Book, Oxford etc. (Heinemann) 2001, pp. 302 and 303
6. ^ see
7. ^ a b c Gibbs, Philip (1997). "Is Faster-Than-Light Travel or Communication Possible?". University of California, Riverside. Retrieved 20 August 2008.
8. ^ Salmon, Wesley C. (2006). Four Decades of Scientific Explanation. University of Pittsburgh Pre. p. 107. ISBN 0-8229-5926-7. , Extract of page 107
9. ^ Steane, Andrew (2012). The Wonderful World of Relativity: A Precise Guide for the General Reader. Oxford University Press. p. 180. ISBN 0-19-969461-3. , Extract of page 180
10. ^ Special Theory of Relativity
11. ^ Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. p. 62. ISBN 0-201-11609-X.
12. ^ Sommerfeld, Arnold (1907). "An Objection Against the Theory of Relativity and its Removal". Physikalische Zeitschrift 8 (23): 841–842.
13. ^ "MathPages - Phase, Group, and Signal Velocity". Retrieved 2007-04-30.
14. ^ Lijun Wang's experiment about supraluminal speed of light in a medium, L J Wang et al. 2000 Nature 406 277. Lijun Wang (French)
15. ^ Brillouin, Léon; Wave Propagation and Group Velocity, Academic Press, 1960
16. ^ Withayachumnankul, W.; et al.; "A systemized view of superluminal wave propagation," Proceedings of the IEEE, Vol. 98, No. 10, pp. 1775-1786, 2010
20. ^ "Cosmology Tutorial - Part 2". 2009-06-12. Retrieved 2011-09-26.
21. ^ "Inflationary Period from HyperPhysics". Retrieved 2011-09-26.
23. ^ a b Lineweaver, Charles; Davis, Tamara M. (2005). "Misconceptions about the Big Bang" (PDF). Scientific American. Retrieved 2008-11-06.
24. ^ Davis, Tamara M; Lineweaver, Charles H (2003). "Expanding Confusion:common misconceptions of cosmological horizons and the superluminal expansion of the universe". arXiv:astro-ph/0310808v2. doi:10.1071/AS03040. Retrieved 2015-02-09.
25. ^ Loeb, Abraham (2002). "The Long-Term Future of Extragalactic Astronomy". Physical Review D 65 (4). arXiv:astro-ph/0107568. Bibcode:2002PhRvD..65d7301L. doi:10.1103/PhysRevD.65.047301.
26. ^ Rees, Martin J. (1966). "Appearance of relativistically expanding radio sources". Nature 211 (5048): 468. Bibcode:1966Natur.211..468R. doi:10.1038/211468a0.
27. ^ Blandford, Roger D.; McKee, C. F.; Rees, Martin J. (1977). "Super-luminal expansion in extragalactic radio sources". Nature 267 (5608): 211. Bibcode:1977Natur.267..211B. doi:10.1038/267211a0.
28. ^ Feynman. "Chapter 3". QED. p. 89. ISBN 981-256-914-6.
29. ^ Zhang, Shanchao. "Single photons obey the speed limits". Physics. American Physical Society. Archived from the original on 2013-05-14. Retrieved 25 July 2011.
30. ^ Martinez, J. C.; and Polatdemir, E.; "Origin of the Hartman effect", Physics Letters A, Vol. 351, Iss. 1-2, 20 February 2006, pp. 31-36
31. ^ Hartman, Thomas E. (1962). "Tunneling of a wave packet". Journal of Applied Physics 33: 3427. doi:10.1063/1.1702424.
32. ^ a b c Nimtz, Günter; Stahlhofen, Alfons (2007). "Macroscopic violation of special relativity". arXiv:0708.0681 [quant-ph].
33. ^ Winful, Herbert G.; "Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox", Physics Reports, Vol. 436, Iss. 1-2, December 2006, pp. 1-69
34. ^ "History". Retrieved 2011-09-26.
35. ^ Salart; Baas; Branciard; Gisin; Zbinden (2008). "Testing spooky action at a distance". Nature 454 (7206): 861–864. arXiv:0808.3316. Bibcode:2008Natur.454..861S. doi:10.1038/nature07121. PMID 18704081.
36. ^ "Delayed Choice Quantum Eraser". 2002-09-04. Retrieved 2011-09-26.
37. ^ Scientific American : Delayed-Choice Experiments
38. ^ The Reference Frame: Delayed Choice Quantum Eraser
39. ^ Einstein, Albert, Relativity:the special and the general theory, Methuen & Co, 1927, pp. 25-27
40. ^ Odenwald, Sten. "Special & General Relativity Questions and Answers: If we could travel faster than light, could we go back in time?". NASA Astronomy Cafe. Retrieved 7 April 2014.
41. ^ Gott, J. Richard (2002). "Time Travel in Einstein's Universe". pp. pp. 82–83.
42. ^ Petkov, Vesselin; Relativity and the Nature of Spacetime, p. 219
43. ^ Raine, Derek J.; Thomas, Edwin George; and Thomas, E. G.; An Introduction to the Science of Cosmology, p. 94
44. ^ "What is the 'zero-point energy' (or 'vacuum energy') in quantum physics? Is it really possible that we could harness this energy?". Scientific American. 1997-08-18. Retrieved 2009-05-27.
45. ^ Scharnhorst, Klaus (1990-05-12). "Secret of the vacuum: Speedier light". Retrieved 2009-05-27.
46. ^ Visser, Matt; Liberati, Stefano; Sonego, Sebastiano (2001-07-27). "Faster-than-c signals, special relativity, and causality". Annals of Physics 298: 167–185. arXiv:gr-qc/0107091. Bibcode:2002AnPhy.298..167L. doi:10.1006/aphy.2002.6233.
47. ^ Fearn, Heidi (2007). "Can Light Signals Travel Faster than c in Nontrivial Vacuua in Flat space-time? Relativistic Causality II". LaserPhys. 17 (5): 695–699. arXiv:0706.0553. Bibcode:2007LaPhy..17..695F. doi:10.1134/S1054660X07050155.
48. ^ Nimtz, Günter; Superluminal Tunneling Devices, 2001
49. ^ a b Winful, Herbert G. (2007-09-18). "Comment on "Macroscopic violation of special relativity" by Nimtz and Stahlhofen". arXiv:0709.2736 [quant-ph].
50. ^ Helling, Robert C.; "Faster than light or not" (blog)
51. ^ Anderson, Mark (18–24 August 2007). "Light seems to defy its own speed limit". New Scientist 195 (2617). p. 10.
52. ^ a b Winful, Herbert G. (December 2006). "Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox" (PDF). Physics Reports 436 (1–2): 1–69. Bibcode:2006PhR...436....1W. doi:10.1016/j.physrep.2006.09.002.
53. ^ a b For a summary of Herbert G. Winful's explanation for apparently superluminal tunneling time which does not involve reshaping, see
54. ^ A number of papers are listed at Literature on Faster-than-light tunneling experiments
55. ^ Eckle, P.; et al., "Attosecond Ionization and Tunneling Delay Time Measurements in Helium", Science, 322 (2008) 1525
56. ^ Sokolovski, D. (8 February 2004). "Why does relativity allow quantum tunneling to 'take no time'?" (PDF). Proceedings of the Royal Society A 460 (2042): 499–506. Bibcode:2004RSPSA.460..499S. doi:10.1098/rspa.2003.1222.
57. ^ Lineweaver, Charles H.; and Davis, Tamara M. (March 2005). "Misconceptions about the Big Bang". Scientific American.
58. ^ Traveling Faster Than the Speed of Light: A New Idea That Could Make It Happen Newswise, retrieved on 24 August 2008.
59. ^ Heim, Burkhard (1977). "Vorschlag eines Weges einer einheitlichen Beschreibung der Elementarteilchen [Recommendation of a Way to a Unified Description of Elementary Particles]". Zeitschrift für Naturforschung 32a: 233–243. Bibcode:1977ZNatA..32..233H.
60. ^ McCulloch, M. E. (2014). Physics from the edge: a new cosmological model for inertia. World Scientific. ISBN 978-9814596251.
61. ^ Colladay, Don; Kostelecký, V. Alan (1997). "CPT violation and the standard model". Physical Review D 55 (11): 6760. arXiv:hep-ph/9703464. Bibcode:1997PhRvD..55.6760C. doi:10.1103/PhysRevD.55.6760.
62. ^ Colladay, Don; Kostelecký, V. Alan (1998). "Lorentz-violating extension of the standard model". Physical Review D 58 (11). arXiv:hep-ph/9809521. Bibcode:1998PhRvD..58k6002C. doi:10.1103/PhysRevD.58.116002.
63. ^ Kostelecký, V. Alan (2004). "Gravity, Lorentz violation, and the standard model". Physical Review D 69 (10). arXiv:hep-th/0312310. Bibcode:2004PhRvD..69j5009K. doi:10.1103/PhysRevD.69.105009.
64. ^ Gonzalez-Mestres, Luis (2009). "AUGER-HiRes results and models of Lorentz symmetry violation". Nuclear Physics B: Proceedings Supplements 190: 191–197. arXiv:0902.0994. Bibcode:2009NuPhS.190..191G. doi:10.1016/j.nuclphysbps.2009.03.088.
65. ^ a b Kostelecký, V. Alan; Russell, Neil (2011). "Data tables for Lorentz and CPT violation". Review of Modern Physics 83: 11. arXiv:0801.0287. Bibcode:2011RvMP...83...11K. doi:10.1103/RevModPhys.83.11.
66. ^ Kostelecký, V. Alan; and Samuel, S.; Spontaneous Breaking of Lorentz Symmetry in String Theory, Physical Review D 39, 683 (1989)
67. ^ "PhysicsWeb - Breaking Lorentz symmetry". 2004-04-05. Archived from the original on 2004-04-05. Retrieved 2011-09-26.
68. ^ Mavromatos, Nick E.; Testing models for quantum gravity, CERN Courier, (August 2002)
69. ^ Overbye, Dennis; Interpreting the Cosmic Rays, The New York Times, 31 December 2002
70. ^ Cornwall, Remi. "Secure Quantum Communication and Superluminal Signalling on the Bell Channel". arXiv:1106.2257.
71. ^ Cornwall, Remi. "Is the Consequence of Superluminal Signalling to Physics Absolute Motion through an Ether?". arXiv:1106.2258.
72. ^ Volovik, G. E. (2003). "The Universe in a helium droplet". International Series of Monographs on Physics 117: 1–507.
73. ^ Zloshchastiev, Konstantin G. (2009). "Spontaneous symmetry breaking and mass generation as built-in phenomena in logarithmic nonlinear quantum theory". Acta Physica Polonica B 42 (2): 261–292. arXiv:0912.4139. doi:10.5506/APhysPolB.42.261.
74. ^ Avdeenkov, Alexander V.; Zloshchastiev, Konstantin G. (2011). "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent". Journal of Physics B: Atomic, Molecular and Optical Physics 44 (19): 195303. arXiv:1108.0847. Bibcode:2011JPhB...44s5303A. doi:10.1088/0953-4075/44/19/195303.
75. ^ Zloshchastiev, Konstantin G.; Chakrabarti, Sandip K.; Zhuk, Alexander I.; Bisnovatyi-Kogan, Gennady S. (2010). "Logarithmic nonlinearity in theories of quantum gravity: Origin of time and observational consequences". AIP Conference Proceedings. p. 112. arXiv:0906.4282. Bibcode:2010AIPC.1206..112Z. doi:10.1063/1.3292518.
76. ^ Zloshchastiev, Konstantin G. (2011). "Vacuum Cherenkov effect in logarithmic nonlinear quantum theory". Physics Letters A 375 (24): 2305. arXiv:1003.0657. Bibcode:2011PhLA..375.2305Z. doi:10.1016/j.physleta.2011.05.012.
77. ^ Adamson, P.; Andreopoulos, C.; Arms, K.; Armstrong, R.; Auty, D.; Avvakumov, S.; Ayres, D.; Baller, B. et al. (2007). "Measurement of neutrino velocity with the MINOS detectors and NuMI neutrino beam". Physical Review D 76 (7). arXiv:0706.0437. Bibcode:2007PhRvD..76g2005A. doi:10.1103/PhysRevD.76.072005.
78. ^ Overbye, Dennis (22 September 2011). "Tiny neutrinos may have broken cosmic speed limit". New York Times. That group found, although with less precision, that the neutrino speeds were consistent with the speed of light.
79. ^ "MINOS reports new measurement of neutrino velocity". Fermilab today. June 8, 2012. Retrieved June 8, 2012.
80. ^ Adam; Agafonova; Aleksandrov; Altinok; Alvarez Sanchez; Aoki; Ariga; Ariga; Autiero (2011). "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam". arXiv:1109.4897 [hep-ex].
81. ^ Cho, Adrian; Neutrinos Travel Faster Than Light, According to One Experiment, Science NOW, 22 September 2011
82. ^ Overbye, Dennis (18 November 2011). "Scientists Report Second Sighting of Faster-Than-Light Neutrinos". New York Times. Retrieved 2011-11-18.
83. ^ Adam, T.; (OPERA Collaboration) et al. (17 November 2011). "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam". arXiv:1109.4897v2 [hep-ex].
84. ^ Reuters: Study rejects "faster than light" particle finding
85. ^ ICARUS collaboration (March 15, 2012). "Measurement of the neutrino velocity with the ICARUS detector at the CNGS beam". arXiv:1203.3433.
86. ^ Strassler, M. (2012) "OPERA: What Went Wrong"
88. ^ Gates, S. James. "Superstring Theory: The DNA of Reality".
89. ^ Chodos, A.; Hauser, A. I.; and Kostelecký, V. Alan; The Neutrino As A Tachyon, Physics Letters B 150, 431 (1985)
90. ^ Chodos, Alan; Kostelecký, V. Alan; IUHET 280 (1994). "Nuclear Null Tests for Spacelike Neutrinos". Physics Letters B 336 (3–4): 295–302. arXiv:hep-ph/9409404. Bibcode:1994PhLB..336..295C. doi:10.1016/0370-2693(94)90535-5.
91. ^ Chodos, Alan; Kostelecký, V. Alan; Potting, R.; and Gates, E.; Null experiments for neutrino masses, Modern Physics Letters A7, 467 (1992)
92. ^ List of articles on the tachyonic neutrino idea (may be incomplete). InSPIRE database. Parity Violation and Neutrino Mass Tsao Chang
93. ^ Chang, Taso; Parity Violation and Neutrino Mass, Nuclear Science and Techniques, Vol. 13, No. 3 (2002) 129
94. ^ Hughes, R. J.; and Stephenson, G. J., Jr.; Against tachyonic neutrinos, Physics Letters B 244, 95-100 (1990)
95. ^ Gimon, Eric G.; Hořava, Petr (2004). "Over-rotating black holes, Gödel holography and the hypertube". arXiv:hep-th/0405019 [hep-th].
96. ^ Magueijo, João; Albrecht, Andreas (1999). "A time varying speed of light as a solution to cosmological puzzles". Physical Review D 59 (4). arXiv:astro-ph/9811018. Bibcode:1999PhRvD..59d3516A. doi:10.1103/PhysRevD.59.043516.
97. ^ "SI base units".
98. ^ "constants".
|
68cf6fd4d8d724e5 |
Comment Re:Impossible (Score 1) 600
>We still have no idea what caused the Universe to exist
Stop please. If you had even an inkling of what you were talking about, you would know that question doesn't even make sense.
>Interestingly there are thousands of doctors that question the massive vaccination policies today, but we are not allowed to debate any of their merits because it's taught as fact that vaccines cause no harm. (also makes a few companies assloads of money, go figure...).
I would like to see studies supporting your wild claims, and the names of the "thousands" of doctors who question vaccination. On the contrary, tens of thousands of papers and studies overwhelmingly conclude vaccination is beneficial. Stop being a conspiracy nut, it's not good for you.
Comment Re:!P is not NP and NP-Hard is not NP-Complete (Score 1) 199
From TFA:
As it was shown in this paper, solving the Schrödinger equation for any Schrödinger Hamiltonian is a problem at least as hard as the hardest problems in the NP computational complexity class. This implies that unless P and NP collapse into one class (which is very unlikely as it would imply many startling results that are currently believed to be false), coming up with the exact solution to Schrödinger's equation for an arbitrary system will inevitably involve exhaustive search over an exponentially large set of all possible candidate solutions. As a result, computational resources required by an algorithm using brute force will grow so rapidly with the system microscopic constituent particle number that bringing any additional resources to bear on the algorithm will be just of no value. And so, for anyone living in the real physical world (of limited computational resources) the Schrödinger equation will turn out to be simply unsolvable for macroscopic objects and accordingly inapplicable to their time evolution portrayal.
In other words, in the case, in which P class is not equal to NP, it is impossible to overlap de- terministic quantum and classical descriptions in order to obtain a rigorous derivation of classical properties from quantum mechanics.
Comment Re:If you're concerned... (Score 1) 351
Sure. For a userbase so keen on protecting freedoms and fighting mass surveillance and government control, fighting for silly gun rights that the big govt is trying to take from you and all, you seem way too ready to take the piss out of bitcoin, and to keep holding on to fiat. I just don't get the hate.
Comment Re:Perhaps not (Score 1) 598
Did you read your parent post? Is it possible that it doesn't make sense to you?? How can you be fine with protecting my right to incite violence against a group of people? Is it not EXACTLY what Hitler did in the decade prior to WWII? Again, this particular case is ridiculous, but your viewpoint is frankly disturbing...
Comment Re:Perhaps not (Score 1) 598
Is it though? What led to WWII if not a small group of men exercising their freedom of speech to the point of getting a big nation to become a murderous war machine that killed 70 million people and transformed the world as we know it? You got to draw the line somewhere. Not saying I agree with this particular case, but freedom of speech is far from an absolutely sacred right.
Comment Re:Nosy Parkers (Score 1) 598
Bad words are the sole thing that led to the deaths of 70 million people in 1939-1945. Bad words turned the Axis powers into murderous war machines, not anything else. You go tell the 70 million dead of WWII that Hitler's bad words can't harm anyone. You got to draw the line somewhere. Not saying I agree with this particular case, but freedom of speech is far from an absolutely sacred right.
Comment Re:Perhaps not (Score 1) 598
??? What the fuck? Last time I checked, climate change was not a religious belief being shouted by loonies. It's a scientific fact: a truth. When you say "climate change proponents" you may have the mistaken assumption that there is some kind of debate over this matter. Like the religious nuts who think there is an ongoing debate about evolution. No there is not god damn it. It's a fact!
That is a fallacious argument. I can drive at 200km/h in the motorway and reach my destination safe and sound, as I can respect all speed laws and still die on the road. Regardless, driving at 200km/h is still more dangerous than driving at 120km/h. Likewise, of course a racedriver could probably cruise at double the speed limit relatively safely, but that does not mean that it is safe to drive at those speeds. The problem is with the driver AND with the car. The Porsche is a dangerous car. The last line of your post reminds me how much this isn't a 'news for nerds' site.
Comment Re:quickoffice is free and available to any Androi (Score 1) 178
If you have rooted it, as I assume a /. user would have, there are a million apps to uninstall preinstalled apps, for example: Just be careful not to uninstall the Phone app or something.
|
a1ba814ad9bbb2a0 | This Quantum World/Implications and applications/Beyond hydrogen: the Periodic Table
Beyond hydrogen: the Periodic Table
If we again assume that the nucleus is fixed at the center and ignore relativistic and spin effects, then the stationary states of helium are the solutions of the following equation:
E\,\psi=-{\hbar^2\over2m}
\left[{\partial^2\psi\over\partial x_1^2}+{\partial^2\psi\over\partial y_1^2}+{\partial^2\psi\over\partial z_1^2}+ {\partial^2\psi\over\partial x_2^2}+{\partial^2\psi\over\partial y_2^2}+{\partial^2\psi\over\partial z_2^2}\right]+ \left[-\frac{2e^2}{r_1}-\frac{2e^2}{r_2}+\frac{e^2}{r_{12}}\right]\psi.
The wave function now depends on six coordinates, and the potential energy V is made up of three terms. r_1=\sqrt{x_1^2+y_1^2+z_1^2} and r_2=\sqrt{x_2^2+y_2^2+z_2^2} are associated with the respective distances of the electrons from the nucleus, and r_{12}=\sqrt{(x_2{-}x_1)^2+ (y_2{-}y_1)^2 +(z_2{-}z_1)^2} is associated with the distance between the electrons. Think of V as the value the potential energy of the system would have if the two electrons were located at \mathbf{r}_1 and \mathbf{r}_2, respectively; the last term, e^2/r_{12}, represents their mutual repulsion.
Why are there no separate wave functions for the two electrons? The joint probability of finding the first electron in a region A and the second in a region B (relative to the nucleus) is given by
p(A,B)=\int_A\!d^3r_1\int_B\!d^3r_2\,|\psi(\mathbf{r}_1,\mathbf{r}_2)|^2.
If the probability of finding the first electron in A were independent of the whereabouts of the second electron, then we could assign to it a wave function \psi_1(\mathbf{r}_1), and if the probability of finding the second electron in B were independent of the whereabouts of the first electron, we could assign to it a wave function \psi_2(\mathbf{r}_2). In this case \psi(\mathbf{r}_1,\mathbf{r}_2) would be given by the product \psi_1(\mathbf{r}_1)\,\psi_2(\mathbf{r}_2) of the two wave functions, and p(A,B) would be the product of p(A)=\int_A\!d^3r_1\,|\psi_1(\mathbf{r}_1)|^2 and p(B)=\int_B\!d^3r_2\,|\psi_2(\mathbf{r}_2)|^2. But in general, and especially inside a helium atom, the positional probability distribution for the first electron is conditional on the whereabouts of the second electron, and vice versa, given that the two electrons repel each other (to use the language of classical physics).
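The factorization property of product states can be illustrated numerically. The one-dimensional toy model below uses two assumed Gaussian wave functions, not actual helium orbitals, and checks that p(A,B) = p(A) p(B) when the two-particle state is a product:

# 1-D toy model: for a product wave function the joint probability factorizes.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, x0, s):
    g = np.exp(-(x - x0)**2 / (2 * s**2))
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)   # normalize

psi1 = gaussian(x, -1.0, 1.0)      # assumed wave function of particle 1
psi2 = gaussian(x, +1.0, 1.5)      # assumed wave function of particle 2
psi = np.outer(psi1, psi2)         # product state psi(x1, x2)

A = x < 0.0                        # region A for particle 1
B = x > 0.5                        # region B for particle 2

pA = np.sum(np.abs(psi1[A])**2) * dx
pB = np.sum(np.abs(psi2[B])**2) * dx
pAB = np.sum(np.abs(psi[np.ix_(A, B)])**2) * dx * dx

print(f"p(A) * p(B) = {pA * pB:.6f}")
print(f"p(A, B)     = {pAB:.6f}   (equal for a product state)")

For a genuinely correlated two-electron state, such as the helium ground state, the two numbers would differ, which is the point made in the paragraph above.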
For the lowest energy levels, the above equation has been solved by numerical methods. With three or more electrons it is hopeless to look for exact solutions of the corresponding Schrödinger equation. Nevertheless, the Periodic Table and many properties of the chemical elements can be understood by using the following approximate theory.
First, we disregard the details of the interactions between the electrons. Next, since the chemical properties of atoms depend on their outermost electrons, we consider each of these electrons subject to a potential that is due to (i) the nucleus and (ii) a continuous, spherically symmetric, charge distribution doing duty for the other electrons. We again neglect spin effects except that we take account of the Pauli exclusion principle, according to which the probability of finding two electrons (more generally, two fermions) having exactly the same properties is 0. Thus two electrons can be associated with exactly the same wave function provided that their spin states differ in the following way: whenever the spins of the two electrons are measured with respect to a given axis, the outcomes are perfectly anticorrelated; one will be "up" and the other will be "down". Since there are only two possible outcomes, a third electron cannot be associated with the same wave function.
This approximate theory yields stationary wave functions \psi_{nlm}(\mathbf{r}) called orbitals for individual electrons. These are quite similar to the stationary wave functions one obtains for the single electron of hydrogen, except that their dependence on the radial coordinate is modified by the negative charge distribution representing the remaining electrons. As a consequence of this modification, the energies associated with orbitals with the same quantum number n but different quantum numbers l are no longer equal. For any given n\geq1, orbitals with higher l yield a larger mean distance between the electron and the nucleus, and the larger this distance, the more the negative charge of the remaining electrons screens the positive charge of the nucleus. As a result, an electron with higher l is less strongly bound (given the same n), so its ionization energy is lower.
Chemists group orbitals into shells according to their principal quantum number. As we have seen, the n-th shell can "accommodate" up to n^2\times2 electrons. Helium has the first shell completely "filled" and the second shell "empty." Because the helium nucleus has twice the charge of the hydrogen nucleus, the two electrons are, on average, much nearer the nucleus than the single electron of hydrogen. The ionization energy of helium is therefore much larger, 2372.3 kJ/mol as compared to 1312.0 kJ/mol for hydrogen. On the other hand, if you tried to add an electron to create a negative helium ion, it would have to go into the second shell, which is almost completely screened from the nucleus by the electrons in the first shell. Helium is therefore neither prone to give up an electron nor able to hold an extra electron. It is chemically inert, as are all elements in the rightmost column of the Periodic Table.
In the second row of the Periodic Table the second shell gets filled. Since the energies of the 2p orbitals are higher than that of the 2s orbital, the latter gets "filled" first. With each added electron (and proton!) the entire electron distribution gets pulled in, and the ionization energy goes up, from 520.2 kJ/mol for lithium (atomic number Z=3) to 2080.8 kJ/mol for neon (Z=10). While lithium readily parts with an electron, fluorine (Z=9) with a single empty "slot" in the second shell is prone to grab one. Both are therefore quite active chemically. The progression from sodium (Z=11) to argon (Z=18) parallels that from lithium to neon.
There is a noteworthy peculiarity in the corresponding sequences of ionization energies: The ionization energy of oxygen (Z=8, 1313.9 kJ/mol) is lower than that of nitrogen (Z=7, 1402.3 kJ/mol), and that of sulfur (Z=16, 999.6 kJ/mol) is lower than that of phosphorus (Z=15, 1011.8 kJ/mol). To understand why this is so, we must take account of certain details of the inter-electronic forces that we have so far ignored.
Suppose that one of the two 2p electrons of carbon (Z=6) goes into the m{=}0 orbital with respect to the z axis. Where will the other 2p electron go? It will go into any vacant orbital that minimizes the repulsion between the two electrons, by maximizing their mean distance. This is neither of the orbitals with |m|{=}1 with respect to the z axis but an orbital with m{=}0 with respect to some axis perpendicular to the z axis. If we call this the x axis, then the third 2p electron of nitrogen goes into the orbital with m{=}0 relative to the y axis. The fourth 2p electron of oxygen then has no choice but to go — with opposite spin — into an already occupied 2p orbital. This raises its energy significantly and accounts for the drop in ionization energy from nitrogen to oxygen.
By the time the 3p orbitals are "filled," the energies of the 3d states are pushed up so high (as a result of screening) that the 4s state is energetically lower. The "filling up" of the 3d orbitals therefore begins only after the 4s orbitals are "occupied," with scandium (Z=21).
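The resulting filling order can be illustrated with a small program (an illustration added here, not part of the original text; the function names are our own): sorting subshells by the empirical Madelung rule, first by n + l and then by n, reproduces the observation above that 4s fills before 3d.

import Data.List (sortOn)

-- illustrative sketch: subshell filling order from the empirical (n + l) rule
subshells :: [(Int, Int)]            -- (n, l) with l < n
subshells = [(n, l) | n <- [1..5], l <- [0 .. n - 1]]

fillingOrder :: [String]
fillingOrder = [ show n ++ (["s", "p", "d", "f", "g"] !! l)
               | (n, l) <- sortOn (\(n, l) -> (n + l, n)) subshells ]

main :: IO ()
main = print (take 8 fillingOrder)   -- ["1s","2s","2p","3s","3p","4s","3d","4p"]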
Thus even this simplified and approximate version of the quantum theory of atoms has the power to predict the qualitative and many of the quantitative features of the Periodic Table. |
4aa851570f8dfa7f |
When I read descriptions of the many-worlds interpretation of quantum mechanics, they say things like "every possible outcome of every event defines or exists in its own history or world", but is this really accurate? This seems to imply that the universe only split at particular moments when "events" happen. This also seems to imply that the universe only splits into a finite number of "every possible outcome".
I imagine things differently. Rather than splitting into a finite number of universes at discrete times, I imagine that at every moment the universe splits into an uncountably infinite number of universes, perhaps as described by the Schrödinger equation.
Which interpretation is right? (Or otherwise, what is the right interpretation?) If I'm right, how does one describe such a vast space mathematically? Is this a Hilbert space? If so, is it a particular subset of Hilbert space?
No one has any justifiable unique answers to such questions. The many-worlds interpretation isn't an actual theory of physics, an actual set of rules, ideas, or equations. It's just a vague and, when looked at with any precision, meaningless and vacuous philosophical paradigm. Obviously, proper quantum mechanics doesn't imply any splitting whatsoever. Any rule for when a splitting occurs is bound to be unnatural. The only "splitting" that proper QM allows is an approximate one, given by decoherence: the moment when the chances of parts of $\psi$ to "re-interfere" in the future are negligible. – Luboš Motl Jul 21 '12 at 7:11
@LubošMotl I don't really understand your statement that "Obviously, proper quantum mechanics doesn't imply any splitting whatsoever." in this context. They are not explaining splitting, but the state vector reduction/collapse of the wavefunction. I agree that the many-worlds interpretation is physically flawed and has no mathematical basis as a theory. However, interpretations like the Many-Minds/multi-consciousness interpretation do. Moreover, this particular theory is complete, well defined and cannot be disproved from a physical standpoint. Of course, this does not make it correct! – Killercam Jul 21 '12 at 10:43
2 Answers
Many worlders won't tell you this dirty little secret but how often splitting happens, and how many worlds there are, depends upon the choice of coarse graining, and the coarse graining resolution. No, it's not possible to ramp up the coarse graining all the way to the finest levels because a decoherence/coherence threshold would be crossed. And no, there is no canonical coarse graining either.
The preferred basis depends upon the environment. Always. What is the preferred basis for a closed self-contained universe?
A more accurate answer than "The preferred basis depends upon the environment. Always." would be that the supporters of the MWI haven't yet described any other mechanism by which it could arise, just as they haven't yet shown how the Born rule would emerge even for a finite system. – Niel de Beaudrap Jul 21 '12 at 11:52
The Many Worlds interpretation is popularly misunderstood. The wave function itself contains a spectrum of universes, one corresponding to each eigenvalue for a given operator. The "splitting" of the "many worlds" is represented by the time evolution of the wave function described by the Schrodinger equation. As Lubos mentions above, these "universes" only become separate through decoherence.
Consider, for example, a wave function in the position-basis given by a delta-function at x=0. This represents one universe. Now time-evolve the wave function using the Schrödinger equation. The delta-function has now spread out a bit. It is peaked at x=0, but has non-zero values at x=+1 and x=-1. This represents the existence of universes in which the position of the particle is at x=0, x=+1, and x=-1. In some sense there are "more" universes at x=0 than at x=±1, because the wave function is more highly peaked at x=0. This is where some of the difficulty in the Many Worlds interpretation comes in: what ontology to use to describe the "splitting", "how many universes" are at x=0 vs x=±1, and so on. The main point I want to make is that the "splitting" is just an interpretation of what is happening with the evolution of the wave function according to the Schrödinger equation. Nothing "more" is actually happening. You model the "splitting" using the tried-and-true Schrödinger evolution of the wave function.
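To make the spreading quantitative (an illustration added here, not part of the original answer), replace the idealized delta function by a narrow Gaussian of initial width σ₀; free-particle Schrödinger evolution keeps it Gaussian while its width grows as

σ(t) = σ₀ √( 1 + ( ħ t / (2 m σ₀²) )² ),

so the "spectrum of universes" described above is really a continuous distribution over positions that broadens with time.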
You imply that there is a spectrum (a countable infinity) of "possible universes". But is it actually a continuum (an uncountable infinity) of "possible universes"? Can the delta-function have non-zero values at locations everywhere between 0 and +/-1? Or maybe a better example (since I don't understand the delta-function), in the double-slit experiment, can't a particular photon hit the detector plane at any point on the plane? (<- thus uncountable infinite possible universes) – John Berryman Jul 21 '12 at 13:04
@John Berryman The word 'spectrum' does not imply a countable infinity. It is a continuum representing an uncountably infinite number of universes in the example I gave. You can think of a delta function like a very narrow spike. The Schrödinger equation time-evolves a narrow spike into a wider and wider Gaussian shape. In the example, in order to keep things simple, I approximated this as {-1, 0, 1} (a very rough approximation, but serves to illustrate the point). – user1247 Jul 21 '12 at 19:05
|
ba14014a94e925b8 | Orbital hybridization
Four sp3 orbitals.
Three sp2 orbitals.
In chemistry, hybridization (or hybridisation) is the concept of mixing atomic orbitals into new hybrid orbitals suitable for the pairing of electrons to form chemical bonds in valence bond theory. Hybrid orbitals are very useful in the explanation of molecular geometry and atomic bonding properties. [note 1]
Historical development
Chemist Linus Pauling first used hybridization theory to explain the structure of molecules such as methane (CH4).[1] This concept was developed for such simple chemical systems. But the approach was later applied more widely. Today, chemists use it to explain the structures of organic compounds.
Orbitals represent how electrons behave within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals. Chemists use the atomic orbitals of the hydrogen atom, which is the only atom for which an exact analytic solution to its Schrödinger equation is known. In heavier atoms, like carbon, nitrogen, and oxygen, the atomic orbitals involved in bonding are the 2s and 2p orbitals. These orbitals can be occupied in a hydrogen atom, but only when the electron is in an excited state. Hybrid orbitals are assumed to be mixtures of these atomic orbitals, superimposed on each other in various proportions. Hybridization provides a quantum-mechanical insight into Lewis structures. Chemists use hybridization theory mainly in organic chemistry.
spx and sdx terminology
This terminology describes the weight of the respective components of a hybrid orbital. For example, in methane, the C hybrid orbital which forms each C-H bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N[s + (√3)pσ], where N is a normalization constant (here 1/2) and pσ is a p orbital directed along the C-H axis to form a sigma bond. The p-to-s ratio (denoted λ in general) is √3 in this example, and N2λ2 = 3/4 is the p character or the weight of the p component.
In general, for an atom with s and p orbitals forming hybrids h_i and h_j with included angle θ, the following holds: 1 + λ_i λ_j cos(θ) = 0. The p-to-s ratio for hybrid i is λ_i², and for hybrid j it is λ_j². The bond directed towards a more electronegative substituent tends to have higher p-character, as stated in Bent's rule. In the special case of equivalent hybrids on the same atom, again with included angle θ, the equation reduces to just 1 + λ² cos(θ) = 0. For example, BH3 has a trigonal planar geometry, three 120° bond angles, three equivalent hybrids about the boron atom, and thus 1 + λ² cos(θ) = 0 becomes 1 + λ² cos(120°) = 0, giving λ² = 2 for the p-to-s ratio. In other words, sp2 hybrids.
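As a quick numerical check (an illustrative sketch added here, not part of the article; the function name is our own), the relation 1 + λ² cos(θ) = 0 for equivalent hybrids can be inverted to read the p-to-s ratio off the inter-orbital angle; the tetrahedral, trigonal planar and linear angles give back sp3, sp2 and sp respectively.

-- illustrative sketch: p-to-s ratio lambda^2 of equivalent hybrids from the
-- inter-orbital angle theta, using 1 + lambda^2 * cos(theta) = 0
pToSRatio :: Double -> Double
pToSRatio thetaDeg = -1 / cos (thetaDeg * pi / 180)

main :: IO ()
main = mapM_ (print . pToSRatio) [109.4712, 120, 180]
-- prints roughly 3.0 (sp3), 2.0 (sp2) and 1.0 (sp)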
An analogous notation is used to describe sdx hybrids. For example, the permanganate ion (MnO4-) has sd3 hybridisation with orbitals that are 25% s and 75% d.
Types of Hybridization
sp3 hybrids
A schematic presentation of hybrid orbitals overlapping hydrogens' s orbitals translates into Methane's tetrahedral shape
sp2 hybrids
Ethene structure
For this molecule, carbon will sp2 hybridize, because one π (pi) bond is required for the double bond between the carbons, and only three σ bonds are formed per carbon atom. In sp2 hybridization the 2s orbital is mixed with only two of the three available 2p orbitals:
forming a total of 3 sp2 orbitals with one p-orbital remaining. In ethylene (ethene), the two carbon atoms form a σ bond by overlapping two sp2 orbitals, and each carbon atom forms two covalent bonds with hydrogen by s–sp2 overlap, all with 120° angles. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. The hydrogen–carbon bonds are all of equal strength and length, which agrees with experimental data.
sp hybrids
Hybridisation and molecule shape
Hybridisation helps to explain molecule shape:
Classification Main group Transition metal[3]
• Linear (180°)
• sp hybridisation
• E.g., CO2
• Bent (90°)
• sd hybridisation
• E.g., VO2+
• Tetrahedral (109.5°)
• sd3 hybridisation
• E.g., MnO4
AX5 -
AX6 -
Main group compounds with lone pairs
For main group compounds with lone electron pairs, the s orbital lone pair (analogous to s-p mixing in molecular orbital theory) can be hybridised to a certain extent with the bond pairs[5] to maximize energetic stability according to its Walsh diagram. This rationalisation is applied to explain deviations from ideal bond angles (i.e. from the angles expected if only p orbitals were used for bonding), most commonly in second and third period elements.
• Trigonal pyramidal (AX3E1)
• E.g., NH3
• Bent (AX2E1-2)
• E.g., SO2, H2O
• Monocoordinate (AX1E1-3)
• E.g., CO, SO, HF
Hybridisation of hypervalent molecules
Traditional description
Classification Main group Transition metal
AX2 -
• Linear (180°)
• sp hybridisation
• E.g., Ag(NH3)2+
AX3 -
AX4 -
• Octahedral (90°)
• d2sp3 hybridisation
• E.g., Mo(CO)6
AX9 -
Resonance description
Classification Main group Transition metal
AX2 - Linear (180°)
AX3 - Trigonal planar (120°)
AX4 - Square planar (90°)
• Fractional hybridisation (s and d orbitals)
• E.g., Fe(CO)5
AX6 Octahedral (90°) Octahedral (90°)
• Fractional hybridisation (s and three d orbitals)
• E.g., V(CN)74−
AX8 Square antiprismatic Square antiprismatic
• Fractional hybridisation (s and three p orbitals)
• E.g., IF8
• Fractional hybridisation (s and four d orbitals)
• E.g., Re(CN)83−
AX9 - Tricapped trigonal prismatic
• Fractional hybridisation (s and five d orbitals)
• E.g., ReH92−
Main group compounds with lone pairs
Regular bonding component (marked in red)
Bent Monocoordinate -
Clarifying misconceptions
VSEPR electron domains and hybrid orbitals are different
The simplistic picture of hybridisation taught in conjunction with VSEPR theory does not agree with high-level theoretical calculations[5] despite its widespread usage in many textbooks. For example, following the guidelines of VSEPR, the hybridization of the oxygen in water is described with two equivalent lone electron-pairs.[7] However, molecular orbital calculations give orbitals that reflect the C2v symmetry of the molecule.[8] One of the two lone pairs is in a pure p-type orbital, with its electron density perpendicular to the H–O–H framework.[9] The other lone pair is in an approximately sp0.8 orbital that is in the same plane as the H–O–H bonding.[9] Photoelectron spectra confirm the presence of two different energies for the nonbonded electrons.[10]
Exclusion of d-orbitals in main group compounds
Exclusion of p-orbitals in transition metal complexes
Similarly, p-orbitals have long been thought to be utilized by transition metal centers in bonding with ligands, hence the 18-electron description; however, recent molecular orbital calculations have found that such p-orbital participation in bonding is insignificant,[12] even though the contribution of the p-function to the molecular wavefunction is calculated to be somewhat larger than that of the d-function in main group compounds.
Hybridization theory vs. MO theory
Hybridization theory is an integral part of organic chemistry and in general discussed together with molecular orbital theory in advanced organic chemistry textbooks although for different reasons. One textbook notes that for drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons.[13] It also comments that predicting bond angles in methane with MO theory is not straightforward. Another textbook treats hybridization theory when explaining bonding in alkenes[14] and a third[15] uses MO theory to explain bonding in hydrogen but hybridisation theory for methane.
Notes
1. Although sometimes taught together with the valence shell electron-pair repulsion (VSEPR) theory, valence bond and hybridization are in fact not related to the VSEPR model. Gillespie, R.J. (2004), "Teaching molecular geometry with the VSEPR model", Journal of Chemical Education 81 (3): 298–304, doi:10.1021/ed081p298
References
2. McMurray, J. (1995). Chemistry Annotated Instructors Edition (4th ed.). Prentice Hall. p. 272. ISBN 0-13-140221-8
4. 4.0 4.1 Martin Kaupp Prof. Dr. (2001). ""Non-VSEPR" Structures and Bonding in d(0) Systems.". Angew Chem Int Ed Engl. 40 (1): 3534–3565. doi:10.1002/1521-3773(20011001)40:19<3534::AID-ANIE3534>3.0.CO;2-#.
5. 5.0 5.1 Weinhold, Frank. "Rabbit Ears Hybrids, VSEPR Sterics, and Other Orbital Absurdities". University of Wisconsin. http://isites.harvard.edu/fs/docs/icb.topic818673.files/Lecture%202%20-%20Weinhold%20et%20al%20-%20Shape%20of%20Oxygen%20Lone%20Pairs.pdf. Retrieved 2012-11-11.
8. Levine I.N. “Quantum chemistry” (4th edn, Prentice-Hall) p. 470–2
10. Levine p. 475
12. O’Donnell, Mark (2012). "Investigating P-Orbital Character In Transition Metal-to-Ligand Bonding". Brunswick, ME: Bowdoin College. http://www.bowdoin.edu/student-fellowships/pdf/summer-2012/ODonnell%20SRR.pdf. Retrieved 2012-09-16.
15. Organic Chemistry 3rd Ed. 2001 Paula Yurkanis Bruice ISBN 0-13-017858-6 |
63a57d8783355653 |
Fractional-calculus diffusion equation
This is a sequel to the work on the quantization of nonconservative systems using fractional calculus and on the quantization of a system with Brownian motion; it aims to include dissipation effects in the quantum-mechanical description of microscale systems.
The canonical quantization of a system represented classically by one-dimensional Fick's law and the diffusion equation is carried out according to the Dirac method. A suitable Lagrangian and Hamiltonian describing the diffusive system are constructed, and the Hamiltonian is transformed into a Schrödinger equation, which is solved. The developed mathematical method is then applied to the analysis of osmosis, a biological instance of the diffusion process.
The plot of the probability function represents clearly the dissipative and drift forces and hence the osmosis, which agrees totally with the macro-scale view, or the classical-version osmosis.
In this paper we aim to consider the dissipation effects that appear in the well-known diffusion process, quantum-mechanically, relying on the procedure for the quantization of nonconservative systems using fractional calculus [14], which was also applied to the related phenomenon of Brownian motion [5].
Most of the natural laws of physics, such as Maxwell's equations, Newton's laws of motion, and the Schrödinger equation, are stated, or can be stated, in terms of partial differential equations (PDEs); that is, these laws describe physical phenomena by relating space and time derivatives. The diffusion equation, or heat-flow equation, is one of the most important PDEs in the physical sciences. The basic process in the diffusion phenomenon is the flow of the fluid from a region of higher density to one of lower density [6].
The tendency of a statistical ensemble to achieve thermodynamic equilibrium with a uniform distribution of states for its constituent subsystems does not have to be monotonic in time. In general, equilibration takes place in stages and is characterized by several stochastization times with vastly different orders of magnitude. Thermodynamic equilibrium has no absolute meaning and depends on the time scale over which a given process is analyzed.
In a diffusion process or chemical reaction, Fick's law provides a linear relationship between the flux of molecules and the chemical potential difference. Likewise, a direct proportionality exists between the heat flux and the temperature difference in a thermally conducting slab, as expressed by Fourier's law. Diffusion of gases between air in the lungs and blood proceeds in the direction from high to low concentration, and the rate of diffusion is greatest when the difference in concentration is greatest. Diffusion obeys Fick's law, but the actual rate of exchange is greatly affected by hemoglobin in the blood.
Diffusion equation
Diffusion is macroscopically associated with a gradient of concentration. In contrast to the mass flow of liquids, diffusion involves random spontaneous movements of individual molecules. The diffusion flux is expressed in number of particles traversing a unit area per unit time and the concentration in number of particles per unit volume. This process can be quantified by a constant known as the diffusion coefficient, D, of the material, given in general by the Stokes-Einstein equation:

D = k_B T / f,
where k B is the Boltzmann constant, T is the absolute temperature in K, and f is a frictional coefficient. The diffusion coefficient, D, is defined as the net flow of particles per unit time across an imaginary plane of unit area lying at right angles to the concentration gradient, that gradient also having unit strength.
Hydrodynamic properties of macromolecules like diffusion, viscosity and sedimentation are affected by the frictional forces between molecules of the diffused material and those of the ambient material. Since this frictional force is in opposition to motion, we can include it in the equation of motion as [7]:

m (dv/dt) = F - f v,
where f is the frictional coefficient, (dv/dt) is the acceleration and m is the mass of the molecule. In the case of spherical particles, the translational frictional coefficient f is proportional to the fluid viscosity, η, and to the radius r of the particle. Thus the coefficient of friction for spherical particles, known as Stokes' law, is

f = 6 π η r.
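To make these two relations concrete (an illustrative calculation added here, not part of the paper; the numerical values are assumptions), combining Stokes' law with the Stokes-Einstein relation gives the diffusion coefficient of a small sphere in water:

-- illustrative sketch: D = kB*T/f with Stokes friction f = 6*pi*eta*r
boltzmann :: Double
boltzmann = 1.380649e-23                 -- Boltzmann constant, J/K

diffusionCoefficient :: Double -> Double -> Double -> Double
diffusionCoefficient temp eta r = boltzmann * temp / (6 * pi * eta * r)

main :: IO ()
main = print (diffusionCoefficient 298 1.0e-3 1.0e-9)
-- roughly 2.2e-10 m^2/s for a 1 nm sphere in water at room temperature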
The frictional coefficient f comes into effect when a molecule moves through a medium. The movement of the molecule could be either diffusion or sedimentation and the driving force, F, can be the concentration gradient, the force of gravity or the centrifugal force. According to Fick's law, the rate of diffusion across a boundary (dn/dt: the number of molecules which pass through a cross section A in unit time) for a single solute component diffusing in a system at constant temperature and pressure is given by [7]:

dn/dt = -D A (dC/dr),
where dC/dr is the concentration gradient. A concentration gradient implies that the concentration of the molecules (i.e. the solute) varies with distance r in one dimension.
1. The continuity equation. The equation of continuity assures us that flow is equal at any point, whatever the degree of tapering. If the cross-sections and corresponding velocities at two points are, respectively, A1, A2, v1, and v2, from the equation of continuity, we have:

A1 v1 = A2 v2.
This is a simplified version of the one-dimensional continuity equation whose differential form is
2. Fick's law. Fick's law states that the rate of diffusion per unit area in a direction perpendicular to the area is proportional to the gradient of concentration of solute in that direction. The concentration is the mass of solute per unit volume, and the gradient of concentration is the change in concentration per unit distance. If the concentration changes from C1 to a lower value C2 over a short length (d), then the mass (m) of the solute diffusing down the pipe in time (t) is

m/t = D A (C1 - C2) / d.
This is a simplified version of Fick's law whose differential form is

J = -D (∂C/∂x).
3. The diffusion equation. In cells without sources, the diffusion equation is written as:

∂C/∂t = D ∇²C.
The concentration gradient across the boundary is given as [6]:
where C0 is the total solute concentration difference across the boundary.
Quantization of diffusion process
Diffusion can be considered as movement of molecules from a higher concentration to a lower concentration. In reference to Eq. (2), the forces acting on the diffused particle are the driving and the friction forces:
In order to construct the Hamiltonian of the Diffused particle we should obtain the potential corresponding to this force. By using the formula [8] (see the appendix):
which enables us to have the potential of a nonconservative force, the potential corresponding to the velocity dependent term which represents the frictional force dissipation effect is
Where (see the appendix)
The driving force, i.e. the random force, may be represented as a sequence of impulses within the particle assembly, in the same way that we think of pressure as just the force per unit area due to a tremendous number of impacts of individual molecules. Hence, we can replace the potential that produces the force of one impulse or one collision, V'(x), by -δ(x'-x), and the entire potential, V(x), will be written as
Using the identities [9]
which leads to
Thus, the force F(x') is obtained directly
At the same time, could be written as
this random force may be expressed spatially, instead of through its time dependence, in the same way as
By making use of Eq.(19)
Thus, we obtain a definition of the random force, that agrees with our assumption and with the fact that F(x) = 0 of Eq.(2).
The Lagrangian of the diffused particle is
The generalized Euler-Lagrange equation for this problem, reads as [10, 11]:
That leads to
which is the classical equation of motion of the diffused particle, Eq.(2).
The canonical momenta are [10, 11]:
Making use of the Hamiltonian definition [1, 2]
the Hamiltonian of the diffused particles is
Here, p0 and p1/2 are the canonical conjugate momenta to q0 and q1/2 respectively.
Schrödinger equation reads [1, 2]
Making use of Eqs. (32 and 33), Schrödinger equation reads as:
Using the method of separation of variables, the relations in the appendix, and defining Ψ as
we find that the time-dependent part is
and has the solution
The other part is:
where q0 = x and q1/2 = y.
Now, let x = uy. Substituting into Eq. (36), we have
As an approximation, we assume constant values of u. This leads to
For y ≠ y', Eq. (39) is reduced to
which has the solution[12, 13]
H n being Hermite polynomials.
The y = y' case of Eq. (39) will be ignored, since the impulse potential effects will be considered in the part of Eq. (37), which will be written as
where we assumed y is a constant. Using the identity [9]
will reduce Eq.(42) to
which has the solution[9]
where H(u - u') is the Heaviside step function.
In terms of q s(i) , Ψ is expressed as
Application: osmosis
Osmosis is a physical phenomenon that has been extensively studied by scientists in various disciplines of science and engineering. Early researchers studied the mechanism of osmosis through natural materials, and from the 1960s, special attention has been given to osmosis through synthetic materials. Following the progress in membrane science in the last few decades, especially for reverse osmosis applications, the interests in engineered applications of osmosis has been spurred. Osmosis, or as it is currently referred to as forward osmosis, has new applications in separation processes for wastewater treatment, food processing, and seawater/brackish water desalination. Other unique areas of forward osmosis research include pressure-retarded osmosis for generation of electricity from saline and fresh water and implantable osmotic pumps for controlled drug release. This paper provides the state-of-the-art of the physical principles and applications of forward osmosis as well as their strengths and limitations.
Osmosis is usually defined as the transport of molecules in a fluid through a semipermeable membrane due to an imbalance in its concentration on either side of the membrane [6]. Osmosis may be by diffusion, but it may also be a bulk flow through pores in a membrane. If a plant cell is put into a concentrated solution of sugar, for example (Fig. 1), then in either case water moves from a region of high concentration to a region of low concentration; the pressure on the right is then greater than the pressure on the left by an amount hρg, where ρ is the density of the liquid on the right, and this difference is called the relative osmotic pressure. The general formula for the osmotic pressure P of a solution containing n moles of solute per volume V of solvent is [6]

P = nRT/V.
Figure 1
Osmosis [6].
The net osmotic pressure exerted on a semipermeable membrane separating the two compartments is thus the difference between the osmotic pressures of both compartments.
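As a numerical illustration (added here; the concentrations are hypothetical and not from the paper), the van 't Hoff relation P = nRT/V, i.e. P = cRT with c = n/V the molar concentration, gives the net osmotic pressure as the difference between the two compartments:

-- illustrative sketch: net osmotic pressure from P = c*R*T (c in mol/m^3)
gasConstant :: Double
gasConstant = 8.314                      -- J/(mol*K)

osmoticPressure :: Double -> Double -> Double
osmoticPressure c temp = c * gasConstant * temp      -- in Pa

netOsmoticPressure :: Double -> Double -> Double -> Double
netOsmoticPressure c1 c2 temp = abs (osmoticPressure c1 temp - osmoticPressure c2 temp)

main :: IO ()
main = print (netOsmoticPressure 100 10 310)
-- roughly 2.3e5 Pa for 0.1 M against 0.01 M at body temperature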
Making use of the proportionality of pressure and force, Eq. (11) becomes
where F_p is the force per unit area; thus Eq. (22) becomes
Thus, we obtain a definition of the new random force, but now in complete disagreement with our earlier assumption that F(x) = 0 in Eq. (2). This disagreement arises from the fact that the pressure exerts a net drift force F_p on the particles in the direction of osmosis (Fig. 1).
At the same time Eq. (47) looks like
Fractional calculus is very helpful in expressing dissipation, as well as in quantizing nonconservative systems associated with many important physical problems: either where the ordinary quantum-mechanical treatment leads to an incomplete description, such as the energy loss by charged particles when passing through matter; or where it leads to complex nonlinear equations, such as Brownian motion and diffusion.
By using fractional calculus in physical problems it is possible to create a whole mechanical description of nonconservative systems, including Lagrangian and Hamiltonian mechanics, canonical transformations, Hamilton-Jacobi theory, and quantum mechanics. In this paper, an important physical nonconservative system, which is the diffusion process is treated quantum-mechanically, for the first time using fractional-calculus.
A well-known biological process related to diffusion, osmosis, is studied. Fig. 2 shows the probability function |Ψ|2 of Eq. (51) for the "osmosisized" particle, including the drift and frictional forces; the osmosis process is manifested very clearly. The confinement of the particles to one region of space gradually leads to a situation in which they uniformly fill all the available space on the high-concentration side, and the Heaviside step function, H_p, is modified by the drift force to show non-step behavior, which agrees totally with the macro-scale view, i.e. the classical version of osmosis.
Figure 2
Probability function. Probability function, |Ψ|2 of the diffused particle including the random, and frictional forces; the osmosis process manifested very clearly.
Diffusion and the diffusion equation are central topics in both Physics and Mathematics, and their ranges of applicability span from astrophysical dynamics to the diffusion of particles governed by Schrödinger's equation.
The quantization of a system with a diffusion process has been carried out according to the theory proposed recently [1, 2]. A potential and a Hamiltonian, corresponding to the random force and the dissipative force, were constructed. The relevant Schrödinger equation has then been set up and solved. The classical equation of motion of the diffused particle could be obtained easily from the fractional Lagrangian. The random and frictional forces were plotted; the diffusion process is manifested very clearly. The next step could be to study problems such as correlation functions, the transport equation, the chemical potential, entropy, etc., on a quantum-mechanical basis.
An application of the developed mathematical method to the analysis of diffusion in a biological medium, osmosis, is carried out. Schrödinger's equation is solved. The plot of the probability function represents clearly the dissipative and drift forces and hence the osmosis, which shows the same macro-scale view of the osmosis.
Appendix: Fractional calculus
The fractional integral of a function f(t) is defined as [14, 15]

J^α f(t) = (1/Γ(α)) ∫_a^t (t - τ)^{α-1} f(τ) dτ,   α > 0, t > a,
where Jα represents the fractional integral operator of order α, and R+ represents the set of positive real numbers.
If we introduce the positive integer m such that m - 1 < α ≤ m, the fractional derivative of order α > 0 may be defined as

D^α f(t) = (d^m/dt^m) [ J^{m-α} f(t) ],
D^α being the fractional differential operator of order α. Equation (A.2) may be rewritten using Eq. (A.1) as follows:
Here, we formulate the problem in terms of the left fractional derivative (the left Riemann-Liouville fractional derivative), which is defined in Eqs. (A.1, A.2). Most of the left fractional operations also hold for the right ones. For the left operations f(t) must vanish for t < a, while f(t) = 0 for t > b for the right operations. Thus, the left operations are causal. Conversely, the right operations are anti-causal [16]. From the physical point of view, when we differentiate with respect to time, the right differentiation represents an operation performed on the future state of the process f(t) [17].
Fractional integral and differential operators have the following properties [14, 15]:
For I, the identity operator:
but the inverse application of the two operators is not necessarily true.
For n > 0, J^n and D^n are linear operators, i.e.,
For a constant c, J^n and D^n are homogeneous operators, i.e.,
For α, β > 0, J^α obeys the additive index law, but not necessarily D^α, i.e.,
Of special importance are the fractional integrals and fractional derivatives of the function (t - a)^β, which are given by

J^α (t - a)^β = [Γ(β + 1)/Γ(β + α + 1)] (t - a)^{β + α},   D^α (t - a)^β = [Γ(β + 1)/Γ(β - α + 1)] (t - a)^{β - α}.
For α = 1/2 this equation is called semi-derivative; for α = - 1/2 it is called semi-integral.
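As a concrete illustration (added here, not taken from the paper), applying these formulas to f(t) = t - a, i.e. β = 1 and α = 1/2, gives the semi-derivative and semi-integral

D^{1/2} (t - a) = [Γ(2)/Γ(3/2)] (t - a)^{1/2} = 2 √((t - a)/π),
J^{1/2} (t - a) = [Γ(2)/Γ(5/2)] (t - a)^{3/2} = (4 / (3√π)) (t - a)^{3/2}.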
Authors' information
Abdul-Wali Ajlouni (Jordan) is the chair of the Applied Physics Department at Tafila Technical University, Jordan. Prior to this post, he served as an Associate Dean of the Faculty of Science, and as an assistant Professor. Before joining TTU, Dr. Ajlouni was a researcher on radiation safety for the Ministry of Energy. He has received training on radiation protection and radiological emergency preparedness in Jordan, Turkey, the United States, Czechoslovakia, Egypt, Iran, and Syria. In addition, he has attended numerous conferences on nuclear energy, and Applied mathematics around the world. Dr. Ajlouni has published extensively on the two subjects, with titles such as: "Mathematical Model for Dispersion of Nuclear Pollutants in Jordan Atmosphere," and "Quantization of Brownian Motion." Dr. Ajlouni received a SCOPUS Scientific Research and the International Conference on Mathematics and Information Security (ICAMIS2009) awards in 2009. He holds a B.Sc. in Physics from Yarmouk University, Jordan, an M.S. in Nuclear Physics from Mustansyria University, Iraq, and a Ph.D. in Mathematical/Radiation Physics from the University of Jordan.
1. Ajlouni A-W: Ph.D. thesis: Quantization of Nonconservative Systems. 2004, Jordan University, Amman, Jordan.
2. Ajlouni A-W, Invited Talk: Fractional Calculus Tools in Physics. Applied Mathematics & Information Sciences. 2010. Presented at the International Conference on Mathematics and Information Security (ICAMIS2009), Nov. 13-15, 2009, Sohag University, Egypt.
3. Rabei E, Ajlouni A-W, Ghassib H: Quantization with Fractional Calculus. 9th WSEAS International Conference on Applied Mathematics (MATH '06), MATH, TELE-INFO and SIP '06, Istanbul, Turkey, May 27-29, 2006.
4. Rabei E, Ajlouni A-W, Ghassib H: Quantization of Nonconservative Systems Using Fractional Calculus. WSEAS Transactions on Mathematics. 2006, 5: 853-864.
5. Rabei E, Ajlouni A-W, Ghassib H: Quantization of Brownian Motion. Int J of Theoretical Physics. 2006, 45: 1619-1629.
6. Tuszynski J, Kurzynski M: Introduction to Molecular Biophysics. 2003, CRC Press, Florida.
7. Vasantha Pattabhi V, Gautham N: Biophysics. 2002, Kluwer Academic Publishers, New York, USA.
8. Rabei EM, Al-halholy T, Rousan A: Potentials of Arbitrary Forces with Fractional Derivatives. International Journal of Modern Physics A. 2004, 19: 3083-3089. 10.1142/S0217751X04019408.
9. Duffy D: Green's Functions with Applications. 2001, Chapman & Hall/CRC, New York, 1.
10. Riewe F: Nonconservative Lagrangian and Hamiltonian Mechanics. Physical Review E. 1996, 53: 1890-1898. 10.1103/PhysRevE.53.1890.
11. Riewe F: Mechanics with Fractional Derivatives. Physical Review E. 1997, 55: 3581-3592. 10.1103/PhysRevE.55.3581.
12. Griffiths DJ: Introduction to Quantum Mechanics. 1995, Prentice Hall, New Jersey.
13. Merzbacher E: Quantum Mechanics. 1970, Wiley, 2.
14. Oldham B, Spanier J: The Fractional Calculus. 1974, Academic Press, New York.
15. Carpintri A, Mainardi F: Fractals and Fractional Calculus in Continuum Mechanics. 1997, Springer, New York.
16. Dreisigmeyer DW, Young PM: Nonconservative Lagrangian mechanics: a generalized function approach. J Phys A: Math Gen. 2003, 36: 8297-8310. 10.1088/0305-4470/36/30/307.
17. Agrawal OP: Formulation of Euler-Lagrange equations for fractional variational problems. Journal of Mathematical Analysis and Applications. 2002, 272: 368-379. 10.1016/S0022-247X(02)00180-4.
Author information
Correspondence to Abdul-Wali MS Ajlouni.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AWA: 1) has made extensive contributions to conception and design of the manuscript, and analysis and interpretation of data; 2) has been involved in drafting the manuscript; and 3) has set the final approval of the version to be published. He has participated sufficiently in the work to take public responsibility for appropriate portions of the content.
HR: 1) has been involved in revising the manuscript critically in its mathematical content. He has participated sufficiently in the work to take public responsibility for appropriate portions of the content.
All authors read and approved the final manuscript.
About this article
Cite this article
Ajlouni, A.M., Al-Rabai'ah, H.A. Fractional-calculus diffusion equation. Nonlinear Biomed Phys 4, 3 (2010).
• Diffusion Equation
• Fractional Derivative
• Frictional Force
• Fractional Calculus
• Frictional Coefficient |
cdc3ad3b5f4e8137 | Inverse Spectral Theory for a Singular Sturm Liouville Operator with Coulomb Potential
DOI: 10.4236/apm.2016.61005
We consider the inverse spectral problem for a singular Sturm-Liouville operator with Coulomb potential. In this paper, we give an asymptotic formula and some properties for this problem by using methods of Trubowitz and Pöschel.
Share and Cite:
Panakhov, E. and Ulusoy, I. (2016) Inverse Spectral Theory for a Singular Sturm Liouville Operator with Coulomb Potential. Advances in Pure Mathematics, 6, 41-49. doi: 10.4236/apm.2016.61005.
Received 21 September 2015; accepted 18 January 2016; published 21 January 2016
1. Introduction
The Sturm-Liouville equation is a second order linear ordinary differential equation of the form
for suitable coefficient functions and a spectral parameter. It was first introduced in an 1837 publication [1] by the eminent French mathematicians Joseph Liouville and Jacques Charles François Sturm. The Sturm-Liouville Equation (1.1) can easily be reduced to the form
If we assume that p(x) has a continuous first derivative, and p(x), r(x) have a continuous second derivative, then by means of the substitutions
where c is given by
Equation (1.1) assumes the form (1.2) replaced by; where
The transformation of the general second-order equation to canonical form and the asymptotic formulas for the eigenvalues and eigenfunctions were given by Liouville. A deep study of the distribution of the zeros of eigenfunctions was done by Sturm. The formula for the distribution of the eigenvalues of the one-dimensional Sturm operator, defined on the whole real axis with a potential increasing at infinity, was first given by Titchmarsh in 1946 [2] [3]. Titchmarsh also showed the distribution formula for the Schrödinger operator. In later years, Levitan improved Titchmarsh's method and found important asymptotic formulas for the eigenvalues of different differential operators [4] [5]. Sturm-Liouville problems with a singularity at zero have various versions. The best known case is the one studied by Amirov [6] [7], in which the potential has a Coulomb-type singularity of the form A/x
at the origin. In these works, properties of the spectral characteristics were studied for Sturm-Liouville operators with Coulomb potential which have discontinuity conditions inside a finite interval. Panakhov and Sat estimated nodal points and nodal lengths for the Sturm-Liouville operators with Coulomb potential [8]-[10]. Bas and Metin defined a fractional singular Sturm-Liouville operator having a Coulomb potential of type A/x [11].
Let’s give some fundamental physical properties of the Sturm-Liouville operator with Coulomb potential. Learning about the motion of electrons moving under the Coulomb potential is of significance in quantum theory. Solving these types of problems provides us with finding energy levels of not only hydrogen atom but
also single valence electron atoms such as sodium. The Coulomb potential is given by U = -e²/r, where r is the distance from the nucleus and e is the electronic charge. According to this, we use the time-dependent Schrödinger equation

iħ ∂Ψ/∂t = -(ħ²/2m) ∇²Ψ + U Ψ,   ħ = h/2π,

where Ψ is the wave function, h is Planck's constant and m is the mass of the electron.
In this equation, if the Fourier transform is applied
it will convert to energy equation dependent on the situation as follows:
Therefore, energy equation in the field with Coulomb potential becomes
If this hydrogen atom is substituted to other potential area, then energy equation becomes
If we make the necessary transformation, then we can get a Sturm-Liouville equation with Coulomb potential
where λ is a parameter which corresponds to the energy [12].
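A standard way to arrive at such an operator (a sketch added here for illustration; it is not reproduced from the paper) is to separate variables in the hydrogen-like Schrödinger equation and substitute R(r) = u(r)/r for the radial factor, which gives

-u''(r) + [ l(l+1)/r² - 2me²/(ħ²r) ] u(r) = (2mE/ħ²) u(r),

a Sturm-Liouville equation whose potential carries exactly the Coulomb-type A/x singularity at the origin, with the spectral parameter proportional to the energy E.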
Our aim here is to find asymptotic formulas for the singular Sturm-Liouville operator with Coulomb potential with domain
Also, we give the normalizing eigenfunctions and spectral functions.
2. Basic Properties
We consider the singular Sturm-Liouville problem
where the function. Let us denote by the solution of (2.1) satisfying the initial condition
and by the solution of same equation, satisfying the initial condition
Lemma 1. The solution of problem (2.1) and (2.2) has the following form:
Proof. Since satisfies Equation (2.1), we have
Integrating the first integral on the right side by parts twice and taking the conditions (2.2) into account, we find that
which is (2.4).
Lemma 2. The solution of problem (2.1) and (2.3) has the following form:
Proof. The proof is the same as that of Lemma 1.
Now we give some estimates of and which will be used later. For each fixed x in [0, 1] the map is an entire function on which is real-valued on [13] . Using the estimate
we get
Since and
we have
From (2.6) the inequality is easily checked
where c is uniform with respect to q on bounded sets in.
Lemma 3 (Counting Lemma). [13] Let and be an integer. Then has exactly N roots, counted with multiplicities, in the open half plane
and for each, exactly one simple root in the egg shaped region
There are no other roots.
From this Lemma there exists an integer N such that for every there is only one eigenvalue in Thus for every
can be chosen independent of q on bounded sets of. Following theorem [13] shows that the eigenvalues are the zeroes of the map and these zeroes are simple.
Theorem 1. If is Dirichlet eigenvalue of q in, then
In particular,. Thus, all roots of are simple.
Proof. The proof is similar as that of ([13] , Pöschel and Trubowitz).
3. Asymptotic Formula
We need the following lemma for proving the main result.
Lemma 4. For every f in,
. (3.2)
Proof. Firstly, we shall prove the relation (3.1)
By the Cauchy-Schwarz inequality, we get
Since f is in, the last two integrals are equal to
So (3.3) is equivalent to
Finally, we shall prove the relation (3.2)
This proves the lemma.
The main result of this article is the following theorem:
Theorem 2. For,
Proof of the Main Theorem. Since it must be. Because is a nontrivial solution of Equation (2.1) satisfying Dirichlet boundary conditions, we have
From (2.7) someone gets the inequality
From (3.5) integral in the equation of (3.4) takes the form
By using difference formulas for sine we have
From Lemma 4 we get
Thus, by using this inequality (3.4) can be written in the form
From (2.8) we conclude that
Since and, (3.7) is equivalent to
So we get
. (3.8)
From (2.8) we have
In this case, the theorem is proved.
From this theorem, the map
from q to its sequences of Dirichlet eigenvalues sends into S. Later, we need this map to characterize spectra which is equivalent to determining the image of.
4. Inverse Spectral Theory
To each eigenvalue we associate a unique eigenfunction normalized by
Let’s define the normalizing eigenfunction:
Lemma 5. For,
This estimate holds uniformly on bounded subsets of.
Proof. Let and. By the basic estimate for,
By using this estimate we have
So we get
Thus we conclude that
Dividing by we get
Also, we need to have asymptotic estimates of the squares of the eigenfunctions and products
Lemma 6. For,
This estimate holds uniformly on bounded subsets of.
Proof. We know that
By the basic estimate for, we have
The map is real analytic on. Now we give asymptotic behavior for.
Theorem 3. Each is a compact, real analytic function on with
Its gradient is
The error terms are uniform on bounded subsets of.
Proof. From [14] we have
So we calculate the integral
Finally, since, we get
By the Cauchy-Schwarz inequality, we prove the theorem.
Formula (4.3) shows that belongs to. By Theorem 3, the map
from q to its sequences of -values maps into the. So we obtain a map
from into the.
Theorem 4. [13] is one-to-one on.
Let be the Frechet derivative of the map at q.
Theorem 5. [14] is an isomorphism from onto.
Conflicts of Interest
The authors declare no conflicts of interest.
[1] Sturm, C. and Liouville, J. (1837) Extrait d'un mémoire sur le développement des fonctions en séries dont les différents termes sont assujettis à satisfaire à une même équation différentielle linéaire, contenant un paramètre variable. Journal de Mathématiques Pures et Appliquées, 2, 220-233.
[2] Birkhoff, G.D. (1908) Boundary Value and Expansion Problems of Ordinary Linear Differential Equations. Transactions of the American Mathematical Society, 9, 219-231.
[3] Titchmarsh, E.C. (1946) Eigenfunction Expansions Associated with Second-Order Differential Equations. Vol. 1, Clarendon Press, Oxford.
[4] Titchmarsh, E.C. (1958) Eigenfunction Expansions Associated with Second-Order Differential Equations. Vol. 2, Clarendon Press, Oxford.
[5] Levitan, B.M. (1978) On the Determination of the Sturm-Liouville Operator from One and Two Spectra. Mathematics of the USSR Izvestija, 12, 179-193.
[6] Amirov, R.Kh. (1985) Inverse Problem for the Sturm-Liouville Equation with Coulomb Singularity Its Spectra. Kand. Dissertasiya, Baku.
[7] Topsakal, N. and Amirov, R. (2010) Inverse Problem for Sturm-Liouville Operators with Coulomb Potential Which Have Discontinuity Conditions inside an Interval. Mathematical Physics, Analysis and Geometry, 13, 29-46.
[8] Sat, M. and Panakhov, E.S. (2012) Inverse Nodal Problem for Sturm-Liouville Operators with Coulomb Potential. International Journal of Pure and Applied Mathematics, 80, 173-180.
[9] Sat, M. and Panakhov, E.S. (2013) Reconstruction of Potential Function for Sturm-Liouville Operator with Coulomb Potential. Boundary Value Problems, 2013, Article 49.
[10] Sat, M. (2014) Half Inverse Problem for the Sturm-Liouville Operator with Coulomb Potential. Applied Mathematics and Information Sciences, 8, 501-504.
[11] Bas, E. and Metin, F. (2013) Fractional Singular Sturm-Liouville Operator for Coulomb Potential. Advances in Difference Equations, Article ID: 300.
[12] Blohincev, D.I. (1949) Foundations of Quantum Mechanics. GITTL, Moscow.
[13] Poeschel, J. and Trubowitz, E. (1987) Inverse Spectral Theory. Academic Press, San Diego.
[14] Guillot, J.-C. and Ralston, J.V. (1988) Inverse Spectral Theory for a Singular Sturm-Liouville Operator on [0,1]. Journal of Differential Equations, 76, 353-373.
|
61c9407e17b0e097 | Monads, Vector Spaces and Quantum Mechanics pt. II
I had originally intended to write some code to simulate quantum computers and implement some quantum algorithms. I’ll probably eventually do that but today I just want to look at quantum mechanics in its own right as a kind of generalisation of probability. This is probably going to be the most incomprehensible post I’ve written in this blog. On the other hand, even though I eventually talk about the philosophy of quantum mechanics, there’s some Haskell code to play with at every stage, and the code gives the same results as appear in physics papers, so maybe that will help give a handle on what I’m saying.
First get some Haskell fluff out of the way:
> import Prelude hiding (repeat)
> import Data.Map (toList,fromListWith)
> import Complex
> infixl 7 .*
Now define certain types of vector spaces. The idea is that a W b a is a vector in a space whose basis elements are labelled by objects of type a and where the coefficients are of type b.

> -- (this declaration was missing from the extracted text; it is implied by the
> -- uses of the W constructor and of runW below)
> data W b a = W { runW :: [(a, b)] } deriving (Eq, Show)
This is very similar to standard probability monads except that I’ve allowed the probabilities to be types other than Float. Now we need a couple of ways to operate on these vectors.
mapW allows the application of a function transforming the probabilities…

> mapW :: (b -> c) -> W b a -> W c a
> mapW f (W l) = W $ map (\(x, p) -> (x, f p)) l
and fmap applies a function to the basis element labels.
> instance Functor (W b) where
>   fmap f (W l) = W $ map (\(x, p) -> (f x, p)) l
We want our vectors to support addition, multiplication, and actually form a monad. The definition of >>= is similar to that for other probability monads. Note how vector addition just concatenates our lists of probabilities. The problem with this is that if we have a vector like a+2a we’d like it to be reduced to 3a but in order to do that we need to be able to spot that the two terms a and 2a both contain multiples of the same vector, and to do that we need the fact that the labels are instances of Eq. Unfortunately we can’t do this conveniently in Haskell because of the lack of restricted datatypes and so to collect similar terms we need to use a separate collect function:
> instance Num b => Monad (W b) where
>   return x = W [(x, 1)]
>   l >>= f = W $ concat [[(y, p * q) | (y, q) <- runW (f x)] | (x, p) <- runW l]
> a .* b = mapW (a*) b
> instance Num b => Num (W b a) where
>   W a + W b = W (a ++ b)
>   a - b = a + (-1) .* b
>   _ * _ = error "Num is annoying"
>   abs _ = error "Num is annoying"
>   signum _ = error "Num is annoying"
>   fromInteger _ = error "Num is annoying"
> collect :: (Ord a, Num b) => W b a -> W b a
> collect = W . toList . fromListWith (+) . runW
Now we can specialise to the two monads that interest us:
> type P a = W Float a
> type Q a = W (Complex Float) a
P is the (hopefully familiar if you’ve read Eric’s recent posts) probability monad. But Q allows complex probabilities. This is because quantum mechanics is a lot like probability theory with complex numbers and many of the rules of probability theory carry over.
Suppose we have a (non-quantum macroscopic) coin that we toss. It’s state might be described by:
> data Coin = Heads | Tails deriving (Eq,Show,Ord)
> coin1 = 0.5 .* return Heads + 0.5 .* return Tails :: P Coin
Suppose that if Albert sees a coin that is heads up he has a 50% chance of turning it over and if he sees a coin that is tails up he has a 25% chance of turning it over. We can describe Albert like this:
> albert Heads = 0.5 .* return Heads + 0.5 .* return Tails
> albert Tails = 0.25 .* return Heads + 0.75 .* return Tails
We can now ask what happens if Albert sees a coin originally turned up heads n times in a row:
> repeat 0 f = id
> repeat n f = repeat (n-1) f . f
> (->-) :: a -> (a -> b) -> b
> g ->- f = f g
> g ->< f = g >>= f
> albert1 n = return Heads ->- repeat n (->< albert) ->- collect
Let me explain those new operators. ->- is just function application written from left to right. The > in the middle is intended to suggest the direction of data flow. ->< is just >>= but I’ve written it this way with the final < intended to suggest the way a function a -> M b ‘fans out’. Anyway, apropos of nothing else, notice how Albert approaches a steady state as n gets larger.
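For the record (a small worked check added here, not in the original post), the steady state can be computed by hand from albert's transition probabilities: writing pH and pT for the limiting probabilities of Heads and Tails,

pH = 0.5 pH + 0.25 pT,   pH + pT = 1,

which gives pH = 1/3 and pT = 2/3; this is the distribution that albert1 n converges to as n grows.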
Quantum mechanics works similarly but with the following twist. When we come to observe the state of a quantum system it undergoes the following radical change:

> -- (this definition was lost in extraction and is reconstructed here: on
> -- observation each amplitude w collapses to the ordinary probability |w|^2)
> observe :: Ord a => Q a -> P a
> observe = W . map (\(x, w) -> (x, magnitude w ^ 2)) . runW . collect

Ie. the quantum state becomes an ordinary probabilistic one. This is called wavefunction collapse. Before collapse, the complex weights are called ‘amplitudes’ rather than probabilities. The business of physicists is largely about determining what these amplitudes are. For example, the well known Schrödinger equation is a lot like a kind of probabilistic diffusion, like a random walk, except with complex amplitudes instead of ordinary probabilities. (That’s why so many physicists have been hired into finance firms in recent years – stocks follow a random walk which has formal similarities to quantum physics.)
The rules of quantum mechanics are a bit like those of probability theory. In probability theory the sum of the probabilites must add to one. In addition, any process (like albert) must act in such a way that if the input sum of probabilities is one, then so is the output. This means that probabilistic process are stochastic. In quantum mechanics the sum of the squares of the magnitudes of the amplitudes must be one. Such a state is called ‘normalised’. All processes must be such that normalised inputs go to normalised outputs. Such processes are called unitary ones.
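As a quick sanity check (this helper is my addition, not from the original post), we can verify normalisation directly: the squared magnitudes of the amplitudes of a Q value should sum to 1, and normQ applied to any of the quantum states constructed below should return, up to rounding, 1.

> -- added helper: sum of |amplitude|^2, which should be 1 for a normalised state
> normQ :: Q a -> Float
> normQ = sum . map (\(_, w) -> magnitude w ^ 2) . runW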
There’s a curious subtlety present in quantum mechanics. In classical probability theory you need to have the sum of the probabilities of your different events to sum to one. But it’s no good having events like “die turns up 1”, “die turns up 2”, “die turns up even” at the same time. “die turns up even” includes “die turns up 2”. So you always need to work with a mutually exclusive set of events. In quantum mechanics it can be pretty tricky to figure out what the mutually exclusive events are. For example, when considering the spin of an electron, there are no more mutually exclusive events beyond “spin up” and “spin down”. You might think “what about spin left?”. That’s just a mixture of spin up and spin down – and that fact is highly non-trivial and non-obvious. But I don’t want to discuss that now and it won’t affect the kinds of things I’m considering below.
So here’s an example of a quantum process a bit like albert above. For any angle \theta, rotate turns a boolean state into a mixture of boolean states. For \theta=0 it just leaves the state unchanged and for \theta=\pi it inverts the state so it corresponds to the function Not. But for \theta=\pi/2 it does something really neat: it is a kind of square root of Not. Let’s see it in action:
> rotate :: Float -> Bool -> Q Bool
> snot = rotate (pi/2)
> repeatM n f = repeat n (>>= f)
> snot1 n = return True ->- repeatM n snot ->- observe
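The body of rotate seems to have fallen victim to the HTML-eating problem mentioned in the PS below, so here is a guessed reconstruction rather than the original definition. It assumes the Q machinery defined earlier in the post (amplitudes of type Complex Float and the .* scaling operator) and is chosen so that rotate 0 is the identity, rotate pi acts like Not, and successive rotations compose additively:

> -- guessed body (not the original); assumes (:+) from Data.Complex and the .* operator defined earlier for Q
> rotate theta True  = (cos (theta/2) :+ 0) .* return True + (sin (theta/2) :+ 0) .* return False
> rotate theta False = (negate (sin (theta/2)) :+ 0) .* return True + (cos (theta/2) :+ 0) .* return False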
We can test it by running snot1 2 to see that two applications behave like Not, flipping the input with certainty, while snot1 1 gives you a 50/50 chance of finding True or False. Nothing like this is possible with classical probability theory and it can only happen because complex numbers can ‘cancel each other out’. This is what is known as ‘destructive interference’. In classical probability theory you only get constructive interference because probabilities are always positive real numbers. (Note that repeatM is just a monadic version of repeat – we could have used it to simplify albert1 above, so there’s nothing specifically quantum about it.)
Now for two more combinators:
> (=>=) :: P a -> (a -> b) -> P b
> g =>= f = fmap f g
> (=><) :: P (Q a) -> (a -> Q b) -> P (Q b)
> g =>< f = fmap (>>= f) g
The first just uses fmap to apply the function. I’m using the = sign as a convention that the function is to be applied not at the top level but one level down within the datastructure. The second is simply a monadic version of the first. The reason we need the latter is that we’re going to have systems that have both kinds of uncertainty – classical probabilistic uncertainty as well as quantum uncertainty. We’ll also want to use the fact that P is a monad to convert doubly uncertain events to singly uncertain ones. That’s what join does:
> join :: P (P a) -> P a
> join = (>>= id)
OK, that’s enough ground work. Let’s investigate a physical process that can be studied in the lab: the Quantum Zeno effect, otherwise known as the fact that a watched pot never boils. First an example related to snot1:
> zeno1 n = return True ->- repeatM n (rotate (pi/fromInteger n)) ->- collect ->- observe
The idea is that we ‘rotate’ our system through a total angle \pi, but we do so in n stages of \pi/n each. The fact that we do it in n stages makes no difference: we get the same result as doing the whole rotation in one go. The slight complication is this: suppose we start with a probabilistic state of type P a. If we let it evolve quantum mechanically it’ll turn into something of type P (Q a). On observation we get something of type P (P a). We need join to get a single probability distribution of type P a. The join is nothing mysterious, it just combines the outcomes of two successive probabilistic processes into one using the usual laws of probability.
But here’s a variation on that theme. Now we carry out n stages, but after each one we observe the system causing wavefunction collapse:
> zeno2 n = return True ->- repeat n (
> \x -> x =>= return =>< rotate (pi/fromInteger n) =>= observe ->- join
> ) ->- collect
Notice what happens. In the former case we flipped the polarity of the input. In this case it remains closer to the original state. The higher we make n, the closer it stays to its original state. (Not too high, start with small n. The code suffers from combinatorial explosion.) Here’s a paper describing the actual experiment. Who needs all that messing about with sensitive equipment when you have a computer? 🙂
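(A back-of-envelope way to see why, not from the original post and assuming the rotation convention guessed above: each of the n stages leaves the state unchanged with amplitude cos ε, where ε ≈ \pi/(2n); an intermediate observation turns that amplitude into a survival probability cos²ε, and after n observed stages the probability of still finding the original state is (cos²ε)ⁿ ≈ 1 − nε² → 1 as n → ∞; whereas without the observations the n small rotations just add up to one full flip.)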
A state of the form P (Q a) is called a mixed state. Mixed states can get a bit hairy to deal with as you have this double level of uncertainty. It can get even trickier because you can sometimes observe just part of a quantum system rather than the whole system, as observe does. This inevitably leads to mixed states. von Neumann came up with the notion of a density matrix to deal with this, although a P (Q a) works fine too. I also have a hunch there is an elegant way to handle them through an object of type P (Q (Q a)) that will eliminate the whole magnitude-squared thing. However, I want to look at the quantum Zeno effect in a different way that ultimately allows you to deal with mixed states differently too. Unfortunately I don’t have time to explain this elimination today, but we can look at the general approach.
In this version I’m going to consider a quantum system that consists of the logical state in the Zeno examples, but also includes the state of the observer. Now standard dogma says you can’t form quantum states out of observers. In other words, you can’t form Q Observer where Observer is the state of the observer. It says you can only form P Observer. Whatever. I’m going to represent an experimenter using a list representing the sequence of measurements they have made. Represent the complete system by a pair of type ([Bool],Bool). The first element of the pair is the experimenter’s memory and the second element is the state of the boolean variable being studied. When our experimenter makes a measurement of the boolean variable, its value is simply prepended to his or her memory:
> zeno3 n = return ([],True) ->- repeatM n (
> \(m,s) -> do
> s' <- rotate (pi/fromInteger n) s
> return (s:m,s')
> ) ->- observe =>= snd ->- collect
Note how we now delay the final observation until the end when we observe both the experimenter and the poor boolean being experimented on. We want to know the probabilities for the final boolean state so we apply snd so as to discard the state of the observer’s memory. Note how we get the same result as zeno2. (Note no mixed state, just an expanded quantum state that collapses to a classical probabilistic state.)
There’s an interesting philosophical implication in this. If we model the environment (in this case the experimenter is part of that environment) as part of a quantum system, we don’t need all the intermediate wavefunction collapses, just the final one at the end. So are the intermediate collapses real or not? The interaction with the environment is known as decoherence and some hope that wavefunction collapse can be explained away in terms of it.
Anyway, time for you to go and do something down-to-earth like gardening. Me, I’m washing the kitchen floor…
I must mention an important cheat I made above. When I model the experimenter’s memory as a list I’m copying the state of the measured experiment into a list. But you can’t simply copy data into a quantum register. One way to see this is that unitary processes are always invertible. Copying data into a register destroys the value that was there before and hence is not invertible. So instead, imagine that we really have an array that starts out zeroed and that each time something is added to the list, the new result is xored into the next slot in the array. The list is just non-unitary-looking but convenient shorthand for this unitary process.
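(To spell out the invertibility point with a toy function of my own, not code from the post: xor is its own inverse, so xoring a measurement result into a slot that started out False can always be undone, unlike simply overwriting the slot.)

> -- xor on Bool: xorInto b (xorInto b slot) == slot, so recording by xor is reversible
> xorInto :: Bool -> Bool -> Bool
> xorInto b slot = b /= slot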
PS I’m not convinced this LaTeX stuff is working. If something doesn’t make sense, look around for equations that might have floated away 🙂 PPS WordPress teething troubles! I see that despite LaTeX support, WordPress is even more annoying than Blogger in its handling of less than and greater than. It actually discards your HTML source sometimes. I’ve fixed one bit of discarded text. Additionally it also seems to discard backslashes for no good reason. If you see something of the form a -> b it probably ought to be “backslash” a -> b. Apart from that, the code seems to work when copied and pasted from the blog into ghc.
1. David House
Posted March 6, 2007 at 3:04 pm | Permalink
More annoying than the missing backslashes are a few missing newlines. E.g. where you define (->-), the type signature appears on the same line as the previous function. Same with (=>
2. David House
Posted March 6, 2007 at 3:09 pm | Permalink
Oh, and those notes about LaTeX and so on might be more useful at the top of the post! 🙂
3. sigfpe
Posted March 6, 2007 at 4:00 pm | Permalink
The code at the old location actually works. That took all of 3 seconds to get working whereas I could never get the code posted here to work.
The Book of Universes by John D. Barrow (2011)
This book is twice as long and half as good as Barrow’s earlier primer, The Origin of the Universe.
In that short book Barrow focused on the key ideas of modern cosmology – introducing them to us in ascending order of complexity, and as simply as possible. He managed to make mind-boggling ideas and demanding physics very accessible.
This book – although it presumably has the merit of being more up to date (published in 2011 as against 1994) – is an expansion of the earlier one, an attempt to be much more comprehensive, but which, in the process, tends to make the whole subject more confusing.
The basic premise of both books is that, since Einstein’s theory of relativity was developed in the 1910s, cosmologists and astronomers and astrophysicists have:
1. shown that the mathematical formulae in which Einstein’s theories are described need not be restricted to the universe as it has traditionally been conceived; in fact they can apply just as effectively to a wide variety of theoretical universes – and the professionals have, for the past hundred years, developed a bewildering array of possible universes to test Einstein’s insights to the limit
2. made a series of discoveries about our actual universe, the most important of which is that a) it is expanding b) it probably originated in a big bang about 14 billion years ago, and c) in the first few milliseconds after the bang it probably underwent a period of super-accelerated expansion known as the ‘inflation’ which may, or may not, have introduced all kinds of irregularities into ‘our’ universe, and may even have created a multitude of other universes, of which ours is just one
If you combine a hundred years of theorising with a hundred years of observations, you come up with thousands of theories and models.
In The Origin of the Universe Barrow stuck to the core story, explaining just as much of each theory as is necessary to help the reader – if not understand – then at least grasp their significance. I can write the paragraphs above because of the clarity with which The Origin of the Universe explained it.
In The Book of Universes, on the other hand, Barrow’s aim is much more comprehensive and digressive. He is setting out to list and describe every single model and theory of the universe which has been created in the past century.
He introduces the description of each model with a thumbnail sketch of its inventor. This ought to help, but it doesn’t because the inventors generally turn out to be polymaths who also made major contributions to all kinds of other areas of science. Being told a list of Paul Dirac’s other major contributions to 20th century science is not a good way of preparing your mind to then try and understand his one intervention on universe-modelling (which turned, in any case, out to be impractical and led nowhere).
Another drawback of the ‘comprehensive’ approach is that a lot of these models have been rejected or barely saw the light of day before being disproved or – more complicatedly – were initially disproved but contained aspects or insights which turned out to be useful forty years later, and were subsequently recycled into revised models. It gets a bit challenging to try and hold all this in your mind.
In The Origin of the Universe Barrow sticks to what you could call the canonical line of models, each of which represented the central line of speculation, even if some ended up being disproved (like Hoyle and Gold and Bondi’s model of the steady state universe). Given that all of this material is pretty mind-bending, and some of it can only be described in advanced mathematical formulae, less is definitely more. I found The Book of Universes simply had too many universes, explained too quickly, and lost amid a lot of biographical bumpf summarising people’s careers or who knew who or contributed to who’s theory. Too much information.
One last drawback of the comprehensive approach is that quite important points – which are given space to breathe and sink in in The Origin of the Universe – are lost in the flood of facts in The Book of Universes.
I’m particularly thinking of Einstein’s notion of the cosmological constant which was not strictly necessary to his formulations of relativity, but which Einstein invented and put into them solely in order to counteract the force of gravity and ensure his equations reflected the commonly held view that the universe was in a permanent steady state.
This was a mistake and Einstein is often quoted as admitting it was the biggest mistake of his career. In 1965 scientists discovered the cosmic background radiation, which confirmed that the universe began in an inconceivably intense explosion and has been expanding ever since, the outward-propelling force of that bang counteracting the contracting force of the gravity of all the matter in the universe without any need for a hypothetical cosmological constant.
I understand this (if I do) because in The Origin of the Universe it is given prominence and carefully explained. By contrast, in The Book of Universes it was almost lost in the flood of information and it was only because I’d read the earlier book that I grasped its importance.
The Book of Universes
Barrow gives a brisk recap of cosmology from the Sumerians and Egyptians, through the ancient Greeks’ establishment of the system named after Ptolemy in which the earth is the centre of the solar system, on through the revisions of Copernicus and Galileo which placed the sun firmly at the centre of the solar system, on to the three laws of Isaac Newton which showed how the forces which govern the solar system (and more distant bodies) operate.
There is then a passage on the models of the universe generated by the growing understanding of heat and energy acquired by Victorian physicists, which led to one of the most powerful models of the universe, the ‘heat death’ model popularised by Lord Kelvin in the 1850s, in which, in the far future, the universe evolves to a state of complete homogeneity, where no region is hotter than any other and therefore there is no thermodynamic activity, no life, just a low buzzing noise everywhere.
But all this happens in the first 50 pages and is just preliminary throat-clearing before Barrow gets to the weird and wonderful worlds envisioned by modern cosmology, i.e. from Einstein onwards.
In some of these models the universe expands indefinitely, in others it will reach a peak expansion before contracting back towards a Big Crunch. Some models envision a static universe, in others it rotates like a top, while other models are totally chaotic without any rules or order.
Some universes are smooth and regular, others characterised by clumps and lumps. Some are shaken by cosmic tides, some oscillate. Some allow time travel into the past, while others threaten to allow an infinite number of things to happen in a finite period. Some end with another big bang, some don’t end at all. And in only a few of them do the conditions arise for intelligent life to evolve.
The Book of Universes then goes on, in 12 chapters, to discuss – by my count – getting on for a hundred types or models of hypothetical universes, as conceived and worked out by mathematicians, physicists, astrophysicists and cosmologists from Einstein’s time right up to the date of publication, 2011.
A list of names
Barrow namechecks and briefly explains the models of the universe developed by the following (I am undertaking this exercise partly to remind myself of everyone mentioned, partly to indicate to you the overwhelming number of names and ideas the reader is bombarded with):
• Aristotle
• Ptolemy
• Copernicus
• Giovanni Riccioli
• Tycho Brahe
• Isaac Newton
• Thomas Wright (1711-86)
• Immanuel Kant (1724-1804)
• Pierre Laplace (1749-1827) devised what became the standard Victorian model of the universe
• Alfred Russel Wallace (1823-1913) discussed the physical conditions of a universe necessary for life to evolve in it
• Lord Kelvin (1824-1907) material falls into the central region of the universe and coalesce with other stars to maintain power output over immense periods
• Rudolf Clausius (1822-88) coined the word ‘entropy’ in 1865 to describe the inevitable progress from ordered to disordered states
• William Jevons (1835-82) believed the second law of thermodynamics implies that universe must have had a beginning
• Pierre Duhem (1861-1916) Catholic physicist accepted the notion of entropy but denied that it implied the universe ever had a beginning
• Samuel Tolver Preston (1844-1917) English engineer and physicist, suggested the universe is so vast that different ‘patches’ might experience different rates of entropy
• Ludwig Boltzmann and Ernst Zermelo suggested the universe is infinite and is already in a state of thermal equilibrium, but just with random fluctuations away from uniformity, and our galaxy is one of those fluctuations
• Albert Einstein (1879-1955) his discoveries were based on insights, not maths: thus he saw the problem with Newtonian physics is that it privileges an objective outside observer of all the events in the universe; one of Einstein’s insights was to abolish the idea of a privileged point of view and emphasise that everyone is involved in the universe’s dynamic interactions; thus gravity does not pass through a clear, fixed thing called space; gravity bends space.
The American physicist John Wheeler once encapsulated Einstein’s theory in two sentences:
Matter tells space how to curve. Space tells matter how to move. (quoted on page 52)
• Marcel Grossmann provided the mathematical underpinning for Einstein’s insights
• Willem de Sitter (1872-1934) inventor of, among other things, the de Sitter effect which represents the effect of the curvature of spacetime, as predicted by general relativity, on a vector carried along with an orbiting body – de Sitter’s universe gets bigger and bigger for ever but never had a zero point; but then de Sitter’s model contains no matter
• Vesto Slipher (1875-1969) astronomer who discovered the red shifting of distant galaxies in 1912, the first ever empirical evidence for the expansion of the universe
• Alexander Friedmann (1888-1925) Russian mathematician who produced purely mathematical solutions to Einstein’s equation, devising models where the universe started out of nothing and expanded a) fast enough to escape the gravity exerted by its own contents and so will expand forever or b) will eventually succumb to the gravity of its own contents, stop expanding and contract back towards a big crunch. He also speculated that this process (expansion and contraction) could happen an infinite number of times, creating a cyclic series of bangs, expansions and contractions, then another bang etc
A graphic of the oscillating or cyclic universe (from Discovery magazine)
• Arthur Eddington (1882-1944) most distinguished astrophysicist of the 1920s
• George Lemaître (1894-1966) first to combine an expanding universe interpretation of Einstein’s equations with the latest data about redshifting, and show that the universe of Einstein’s equations would be very sensitive to small changes – his model is close to Eddington’s so that it is often called the Eddington-Lemaître universe: it is expanding, curved and finite but doesn’t have a beginning
• Edwin Hubble (1889-1953) provided solid evidence of the redshifting (moving away) of distant galaxies, a main plank in the whole theory of a big bang, inventor of Hubble’s Law:
• Objects observed in deep space – extragalactic space, 10 megaparsecs (Mpc) or more – are found to have a redshift, interpreted as a relative velocity away from Earth
• This Doppler shift-measured velocity of various galaxies receding from the Earth is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away (see the formula below)
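Written as a formula (my gloss on the two bullet points above, not Barrow’s wording): v ≈ H₀ × D, where v is a galaxy’s recession velocity, D its distance from us, and H₀ the Hubble constant, measured today at roughly 70 kilometres per second per megaparsec.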
• Richard Tolman (1881-1948) took Friedmann’s idea of an oscillating universe and showed that the increased entropy of each universe would accumulate, meaning that each successive ‘bounce’ would get bigger; he also investigated what ‘lumpy’ universes would look like where matter is not evenly spaced but clumped: some parts of the universe might reach a maximum and start contracting while others wouldn’t; some parts might have had a big bang origin, others might not have
• Arthur Milne (1896-1950) showed that the tension between the outward exploding force posited by Einstein’s cosmological constant and the gravitational contraction could actually be described using just Newtonian mathematics: ‘Milne’s universe is the simplest possible universe with the assumption that the universe is uniform in space and isotropic’, a ‘rational’ and consistent geometry of space – Milne labelled the assumption of Einsteinian physics that the universe is the same in all places the Cosmological Principle
• Edmund Fournier d’Albe (1868-1933) posited that the universe has a hierarchical structure from atoms to the solar system and beyond
• Carl Charlier (1862-1934) introduced a mathematical description of a never-ending hierarchy of clusters
• Karl Schwarzschild (1873-1916) suggested that the geometry of the universe is not flat as Euclid had taught, but might be curved as in the non-Euclidean geometries developed by mathematicians Riemann, Gauss, Bolyai and Lobachevski in the early 19th century
• Franz Selety (1893-1933) devised a model for an infinitely large hierarchical universe which contained an infinite mass of clustered stars filling the whole of space, yet with a zero average density and no special centre
• Edward Kasner (1878-1955) a mathematician interested solely in finding mathematical solutions to Einstein’s equations, Kasner came up with a new idea, that the universe might expand at different rates in different directions, in some parts it might shrink, changing shape to look like a vast pancake
• Paul Dirac (1902-84) developed a Large Number Hypothesis that the really large numbers which are taken as constants in Einstein’s and other astrophysics equations are linked at a deep undiscovered level, among other things abandoning the idea that gravity is a constant: soon disproved
• Pascual Jordan (1902-80) suggested a slight variation of Einstein’s theory which accounted for a varying constant of gravitation as through it were a new source of energy and gravitation
• Robert Dicke (1916-97) developed an alternative theory of gravitation
• Nathan Rosen (1909-95) young assistant to Einstein in America with whom he authored a paper in 1936 describing a universe which expands but has the symmetry of a cylinder, a theory which predicted the universe would be washed over by gravitational waves
• Ernst Straus (1922-83) another young assistant to Einstein with whom he developed a new model, an expanding universe like those of Friedman and Lemaître but which had spherical holes removed like the bubbles in an Aero, each hole with a mass at its centre equal to the matter which had been excavated to create the hole
• Eugene Lifschitz (1915-85) in 1946 showed that very small differences in the uniformity of matter in the early universe would tend to increase, an explanation of how the clumpy universe we live in evolved from an almost but not quite uniform distribution of matter – as we have come to understand that something like this did happen, Lifshitz’s calculations have come to be seen as a landmark
• Kurt Gödel (1906-1978) posited a rotating universe which didn’t expand and, in theory, permitted time travel!
• Hermann Bondi, Thomas Gold and Fred Hoyle collaborated on the steady state theory of a universe which is growing but remains essentially the same, fed by the creation of new matter out of nothing
• George Gamow (1904-68)
• Ralph Alpher and Robert Herman in 1948 showed that the ratio of the matter density of the universe to the cube of the temperature of any heat radiation present from its hot beginning is constant if the expansion is uniform and isotropic – they calculated the current radiation temperature should be 5 degrees Kelvin – ‘one of the most momentous predictions ever made in science’
• Abraham Taub (1911-99) made a study of all the universes that are the same everywhere in space but can expand at different rates in different directions
• Charles Misner (b.1932) suggested ‘chaotic cosmology’ i.e. that no matter how chaotic the starting conditions, Einstein’s equations prove that any universe will inevitably become homogenous and isotropic – disproved by the smoothness of the background radiation. Misner then suggested the Mixmaster universe, the most complicated interpretation of the Einstein equations in which the universe expands at different rates in different directions and the gravitational waves generated by one direction interferes with all the others, with infinite complexity
• Hannes Alfvén devised a matter-antimatter cosmology
• Alan Guth (b.1947) in 1981 proposed a theory of ‘inflation’, that milliseconds after the big bang the universe underwent a swift process of hyper-expansion: inflation answers at a stroke a number of technical problems prompted by conventional big bang theory; but had the unforeseen implication that, though our region is smooth, parts of the universe beyond our light horizon might have grown from other areas of inflated singularity and have completely different qualities
• Andrei Linde (b.1948) extrapolated that the inflationary regions might create sub-regions in which further inflation might take place, so that a potentially infinite series of new universes spawn new universes in an ‘endlessly bifurcating multiverse’. We happen to be living in one of these bubbles which has lasted long enough for the heavy elements and therefore life to develop; who knows what’s happening in the other bubbles?
• Ted Harrison (1919-2007) British cosmologist speculated that super-intelligent life forms might be able to develop and control baby universes, guiding the process of inflation so as to produce the constants required for just the right speed of growth to allow stars, planets and life forms to evolve. Maybe they’ve done it already. Maybe we are the result of their experiments.
• Nick Bostrom (b.1973) Swedish philosopher: if universes can be created and developed like this then they will proliferate until the odds are that we are living in a ‘created’ universe and, maybe, are ourselves simulations in a kind of multiverse computer simulation
Although the arrival of Einstein and his theory of relativity marks a decisive break with the tradition of Newtonian physics, and comes at page 47 of this 300-page book, it seemed to me the really decisive break comes on page 198 with the publication Alan Guth’s theory of inflation.
Up till the Guth breakthrough, astrophysicists and astronomers appear to have focused their energy on the universe we inhabit. There were theoretical digressions into fantasies about other worlds and alternative universes but they appear to have been personal foibles and everyone agreed they were diversions from the main story.
However, the idea of inflation, while it solved half a dozen problems caused by the idea of a big bang, seems to have spawned a literally fantastic series of theories and speculations.
Throughout the twentieth century, cosmologists grew used to studying the different types of universe that emerged from Einstein’s equations, but they expected that some special principle, or starting state, would pick out one that best described the actual universe. Now, unexpectedly, we find that there might be room for many, perhaps all, of these possible universes somewhere in the multiverse. (p.254)
This is a really massive shift and it is marked by a shift in the tone and approach of Barrow’s book. Up till this point it had jogged along at a brisk rate namechecking a steady stream of mathematicians, physicists and explaining how their successive models of the universe followed on from or varied from each other.
Now this procedure comes to a grinding halt while Barrow enters a realm of speculation. He discusses the notion that the universe we live in might be a fake, evolved from a long sequence of fakes, created and moulded by super-intelligences for their own purposes.
Each of us might be mannequins acting out experiments, observed by these super-intelligences. In which case what value would human life have? What would be the definition of free will?
Maybe the discrepancies we observe in some of the laws of the universe have been planted there as clues by higher intelligences? Or maybe, over vast periods of time, and countless iterations of new universes, the laws they first created for this universe where living intelligences could evolve have slipped, revealing the fact that the whole thing is a facade.
These super-intelligences would, of course, have computers and technology far in advance of ours etc. I felt like I had wandered into a prose version of The Matrix and, indeed, Barrow apologises for straying into areas normally associated with science fiction (p.241).
Imagine living in a universe where nothing is original. Everything is a fake. No ideas are ever new. There is no novelty, no originality. Nothing is ever done for the first time and nothing will ever be done for the last time… (p.244)
And so on. During this 15-page-long fantasy the handy sequence of physicists comes to an end as he introduces us to contemporary philosophers and ethicists who are paid to think about the problem of being a simulated being inside a simulated reality.
Take Robin Hanson (b.1959), a research associate at the Future of Humanity Institute of Oxford University who, apparently, advises us all that we ought to behave so as to prolong our existence in the simulation or, hopefully, ensure we get recreated in future iterations of the simulation.
Are these people mad? I felt like I’d been transported into an episode of The Outer Limits or was back with my schoolfriend Paul, lying in a summer field getting stoned and wondering whether dandelions were a form of alien life that were just biding their time till they could take over the world. Why not, man?
I suppose Barrow has to include this material, and explain the nature of the anthropic principle (p.250), and go on to a digression about the search for extra-terrestrial life (p.248), and discuss the ‘replication paradox’ (in an infinite universe there will be infinite copies of you and me in which we perform an infinite number of variations on our lives: what would happen if you came face to face with one of your ‘copies’? p.246) – because these are, in their way, theories – if very fantastical theories – about the nature of the universe and his stated aim is to be completely comprehensive.
The anthropic principle Observations of the universe must be compatible with the conscious and intelligent life that observes it. The universe is the way it is, because it has to be the way it is in order for life forms like us to evolve enough to understand it.
Still, it was a relief when he returned from vague and diffuse philosophical speculation to the more solid territory of specific physical theories for the last forty or so pages of the book. But it was very noticeable that, as he came up to date, the theories were less and less attached to individuals: modern research is carried out by large groups. And he increasingly is describing the swirl of ideas in which cosmologists work, which often don’t have or need specific names attached. And this change is denoted, in the texture of the prose, by an increase in the passive voice, the voice in which science papers are written: ‘it was observed that…’, ‘it was expected that…’, and so on.
• Edward Tryon (b.1940) American particle physicist speculated that the entire universe might be a virtual fluctuation from the quantum vacuum, governed by the Heisenberg Uncertainty Principle that limits our simultaneous knowledge of the position and momentum, or the time of occurrence and energy, of anything in Nature.
• George Ellis (b.1939) created a catalogue of ‘topologies’ or shapes which the universe might have
• Dmitri Sokolov and Victor Shvartsman in 1974 worked out what the practical results would be for astronomers if we lived in a strange shaped universe, for example a vast doughnut shape
• Yakob Zeldovich and Andrei Starobinsky in 1984 further explored the likelihood of various types of ‘wraparound’ universes, predicting the fluctuations in the cosmic background radiation which might confirm such a shape
• 1967 the Wheeler-De Witt equation – a first attempt to combine Einstein’s equations of general relativity with the Schrödinger equation that describes how the quantum wave function changes with space and time
• the ‘no boundary’ proposal – in 1982 Stephen Hawking and James Hartle used an elegant formulation of quantum mechanics introduced by Richard Feynman to calculate the probability that the universe would be found to be in a particular state. What is interesting is that in this theory time is not important; time is a quality that emerges only when the universe is big enough for quantum effects to become negligible; the universe doesn’t technically have a beginning because the nearer you approach to it, time disappears, becoming part of four-dimensional space. This ‘no boundary’ state is the centrepiece of Hawking’s bestselling book A Brief History of Time (1988). According to Barrow, the Hartle-Hawking model was eventually shown to lead to a universe that was infinitely large and empty i.e. not our one.
The Hartle-Hawking no-boundary proposal
• In 1986 Barrow proposed a universe with a past but no beginning because all the paths through time and space would be very large closed loops
• In 1997 Richard Gott and Li-Xin Li took the eternal inflationary universe postulated above and speculated that some of the branches loop back on themselves, giving birth to themselves
The self-creating universe of J.Richard Gott III and Li-Xin Li
• In 2001 Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok proposed a variation of the cyclic universe which incorporated string theory and which they called the ‘ekpyrotic’ universe, ekpyrotic denoting the fiery flame into which each universe plunges only to be born again in a big bang. The new idea they introduced is that two three-dimensional universes may approach each other by moving through the additional dimensions posited by string theory. When they collide they set off another big bang. These 3-D universes are called ‘braneworlds’, short for membrane, because they will be very thin
• If a universe existing in a ‘bubble’ in another dimension ‘close’ to ours had ever impacted on our universe, some calculations indicate it would leave marks in the cosmic background radiation, a stripey effect.
• In 1998 Andy Albrecht, João Magueijo and Barrow explored what might have happened if the speed of light, the most famous of cosmological constants, had in fact decreased in the first few milliseconds after the bang. There is now an entire suite of theories known as ‘Varying Speed of Light’ cosmologies.
• Modern ‘String Theory’ only functions if it assumes quite a few more dimensions than the three we are used to. In fact some string theories require there to be more than one dimension of time. If there are really ten or 11 dimensions then, possibly, the ‘constants’ all physicists have taken for granted are only partial aspects of constants which exist in higher dimensions. Possibly, they might change, effectively undermining all of physics.
• The Lambda-CDM model is a cosmological model in which the universe contains three major components: 1. a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy; 2. the postulated cold dark matter (abbreviated CDM); 3. ordinary matter. It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
• the existence and structure of the cosmic microwave background
• the large-scale structure in the distribution of galaxies
• the abundances of hydrogen (including deuterium), helium, and lithium
• the accelerating expansion of the universe observed in the light from distant galaxies and supernovae
He ends with a summary of our existing knowledge, and indicates the deep puzzles which remain, not least the true nature of the ‘dark matter’ which is required to make sense of the expanding universe model. And he ends the whole book with a pithy soundbite. Speaking about the ongoing acceptance of models which posit a ‘multiverse’, in which all manner of other universes may be in existence, but beyond the horizon of where can see, he says:
Copernicus taught us that our planet was not at the centre of the universe. Now we may have to accept that even our universe is not at the centre of the Universe.
Institute for Nuclear Research
MTA Certificate of Research Excellence
Exactly solvable problems in quantum mechanics
Research area: Quantum physics
Quantum systems are usually described in terms of the Schrödinger equation that contains a potential accounting for the interactions. The solutions are usually obtained by numerical techniques but sometimes they can be written in exact mathematical form. The importance of exactly solvable problems lies in the fact that they can be used to develop and test numerical techniques. Furthermore, they give insight into the fundamental aspects of quantum mechanics, e.g. the symmetries of the system.
The exact description of quantum mechanical potential problems is usually performed by transforming the Schrödinger equation into the second-order differential equation of some special function of mathematical physics. Depending on the choice of this function and on the variable transformation, various classes of exactly solvable potentials can be obtained. The most widely discussed potentials belong to the six-parameter Natanzon class, in which case the solutions of the Schrödinger equation are obtained in terms of a single hypergeometric or confluent hypergeometric function. The most widely known exactly solvable potentials (harmonic oscillator, Coulomb, Pöschl-Teller etc.) are two- or three-parameter subclasses of the Natanzon class.
Supersymmetric quantum mechanics (SUSYQM) is another standard tool of analysing exactly solvable potentials. This method can be used to generate new exactly solvable potentials from known ones such that the two potentials are isospectral, except perhaps for their ground states.
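As a reminder of the standard construction behind this statement (textbook conventions with ħ = 2m = 1, not taken from the summary above): from a superpotential W(x) one builds

$$A = \frac{d}{dx} + W(x), \qquad A^{\dagger} = -\frac{d}{dx} + W(x),$$
$$H_{-} = A^{\dagger}A = -\frac{d^{2}}{dx^{2}} + W^{2}(x) - W'(x), \qquad H_{+} = A A^{\dagger} = -\frac{d^{2}}{dx^{2}} + W^{2}(x) + W'(x),$$

so the partner potentials $V_{\pm} = W^{2} \pm W'$ have identical spectra, except that the zero-energy ground state of $H_{-}$ (when it is normalizable) has no counterpart in $H_{+}$.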
There are several ways to obtain exactly solvable potentials beyond the Natanzon class. One is considering further special functions of mathematical physics (e.g. the Bessel function, Heun functions etc.), while another one is considering the linear combination of several special functions of the same type in the solutions.
The variable transformation method, as well as the SUSY transformations can also be connected with group theoretical techniques, which allow for the discussion of various symmetry concepts related to the potentials.
PT-symmetric quantum mechanics, introduced in 1998, is a non-Hermitian formulation of quantum mechanics with complex potentials. The Hamiltonian of these systems is invariant under simultaneous space (P) and time (T) inversion, and has real energy eigenvalues as long as PT symmetry is unbroken. The standard methods have been adapted to this theory too, leading to a large number of exactly solvable PT-symmetric potentials.
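For concreteness (these are the standard definitions, not details of the group's own results): under P the coordinate and momentum change sign, $x \to -x$, $p \to -p$, while T sends $p \to -p$ and $i \to -i$, so a Hamiltonian $H = p^{2} + V(x)$ is PT-symmetric precisely when

$$V(-x) = V^{*}(x).$$

The best-known example is the Bender–Boettcher family $H = p^{2} + x^{2}(ix)^{\varepsilon}$ with $\varepsilon \geq 0$, whose energy eigenvalues are real and positive throughout the region of unbroken PT symmetry.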
Bender, CM ; Lévai, G: Exactly Solvable PT-Symmetric Models
In: Tateo, Roberto; Lévai, Géza; Kuzhel, Sergii; Jones, Hugh F; Hook, Daniel W; Fring, Andreas; Dunning, Clare; Dorey, Patrick E; Bender, Carl M:
PT Symmetry in Quantum and Classical Physics
London, UK / World Scientific (Europe), (2019) pp. 221-260. , 40 p. |
Self-similarity property of acoustic data acquired in shallow water environment
Underwater acoustic modeling in a shallow water environment is difficult because sound waves reflect many times between the surface and the bottom. This article discusses a method for analyzing underwater acoustic characteristics based on self-similarity. It is found that the acoustic signal has good self-similarity in shallow water. An actual towed hydrophone linear array was built and used for an underwater acoustic signal acquisition experiment in Qilihai Reservoir, located in the suburbs of Tianjin, China. Analysis of the variance of the m-aggregated time series shows that the signals acquired by the hydrophones are self-similar. This demonstrates that self-similarity can be used to characterize sound pulse propagation in shallow water.
1. Introduction
The hydrophone array plays an increasingly important role in gaining access to ocean information. For example, active acoustic detection methods can be used for real-time monitoring of marine fish density and behavior [1–3]. Compared with traditional approaches, ocean acoustic waveguide remote sensing with a hydrophone linear array can provide real-time imaging over thousands of square kilometers and continuous monitoring of a specified body of water. As a type of active acoustic detection, underwater seismic exploration is widely used to detect potential seabed oil and natural gas reservoirs, combustible ice, and other resources. The Norwegian Gullfaks oil field went into operation in 1986. To improve the efficiency of the field's exploitation, four time-lapse seismic surveys were carried out in 1985 (baseline data), 1995, 1996, and 1999. Using the time-lapse seismic data, the movement of the injected water was analyzed and forecast; as a result, the recovery factor of the field increased by about 2% [4]. Seismic data can also be used to monitor fields already being produced. Swanston et al. [5] compared seismic data from a drilling platform in the Gulf of Mexico acquired 8 years apart; the difference in acoustic response between water and hydrocarbons can be used for resource monitoring [5]. Acoustic data can also be used to monitor carbon sequestration in the deep ocean: by analyzing data from 1994, 1999, and 2001 in the same seismic reflection survey region, it was clearly concluded that the data reflected changes in the injected CO2 [6].
Results of recent studies have shown that many natural and artificial systems exhibit self-similarity [7–16]. Self-similarity is a key property in understanding widely occurring nonlinear physical systems. By dividing complex networks into boxes containing nodes within a given ‘size’, Song et al. [7] found that complex networks are scale-free, which is one piece of evidence confirming their self-similarity. Self-similarity is a common characteristic of many communication networks [8–10]. For example, the traffic of the World Wide Web, as a typical communication network, also has the characteristic of self-similarity [9]. Self-similarity is also present in ad hoc wireless network traffic; as a result, the traffic in ad hoc networks can be predicted by methods such as fuzzy logic systems [10]. Fermann [11] studied pulse propagation in optical-fiber amplifiers described by the nonlinear Schrödinger equation using self-similarity methods. From the study of Liang [12], it can be concluded that ultra-wideband radar signals do not have self-similarity. Considering the multi-scale structure of the sea and the change of the sea surface shape over time, Guan et al. [13] established a one-dimensional fractal sea surface model and simulated the time-varying wave motion using a closed-form solution of the nonlinear differential equations for sea waves. Qian [14] used a Koch curve, instead of a sine curve, to approximately model the scattering properties of the sea surface for sound waves and electromagnetic waves.
The main purpose of this article is to verify whether acoustic signals in shallow water have self-similarity. We established a hydrophone line array with 24 sensors in shallow water, and an actual data acquisition experiment was conducted in Qilihai Reservoir, Tianjin, China. In Section 2, we introduce the hydrophone linear array and the data acquisition experiment conducted in Qilihai Reservoir, and present the waveform of the original data sequence. In Section 3, we analyze the self-similarity of the multi-channel hydrophone data sequences using the variance–time plot.
2. Workbench testing and field data acquisition experiment
As shown in Figure 1, we implemented a towed hydrophone linear array and collected the acoustic signals at 4,000 samples per second. The actual system components and test instruments are listed below.
• 32-channel hydrophones;
• 24-bit analog-to-digital converters;
• hydrophone spacing of 2 m;
• 40-m flexible segment;
• PCI-interface data reception card;
• real-time data storage based on a double ping-pong buffer structure;
• Tektronix model 4104B oscilloscope.
Figure 1
Actual hydrophone line array being debugged in the processing workshop. The hydrophones were distributed uniformly along the tow rope laid on the ground. The PC in the upper right corner of the figure is mainly used to store the real-time data and reflected echoes.
The 32 sensors were evenly distributed within the exploration cable. Each node is responsible for signal acquisition from 16 hydrophones. In order to ensure synchronization of the collected data across the channels of the array, we designed a high-precision unified acquisition clock synchronization system for the linear array. After the hydrophones of each channel sense the changes in underwater sound intensity, the signals are amplified and filtered by a fully differential conditioning circuit and then digitized by an 8-channel Sigma-Delta ADC. Finally, the data are uploaded through the cascaded channel and stored individually on the host computer.
We use a temperature-compensated crystal oscillator in the head of the hydrophone linear array to generate a highly stable clock as the master clock. It is then transmitted over unshielded twisted pair to every data acquisition unit (DAU). The slave clock source in every DAU is a voltage-controlled temperature-compensated crystal oscillator (VCTCXO). The DAU does not run the VCTCXO open loop; instead, a phase-locked loop locks the data output pulses of the acquisition circuits to the master clock with zero retardation. With this high-precision sampling clock generation and transmission system, the array can acquire signals simultaneously at the sub-microsecond level, which is important in the offshore environment.
The actual data acquisition experiment was carried out in the Qilihai Reservoir, as illustrated in Figure 2, in the eastern suburbs of Tianjin. The average depth of the reservoir is about 4 m, and its total area is approximately 16.26 km². The bottom of the reservoir is a slime layer with some undulations. The hydrophone line array was placed on the surface of the water. Meanwhile, not far from the array, we placed a point-like sound source as the excitation source in the experiment.
Figure 2
Panorama of Qilihai Reservoir. The reservoir is located in the northeastern part of Tianjin, China, with an average depth of 4 m, fresh water, a soft muddy bottom, and aquatic plants distributed over part of the area.
The characteristics of acoustic wave propagation in shallow water are more complex than in deep water, because the undulations of the reservoir bottom are comparable to the water depth. Sound is reflected between the surface and the bottom even more often than in deep water. Therefore, acoustic modeling of shallow water is more difficult. This article discusses an acoustic signal analysis method based on fractals, which can be used for forecasting or targeting an artificial underwater signal, thereby reducing the difficulty of establishing an acoustic propagation model for the shallow water environment.
To observe the waveform of the original data sequence from the acquisition experiment, we first read the CH1 hydrophone data. Figure 3 shows the original waveform of the time-domain sequence, and the power spectrum of the CH1 hydrophone data is shown in Figure 4.
Figure 3
Original waveform signal of hydrophone CH1 in Qilihai Reservoir. On the left are the direct wave and the reflected echo pulses excited by a point-like sound source.
Figure 4
Power spectrum of CH1 of the hydrophone line array in Qilihai Reservoir.
3. Self-similarity properties of acoustic signals
For a detailed explanation of self-similarity in time series, see [8–14]. We discuss its definition briefly here. Our discussion in this section and the next closely follows those sources.
We begin with a zero-mean, stationary time series $X = (X_t;\ t = 1, 2, 3, \ldots)$, where $t$ is a semi-infinite discrete argument. We then define the m-aggregated series $X^{(m)} \overset{\Delta}{=} (X_k^{(m)};\ k = 1, 2, 3, \ldots)$. The symbol $\overset{\Delta}{=}$ denotes equality by definition. The m-aggregated series is constructed by summing non-overlapping blocks of size $m$ of the original series $X$. In fact, the self-similarity of a time series means that the series has long-range dependence; in that case the aggregated series has an autocorrelation function of the same form,
$$\gamma(k) \sim k^{-\beta}, \qquad k \to \infty, \quad 0 < \beta < 1.$$
In accordance with established practice, the parameter $H$ is the Hurst parameter, which is calculated as $H = 1 - \beta/2$.
If, for every positive integer $m$, $X^{(m)}$ has the same distribution as $X$ rescaled by $m^{H}$, we say that $X$ is $H$-self-similar. That is,
$$X_t \overset{\Delta}{=} m^{-H} \sum_{i=(t-1)m+1}^{tm} X_i, \qquad m \in \mathbb{N}.$$
In order to verify whether a sequence is H-self-similar, we can use the variance–time plot method [9]. It reflects the slowly declining variance of a self-similar sequence as the parameter m increases.
The concrete steps are listed below (see the scaling relation after this list):
• Preprocess the sequence X so that it has zero mean and unit variance;
• For different values of m (starting from 2 up to a relatively large positive integer), generate the aggregated sequences;
• Calculate the variance of each sequence X(m) and take its logarithm;
• Draw the variance–time plot: the variance of X(m) is plotted against m in a log–log two-dimensional coordinate system;
• If the plotted variances all lie above a reference line of slope –β (with 0 < β < 1), then the series X is self-correlated (self-similar); otherwise, it does not have self-similar characteristics.
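For reference, the scaling relation behind the plot (a standard result for long-range-dependent series, quoted here for completeness and using the block-averaged aggregated series as in [9]):

$$\operatorname{Var}\big(X^{(m)}\big) \sim c\, m^{-\beta} \ (m \to \infty), \qquad \log \operatorname{Var}\big(X^{(m)}\big) \approx \log c - \beta \log m, \qquad H = 1 - \frac{\beta}{2},$$

so the fitted slope of the log–log plot is $-\beta$, and a slope shallower than $-1$ (that is, $0 < \beta < 1$) indicates long-range dependence with $1/2 < H < 1$.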
If the time series X still shows self-similarity when m is a large integer, the sequence X can be said to have “long-range dependence.” Namely, in the seemingly haphazard sequence, values far in the future remain closely associated with the current value of the sequence. In other words, the influence of an element value x0 of the time series X on subsequent element values xt extends indefinitely. This feature is also one of the potential applications of self-similarity in the field of underwater acoustic detection and target identification.
The variance–time plot of the CH1 hydrophone data sequence, shown in Figure 5, shows strong self-correlation in the dataset of that channel.
Figure 5
Variance–time plot of CH1 of the hydrophone line array.
To further analyze how universal the self-similarity of the underwater acoustic signal is, we selected and analyzed the data sequences of the CH2–CH5 hydrophones over the same time period, mainly studying the original sequences and the variance–time results.
The original data acquired by the hydrophone array in the shallow water environment are analyzed and processed by the following steps. As shown in Figure 6, the mean values of the four channels are μ1 = -1.0823, μ2 = 0.0138, μ3 = -0.9440, and μ4 = -1.1881, respectively. The underwater acoustic data collected by the hydrophones are thus not zero-mean, and the absolute amplitude of the sequence values has a large influence on the variance. These factors do not comply with the self-similarity test for a sequence, so normalization to zero mean and unit variance is a prerequisite. The variances for different values of m are then calculated in accordance with the method used in the literature [9, 10, 14], in order to validate the self-similarity of the data.
Figure 6
Original waveform signals acquired in the experiment at Qilihai Reservoir. On the left are the direct wave and the reflected echo pulses excited by a point-like sound source; the synchronization performance is good. At the same time, after the strong signal disappears, it can be seen that the shallow water environment causes large differences between the acoustic signal waveforms, which makes it difficult to analyze the signal characteristics by classic methods.
It can be seen from Figure 7 below that the acoustic data in the shallow water environment not only have strong self-similarity but also exhibit long-range dependence. Therefore, waveform changes long afterwards can be predicted from the existing data. Thus, we can relax the requirements on sound-field modeling of the shallow water environment before the data analysis.
Figure 7
Variance–time plots for CH1–CH4 of the hydrophone line array. The figure shows that, in the shallow water environment, the acoustic signal of each channel has good self-similarity. The straight line running from the origin at the upper left to the lower right corner marks a boundary of slope 0.5: if the variances lie below this line, the channel does not have self-similarity; otherwise it does.
4. Conclusion
Many research results show that self-similarity is a common phenomenon in optical fibers, World Wide Web and ad hoc network traffic, other communication network traffic data, and other complex systems. In this article, we discuss the self-similarity of underwater acoustic signals collected by a towed hydrophone array in a shallow water environment. We computed variance–time plots for four different channels of underwater acoustic signals over the same time period and verified their stable self-similar characteristics.
One possible reason for the self-similarity of underwater acoustic signals in shallow waters is the fractal character of the scattering properties of the water surface and bottom; the water surface itself has fractal characteristics. After reflection from the surface and the bottom, the sound signal received by the hydrophone line array will contain self-similarity, since the reflecting medium has self-similar fractal characteristics. In this article, the self-similarity of underwater acoustic signals is studied in a reservoir environment through the analysis of actual experimental data. It is shown that the underwater acoustic signals have long-range dependence, which lays the foundation for future research on underwater target detection and signal processing methods from the self-similarity perspective.
1. Nicholas MC, Purnima R, Deanelle ST, Srinivasan J, Lee S, Redwood NW: Fish population and behavior revealed by instantaneous continental shelf-scale imaging. Science 2006, 311: 660-663. 10.1126/science.1121756
2. Nicholas MC, Purnima R, Srinivasan J, Zheng G, Mark A, Ioannis B, Olav GR, Redwood NW, Michael JJ: Critical population density triggers rapid formation of vast oceanic fish shoals. Science 2009, 323: 1734-1737. 10.1126/science.1169441
3. Nicholas MC, Srinivasan J, Anamaria I: Ocean acoustic waveguide remote sensing: visualizing life around seamounts. Oceanography 2010, 23(1):204-205. 10.5670/oceanog.2010.95
4. Landrø M, Strønen LK, Digranes P, Solheim OA, Hilde E: Time-lapse seismic as a complementary tool for in-fill drilling. J. Petrol. Sci. Eng. 2001, 31: 81-92. 10.1016/S0920-4105(01)00122-X
5. Swanston AM, Flemings PB, Comisky JT, Best DP: Time-lapse imaging at Bullwinkle Field, Green Canyon 65, offshore Gulf of Mexico. Geophysics 2003, 68(5):1470-1484. 10.1190/1.1620620
6. Arts R, Eiken O, Chadwick A, Zweigel P, van der Meer L, Zinszner B: Monitoring of CO2 injected at Sleipner using time-lapse seismic data. Energy 2004, 29: 1383-1392. 10.1016/
7. Song C, Havlin S, Makse HA: Self-similarity of complex networks. Nature 2005, 433: 392-395. 10.1038/nature03248
8. Song S, Ng JK: Some results on the self-similarity property in communication networks. IEEE Trans. Commun. 2004, 52(10):1636-1642. 10.1109/TCOMM.2004.833136
9. Crovella ME: Self-similarity in world wide web traffic: evidence and possible causes. IEEE ACM Trans. Network 1997, 5(6):835-846. 10.1109/90.650143
10. Liang QL: Ad hoc wireless network traffic—self-similarity and forecasting. IEEE Commun. Lett. 2002, 6(7):297-299.
11. Fermann ME: Self-similar propagation and amplification of parabolic pulses in optical fibers. Phys. Rev. Lett. 2000, 84(26):6010-6013. 10.1103/PhysRevLett.84.6010
12. Liang Q: Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband. IEEE Sens. J. 2011, 11(6):1448-1457.
13. Guan J, Liu NB, Huang Y: Fractal Theory and Applications of Radar Target Detection. Publishing House of Electronics Industry: Beijing; 2011.
14. Qian ZW: The Nonlinear Acoustics. 2nd edition. Beijing: Science Press; 2009.
15. Unser M, Blu T: Self-similarity: part I—splines and operators. IEEE Trans. Signal Process. 2007, 55(4):1352-1363.
16. Unser M, Blu T: Self-similarity: part II—optimal estimation of fractal processes. IEEE Trans. Signal Process. 2007, 55(4):1364-1378.
Download references
This study was supported by grants from the Program for New Century Excellent Talents in University (NCET), TOA (KX2010-0006), TSTC (11ZCKFGX03600), DFTJNU (52XK1206), and MYATRP (10zd2114) in China.
Author information
Correspondence to Ying Tong.
Additional information
Competing interests
The authors declare that they have no competing interests.
Cite this article
Chen, J., Duan, F., Jiang, J. et al. Self-similarity property of acoustic data acquired in shallow water environment. J Wireless Com Network 2013, 91 (2013).
• Fractal
• Self-similarity
• Towed hydrophone linear array
• Shallow water environment
The Imaginary Energy Space
Original post:
Intriguing title, isn’t it? You’ll think this is going to be highly speculative and you’re right. In fact, I could also have written: the imaginary action space, or the imaginary momentum space. Whatever. It all works ! It’s an imaginary space – but a very real one, because it holds energy, or momentum, or a combination of both, i.e. action. 🙂
So the title is either going to deter you or, else, encourage you to read on. I hope it’s the latter. 🙂
In my post on Richard Feynman’s exposé on how Schrödinger got his famous wave equation, I noted an ambiguity in how he deals with the energy concept. I wrote that piece in February, and we are now May. In-between, I looked at Schrödinger’s equation from various perspectives, as evidenced from the many posts that followed that February post, which I summarized on my Deep Blue page, where I note the following:
1. The argument of the wavefunction (i.e. θ = ωt – kx = [E·t – p·x]/ħ) is just the proper time of the object that’s being represented by the wavefunction (which, in most cases, is an elementary particle—an electron, for example).
2. The 1/2 factor in Schrödinger’s equation (∂ψ/∂t = i·(ħ/2m)·∇²ψ) doesn’t make all that much sense, so we should just drop it. Writing ∂ψ/∂t = i·(ħ/m)·∇²ψ (i.e. Schrödinger’s equation without the 1/2 factor) does away with the mentioned ambiguities and, more importantly, avoids obvious contradictions.
Both remarks are rather unusual—especially the second one. In fact, if you’re not shocked by what I wrote above (Schrödinger got something wrong!), then stop reading—because then you’re likely not to understand a thing of what follows. 🙂 In any case, I thought it would be good to follow up by devoting a separate post to this matter.
The argument of the wavefunction as the proper time
Frankly, it took me quite a while to see that the argument of the wavefunction is nothing but the t′ = (t − v∙x)/√(1 − v²) formula that we know from the Lorentz transformation of spacetime. Let me quickly give you the formulas (with v measured as a fraction of c, i.e. we substitute v/c for v): x′ = (x − v∙t)/√(1 − v²) and t′ = (t − v∙x)/√(1 − v²).
In fact, let me be precise: the argument of the wavefunction also has the particle’s rest mass m₀ in it. That mass factor (m₀) appears in it as a general scaling factor, so it determines the density of the wavefunction both in time as well as in space. Let me jot it down:
ψ(x, t) = a·e^(i·(mv·t − p∙x)) = a·e^(i·[(m₀/√(1 − v²))·t − (m₀·v/√(1 − v²))∙x]) = a·e^(i·m₀·(t − v∙x)/√(1 − v²))
Huh? Yes. Let me show you how we get from θ = ωt – kx = [E·t – p·x]/ħ to θ = mv·t − p∙x. It’s really easy. We first need to choose our units such that the speed of light and Planck’s constant are numerically equal to one, so we write: c = 1 and ħ = 1. So now the 1/ħ factor no longer appears.
[Let me note something here: using natural units does not do away with the dimensions: the dimensions of whatever is there remain what they are. For example, energy remains what it is, and so that’s force over distance: 1 joule = 1 newton·meter (1 J = 1 N·m). Likewise, momentum remains what it is: force times time (or mass times velocity). Finally, the dimension of the quantum of action doesn’t disappear either: it remains the product of force, distance and time (N·m·s). So you should distinguish between the numerical value of our variables and their dimension. Always! That’s where physics is different from algebra: the equations actually mean something!]
Now, because we’re working in natural units, the numerical value of both c and c² will be equal to 1. It’s obvious, then, that Einstein’s mass-energy equivalence relation reduces from E = mv·c² to E = mv. You can work out the rest yourself – noting that p = mv·v and mv = m₀/√(1 − v²). Done! For a more intuitive explanation, I refer you to the above-mentioned page.
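If you want to check that claim about the argument of the wavefunction without doing the algebra by hand, here is a quick SymPy verification (my own little script, not part of the original derivation) that E·t − p·x does reduce to m₀·(t − v·x)/√(1 − v²) in natural units:

```python
import sympy as sp

m0, v, t, x = sp.symbols('m0 v t x', real=True, positive=True)

gamma = 1 / sp.sqrt(1 - v**2)      # natural units: c = 1
E = m0 * gamma                     # E = mv
p = m0 * gamma * v                 # p = mv·v
theta = E * t - p * x              # argument of the wavefunction (ħ = 1)

# Should print 0: the argument is m0 times the Lorentz-transformed time.
print(sp.simplify(theta - m0 * (t - v * x) / sp.sqrt(1 - v**2)))
```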
So that’s for the wavefunction. Let’s now look at Schrödinger’s wave equation, i.e. that differential equation of which our wavefunction is a solution. In my introduction, I bluntly said there was something wrong with it: that 1/2 factor shouldn’t be there. Why not?
What’s wrong with Schrödinger’s equation?
When deriving his famous equation, Schrödinger uses the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v²/2, and that’s why – after all the complicated turns – that 1/2 factor is there. There are many reasons why that factor doesn’t make sense. Let me sum up a few.
[I] The most important reason is that de Broglie made it quite clear that the energy concept in his equations for the temporal and spatial frequency of the wavefunction – i.e. the ω = E/ħ and k = p/ħ relations – is the total energy, including rest energy (m₀·c²), kinetic energy (m·v²/2) and any potential energy (V). In fact, if we just combine the two de Broglie relations (aka matter-wave equations) with the old-fashioned v = f·λ relation (so we write E as E = ω·ħ = (2π·f)·(h/2π) = f·h, and p as p = k·ħ = (2π/λ)·(h/2π) = h/λ and, therefore, we have f = E/h and λ = h/p), we find that the energy concept that’s implicit in the two matter-wave equations is equal to E = m∙v²: indeed, v = f·λ = (E/h)·(h/p) = E/p, so E = p·v = m·v·v = m·v².
Huh? E = m∙v²? Yes. Not E = m∙c² or m·v²/2 or whatever else you might be thinking of. In fact, this E = m∙v² formula makes a lot of sense in light of the two following points.
Skeptical note: You may – and actually should – wonder whether we can use that v = f·λ relation for a wave like this, i.e. a wave with both a real (cos(−θ)) as well as an imaginary component (i·sin(−θ)). It’s a deep question, and I’ll come back to it later. But… Yes. It’s the right question to ask. 😦
[II] Newton told us that force is mass times acceleration. Newton’s law is still valid in Einstein’s world. The only difference between Newton’s and Einstein’s world is that, since Einstein, we should treat the mass factor as a variable as well. We write: F = mv·a = [m₀/√(1 − v²)]·a. This formula gives us the definition of the newton as a force unit: 1 N = 1 kg·(m/s)/s = 1 kg·m/s². [Note that the 1/√(1 − v²) factor – i.e. the Lorentz factor (γ) – has no dimension, because v is measured as a relative velocity here, i.e. as a fraction between 0 and 1.]
Now, you’ll agree the definition of energy as a force over some distance is valid in Einstein’s world as well. Hence, if 1 joule is 1 N·m, then 1 J is also equal to 1 (kg·m/s²)·m = 1 kg·(m²/s²), so this also reflects the E = m∙v² concept. [I can hear you mutter: that kg factor refers to the rest mass, no? No. It doesn’t. The kg is just a measure of inertia: as a unit, it applies to both m₀ as well as mv. Full stop.]
Very skeptical note: You will say this doesn’t prove anything – because this argument just shows the dimensional analysis for both equations (i.e. E = m∙v² and E = m∙c²) is OK. Hmm… Yes. You’re right. 🙂 But the next point will surely convince you! 🙂
[III] The third argument is the most intricate and the most beautiful at the same time—not because it’s simple (like the arguments above) but because it gives us an interpretation of what’s going on here. It’s fairly easy to verify that Schrödinger’s equation, i.e. the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation (including the 1/2 factor to which I object), is equivalent to the following set of two equations: (1) Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) and (2) Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ).
[In case you don’t see it immediately, note that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. However, here we have something like this: a + i·b = i·(c + i·d) = i·c + i²·d = −d + i·c (remember i² = −1).]
Now, before we proceed (i.e. before I show you what’s wrong here with that 1/2 factor), let us look at the dimensions first. For that, we’d better analyze the complete Schrödinger equation so as to make sure we’re not doing anything stupid here by looking at one aspect of the equation only. The complete equation, in its original form, is:
i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ
Notice that, to simplify the analysis above, I had moved the i and the ħ on the left-hand side to the right-hand side (note that 1/i = −i, so −(ħ²/2m)/(i·ħ) = i·ħ/(2m)). Now, the ħ² factor on the right-hand side is expressed in J²·s². Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-equivalence relation to express m in J/(m/s)² units. So our ħ²/2m coefficient is expressed in (J²·s²)/[J/(m/s)²] = J·m². Now we multiply that by that Laplacian operating on some scalar, which yields some quantity per square meter. So the whole right-hand side becomes some amount expressed in joule, i.e. the unit of energy! Interesting, isn’t it?
On the left-hand side, we have i and ħ. We shouldn’t worry about the imaginary unit because we can treat that as just another number, albeit a very special number (because its square is minus 1). However, in this equation, it’s like a mathematical constant and you can think of it as something like π or e. [Think of the magical formula: eiπ = i2 = −1.] In contrast, ħ is a physical constant, and so that constant comes with some dimension and, therefore, we cannot just do what we want. [I’ll show, later, that even moving it to the other side of the equation comes with interpretation problems, so be careful with physical constants, as they really mean something!] In this case, its dimension is the action dimension: J·s = N·m·s, so that’s force times distance times time. So we multiply that with a time derivative and we get joule once again (N·m·s/s = N·m = J), so that’s the unit of energy. So it works out: we have joule units both left and right in Schrödinger’s equation. Nice! Yes. But what does it mean? 🙂
Well… You know that we can – and should – think of Schrödinger’s equation as a diffusion equation – just like a heat diffusion equation, for example – but then one describing the diffusion of a probability amplitude. [In case you are not familiar with this interpretation, please do check my post on it, or my Deep Blue page.] But then we didn’t describe the mechanism in very much detail, so let me try to do that now and, in the process, finally explain the problem with the 1/2 factor.
The missing energy
There are various ways to explain the problem. One of them involves calculating group and phase velocities of the elementary wavefunction satisfying Schrödinger’s equation but that’s a more complicated approach and I’ve done that elsewhere, so just click the reference if you prefer the more complicated stuff. I find it easier to just use those two equations above:
The argument is the following: if our elementary wavefunction is equal to e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt), then it’s easy to prove that this pair of conditions is fulfilled if, and only if, ω = k²·(ħ/2m). [Note that I am omitting the normalization coefficient in front of the wavefunction: you can put it back in if you want. The argument here is valid, with or without normalization coefficients.] Easy? Yes. Check it out. The time derivative on the left-hand side is equal to:
∂ψ/∂t = −i·ω·e^(i(kx − ωt)) = −i·ω·[cos(kx − ωt) + i·sin(kx − ωt)] = ω·sin(kx − ωt) − i·ω·cos(kx − ωt)
And the second-order derivative on the right-hand side is equal to:
∇²ψ = ∂²ψ/∂x² = (i·k)²·e^(i(kx − ωt)) = −k²·cos(kx − ωt) − i·k²·sin(kx − ωt)
So the two equations above are equivalent to writing:
1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·(ħ/2m)·sin(kx − ωt)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·(ħ/2m)·cos(kx − ωt)
So both conditions are fulfilled if, and only if, ω = k²·(ħ/2m). You’ll say: so what? Well… We have a contradiction here—something that doesn’t make sense. Indeed, the second of the two de Broglie equations (always look at them as a pair) tells us that k = p/ħ, so we can re-write the ω = k²·(ħ/2m) condition as:
ω/k = v_p = k²·(ħ/2m)/k = k·ħ/(2m) = (p/ħ)·(ħ/2m) = p/2m ⇔ p = 2m·v_p
You’ll say: so what? Well… Stop reading, I’d say. That p = 2m·v_p relation doesn’t make sense—at all! Nope! In fact, if you thought that the E = m·v² formula is weird—which, I hope, is no longer the case by now—then… Well… This p = 2m·v_p equation is much weirder. In fact, it’s plain nonsense: this condition makes no sense whatsoever. The only way out is to remove the 1/2 factor, and to re-write the Schrödinger equation as I wrote it, i.e. with an ħ/m coefficient only, rather than an (1/2)·(ħ/m) coefficient.
Huh? Yes.
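If you don’t trust the hand calculation, here is a small SymPy check (mine, just to confirm the algebra) that the elementary wavefunction satisfies the equation with the 1/2 factor only if ω = ħ·k²/(2m), and the equation without it only if ω = ħ·k²/m:

```python
import sympy as sp

x, t, k, w, hbar, m = sp.symbols('x t k omega hbar m', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))  # elementary wavefunction

# Residual of Schrodinger's equation with the 1/2 factor ...
res_half = sp.diff(psi, t) - sp.I * (hbar / (2 * m)) * sp.diff(psi, x, 2)
# ... and without it.
res_full = sp.diff(psi, t) - sp.I * (hbar / m) * sp.diff(psi, x, 2)

print(sp.solve(sp.simplify(res_half / psi), w))  # [hbar*k**2/(2*m)]
print(sp.solve(sp.simplify(res_full / psi), w))  # [hbar*k**2/m]
```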
As mentioned above, I could do those group and phase velocity calculations to show you what rubbish that 1/2 factor leads to – and I’ll do that eventually – but let me first find yet another way to present the same paradox. Let’s simplify our life by choosing our units such that c = ħ = 1, so we’re using so-called natural units rather than our SI units. Our mass-energy equivalence then becomes: E = m·c² = m·1² = m. [Again, note that switching to natural units doesn’t do anything to the physical dimensions: a force remains a force, a distance remains a distance, and so on. So we’d still measure energy and mass in different but equivalent units. Hence, the equality sign should not make you think mass and energy are actually the same: energy is energy (i.e. force times distance), while mass is mass (i.e. a measure of inertia). I am saying this because it’s important, and because it took me a while to make these rather subtle distinctions.]
Let’s now go one step further and imagine a hypothetical particle with zero rest mass, so m₀ = 0. Hence, all its energy is kinetic and so we write: K.E. = mv·v²/2. Now, because this particle has zero rest mass, the slightest acceleration will make it travel at the speed of light. In fact, we would expect it to travel at speed c, so mv = mc and, according to the mass-energy equivalence relation, its total energy is, effectively, E = mv·c² = mc·c². However, we just said its total energy is kinetic energy only. Hence, its total energy must be equal to E = K.E. = mc·c²/2. So we’ve got only half the energy we need. Where’s the other half? Where’s the missing energy? Quid est veritas? Is its energy E = mc·c² or E = mc·c²/2?
It’s just a paradox, of course, but one we have to solve. Of course, we may just say we trust Einstein’s E = m·c² formula more than the kinetic energy formula, but that answer is not very scientific. 🙂 We’ve got a problem here and, in order to solve it, I’ve come to the following conclusion: just because of its sheer existence, our zero-mass particle must have some hidden energy, and that hidden energy is also equal to E = m·c²/2. Hence, the kinetic and the hidden energy add up to E = m·c² and all is alright.
Huh? Hidden energy? I must be joking, right?
Well… No. Let me explain. Oh. And just in case you wonder why I bother to try to imagine zero-mass particles. Let me tell you: it’s the first step towards finding a wavefunction for a photon and, secondly, you’ll see it just amounts to modeling the propagation mechanism of energy itself. 🙂
The hidden energy as imaginary energy
I am tempted to refer to the missing energy as imaginary energy, because it’s linked to the imaginary part of the wavefunction. However, it’s anything but imaginary: it’s as real as the imaginary part of the wavefunction. [I know that sounds a bit nonsensical, but… Well… Think about it. And read on!]
Back to that factor 1/2. As mentioned above, it also pops up when calculating the group and the phase velocity of the wavefunction. In fact, let me show you that calculation now. [Sorry. Just hang in there.] It goes like this.
The de Broglie relations tell us that the k and the ω in the e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction (i.e. the spatial and temporal frequency respectively) are equal to k = p/ħ, and ω = E/ħ. Let’s now think of that zero-mass particle once more, so we assume all of its energy is kinetic: no rest energy, no potential! So… If we now use the kinetic energy formula E = m·v²/2 – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p²/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction as: v_g = ∂ω/∂k = ∂(E/ħ)/∂(p/ħ) = ∂E/∂p = p/m = v.
Fine. Now the phase velocity. For the phase velocity of our e^(i(kx − ωt)) wavefunction, we find: v_p = ω/k = (E/ħ)/(p/ħ) = E/p = (p²/2m)/p = p/2m = v/2.
So that’s only half of v: it’s the 1/2 factor once more! Strange, isn’t it? Why would we get a different value for the phase velocity here? It’s not like we have two different frequencies here, do we? Well… No. You may also note that the phase velocity turns out to be smaller than the group velocity (as mentioned, it’s only half of the group velocity), which is quite exceptional as well! So… Well… What’s the matter here? We’ve got a problem!
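Again, you can let SymPy do the bookkeeping. The snippet below (my own check) uses the two de Broglie relations with E = p²/(2m) and confirms the group velocity comes out as p/m = v while the phase velocity comes out as p/(2m) = v/2:

```python
import sympy as sp

p, m, hbar = sp.symbols('p m hbar', positive=True)

E = p**2 / (2 * m)        # classical kinetic energy
k = p / hbar              # de Broglie: k = p/ħ
omega = E / hbar          # de Broglie: ω = E/ħ

v_group = sp.diff(omega, p) / sp.diff(k, p)   # dω/dk via the chain rule
v_phase = omega / k

print(sp.simplify(v_group))   # p/m, i.e. the classical velocity v
print(sp.simplify(v_phase))   # p/(2*m), i.e. only half of v
```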
What’s going on here? We have only one wave here—one frequency and, hence, only one k and ω. However, on the other hand, it’s also true that the ei(kx − ωt) wavefunction gives us two functions for the price of one—one real and one imaginary: ei(kx − ωt) = cos(kx−ωt) + i∙sin(kx−ωt). So the question here is: are we adding waves, or are we not? It’s a deep question. If we’re adding waves, we may get different group and phase velocities, but if we’re not, then… Well… Then the group and phase velocity of our wave should be the same, right? The answer is: we are and we aren’t. It all depends on what you mean by ‘adding’ waves. I know you don’t like that answer, but that’s the way it is, really. 🙂
Let me make a small digression here that will make you feel even more confused. You know – or you should know – that the sine and the cosine function are the same except for a phase difference of 90 degrees: sinθ = cos(θ − π/2). Now, at the same time, multiplying something with i amounts to a rotation by 90 degrees, as shown below.
Hence, in order to sort of visualize what our ei(kx − ωt) function really looks like, we may want to super-impose the two graphs and think of something like this:
You’ll have to admit that, when you see this, our formulas for the group or phase velocity, or our v = f·λ relation, do no longer make much sense, do they? 🙂
Having said that, that 1/2 factor is and remains puzzling, and there must be some logical reason for it. For example, it also pops up in the Uncertainty Relations:
Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2
So we have ħ/2 in both, not ħ. Why do we need to divide the quantum of action here? How do we solve all these paradoxes? It’s easy to see how: the apparent contradiction (i.e. the different group and phase velocity) gets solved if we’d use the E = m∙v2 formula rather than the kinetic energy E = m∙v2/2. But then… What energy formula is the correct one: E = m∙v2 or m∙c2? Einstein’s formula is always right, isn’t it? It must be, so let me postpone the discussion a bit by looking at a limit situation. If v = c, then we don’t need to make a choice, obviously. 🙂 So let’s look at that limit situation first. So we’re discussing our zero-mass particle once again, assuming it travels at the speed of light. What do we get?
Well… Measuring time and distance in natural units, so c = 1, we have:
E = m∙c² = m and p = m∙c = m, so we get: E = m = p
Wow! E = m = p! What a weird combination, isn’t it? Well… Yes. But it’s fully OK. [You tell me why it wouldn’t be OK. It’s true we’re glossing over the dimensions here, but natural units are natural units and, hence, the numerical value of c and c² is 1. Just figure it out for yourself.] The point to note is that the E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^(i(kx − ωt)) wavefunction, we get: v_g = ∂ω/∂k = ∂E/∂p = 1.
So that’s the velocity of our zero-mass particle (remember: the 1 stands for c here, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get: v_p = ω/k = E/p = 1.
However, if there’s hidden energy, we still need to show where it’s hidden. 🙂 Now that question is linked to the propagation mechanism that’s described by those two equations, which now – leaving the 1/2 factor out, simplify to:
1. Re(∂ψ/∂t) = −(ħ/m)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (ħ/m)·Re(∇²ψ)
Propagation mechanism? Yes. That’s what we’re talking about here: the propagation mechanism of energy. Huh? Yes. Let me explain in another separate section, so as to improve readability. Before I do, however, let me add another note—for the skeptics among you. 🙂
Indeed, the skeptics among you may wonder whether our zero-mass particle wavefunction makes any sense at all, and they should do so for the following reason: if x = 0 at t = 0, and it’s traveling at the speed of light, then x(t) = t. Always. So if E = m = p, the argument of our wavefunction becomes E·t – p·x = E·t – E·t = 0! So what’s that? The proper time of our zero-mass particle is zero—always and everywhere!?
Well… Yes. That’s why our zero-mass particle – as a point-like object – does not really exist. What we’re talking about is energy itself, and its propagation mechanism. 🙂
While I am sure that, by now, you’re very tired of my rambling, I beg you to read on. Frankly, if you got as far as you have, then you should really be able to work yourself through the rest of this post. 🙂 And I am sure that – if anything – you’ll find it stimulating! 🙂
The imaginary energy space
Look at the propagation mechanism for the electromagnetic wave in free space, which (for c = 1) is represented by the following two equations:
1. ∂B/∂t = –∇×E
2. ∂E/∂t = ∇×B
[In case you wonder, these are Maxwell’s equations for free space, so we have no stationary nor moving charges around.] See how similar this is to the two equations above? In fact, in my Deep Blue page, I use these two equations to derive the quantum-mechanical wavefunction for the photon (which is not the same as that hypothetical zero-mass particle I introduced above), but I won’t bother you with that here. Just note the so-called curl operator in the two equations above (∇×) can be related to the Laplacian we’ve used so far (∇2). It’s not the same thing, though: for starters, the curl operator operates on a vector quantity, while the Laplacian operates on a scalar (including complex scalars). But don’t get distracted now. Let’s look at the revised Schrödinger’s equation, i.e. the one without the 1/2 factor:
∂ψ/∂t = i·(ħ/m)·∇²ψ
On the left-hand side, we have a time derivative, so that’s a flow per second. On the right-hand side we have the Laplacian and the i·ħ/m factor. Now, written like this, Schrödinger’s equation really looks exactly the same as the general diffusion equation, which is written as: ∂φ/∂t = D·∇²φ, except for the imaginary unit, which makes it clear we’re getting two equations for the price of one here, rather than one only! 🙂 The point is: we may now look at that ħ/m factor as a diffusion constant, because it does exactly the same thing as the diffusion constant D in the diffusion equation ∂φ/∂t = D·∇²φ, i.e.:
So the diffusion constant for Schrödinger’s equation is ħ/m. What is its dimension? That’s easy: (N·m·s)/(N·s²/m) = m²/s. [Remember: 1 N = 1 kg·m/s², so 1 kg = 1 N·s²/m.] But then we multiply it with the Laplacian, so that’s something expressed per square meter, so we get something per second on both sides.
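Just to get a feel for the magnitude of that diffusion constant: for an electron (my example, using the standard values of ħ and the electron mass) it works out to roughly a square centimeter per second:

```python
hbar = 1.054571817e-34   # J·s
m_e = 9.1093837015e-31   # kg, electron rest mass
D = hbar / m_e           # the "diffusion constant" ħ/m of the revised equation
print(D, "m^2/s")        # about 1.16e-4 m²/s, i.e. roughly 1.16 cm²/s
```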
Of course, you wonder: what per second? Not sure. That’s hard to say. Let’s continue with our analogy with the heat diffusion equation so as to try to get a better understanding of what’s being written here. Let me give you that heat diffusion equation here. Assuming the heat per unit volume (q) is proportional to the temperature (T) – which is the case when expressing T in degrees Kelvin (K), so we can write q as q = k·T – we can write it as:
k·∂T/∂t = κ·∇²T
So that’s structurally similar to Schrödinger’s equation, and to the two equivalent equations we jotted down above. So we’ve got T (temperature) in the role of ψ here—or, to be precise, in the role of ψ ‘s real and imaginary part respectively. So what’s temperature? From the kinetic theory of gases, we know that temperature is not just a scalar: temperature measures the mean (kinetic) energy of the molecules in the gas. That’s why we can confidently state that the heat diffusion equation models an energy flow, both in space as well as in time.
Let me make the point by doing the dimensional analysis for that heat diffusion equation. The time derivative on the left-hand side (∂T/∂t) is expressed in K/s (Kelvin per second). Weird, isn’t it? What’s a Kelvin per second? Well… Think of a Kelvin as some very small amount of energy in some equally small amount of space—think of the space that one molecule needs, and its (mean) energy—and then it all makes sense, doesn’t it?
However, in case you find that a bit difficult, just work out the dimensions of all the other constants and variables. The constant in front (k) makes sense of it. That coefficient (k) is the (volume) heat capacity of the substance, which is expressed in J/(m³·K). So the dimension of the whole thing on the left-hand side (k·∂T/∂t) is J/(m³·s), so that’s energy (J) per cubic meter (m³) and per second (s). Nice, isn’t it? What about the right-hand side? On the right-hand side we have the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on T. The Laplacian operator, when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it’s operating on T, so the dimension of ∇²T is K/m². Again, that doesn’t tell us very much (what’s the meaning of a Kelvin per square meter?) but we multiply it by the thermal conductivity (κ), whose dimension is W/(m·K) = J/(m·s·K). Hence, the dimension of the product is the same as the left-hand side: J/(m³·s). So that’s OK again, as energy (J) per cubic meter (m³) and per second (s) is definitely something we can associate with an energy flow.
In fact, we can play with this. We can bring k from the left- to the right-hand side of the equation, for example. The dimension of κ/k is m²/s (check it!), and multiplying that by K/m² (i.e. the dimension of ∇²T) gives us some quantity expressed in Kelvin per second, and so that’s the same dimension as that of ∂T/∂t. Done!
In fact, we’ve got two different ways of writing Schrödinger’s diffusion equation. We can write it as ∂ψ/∂t = i·(ħ/m)·∇²ψ or, else, we can write it as ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ. Does it matter? I don’t think it does. The dimensions come out OK in both cases. However, interestingly, if we do a dimensional analysis of the ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ equation, we get joule on both sides. Interesting, isn’t it? The key question, of course, is: what is it that is flowing here?
I don’t have a very convincing answer to that, but the answer I have is interesting—I think. 🙂 Think of the following: we can multiply Schrödinger’s equation with whatever we want, and then we get all kinds of flows. For example, if we multiply both sides with 1/(m²·s) or 1/(m³·s), we get an equation expressing the energy conservation law, indeed! [And you may want to think about the minus sign of the right-hand side of Schrödinger’s equation now, because it makes much more sense now!]
We could also multiply both sides with s, so then we get J·s on both sides, i.e. the dimension of physical action (J·s = N·m·s). So then the equation expresses the conservation of action. Huh? Yes. Let me re-phrase that: then it expresses the conservation of angular momentum—as you’ll surely remember that the dimension of action and angular momentum are the same. 🙂
And then we can divide both sides by m, so then we get N·s on both sides, so that’s momentum. So then Schrödinger’s equation embodies the momentum conservation law.
Isn’t it just wonderful? Schrödinger’s equation packs all of the conservation laws! 🙂 The only catch is that it flows back and forth from the real to the imaginary space, using that propagation mechanism as described in those two equations.
Now that is really interesting, because it does provide an explanation – as fuzzy as it may seem – for all those weird concepts one encounters when studying physics, such as the tunneling effect, which amounts to energy flowing from the imaginary space to the real space and, then, inevitably, flowing back. It also allows for borrowing time from the imaginary space. Hmm… Interesting! [I know I still need to make these points much more formally, but… Well… You kinda get what I mean, don’t you?]
To conclude, let me re-baptize my real and imaginary ‘space’ by referring to them to what they really are: a real and imaginary energy space respectively. Although… Now that I think of it: it could also be real and imaginary momentum space, or a real and imaginary action space. Hmm… The latter term may be the best. 🙂
Isn’t this all great? I mean… I could go on and on—but I’ll stop here, so you can freewheel around yourself. For example, you may wonder how similar that energy propagation mechanism actually is as compared to the propagation mechanism of the electromagnetic wave? The answer is: very similar. You can check how similar in one of my posts on the photon wavefunction or, if you’d want a more general argument, check my Deep Blue page. Have fun exploring! 🙂
So… Well… That’s it, folks. I hope you enjoyed this post—if only because I really enjoyed writing it. 🙂
OK. You’re right. I still haven’t answered the fundamental question.
So what about the 1/2 factor?
What about that 1/2 factor? Did Schrödinger miss it? Well… Think about it for yourself. First, I’d encourage you to further explore that weird graph with the real and imaginary part of the wavefunction. I copied it below, but with an added 45º line—yes, the green diagonal. To make it somewhat more real, imagine you’re the zero-mass point-like particle moving along that line, and we observe you from our inertial frame of reference, using equivalent time and distance units.
[Figure: the real (cosine, red) and imaginary (sine) components of the wavefunction, with the 45° spacetime diagonal added in green.]
So we’ve got that cosine (cosθ) varying as you travel, and we’ve also got the i·sinθ part of the wavefunction going while you’re zipping through spacetime. Now, THINK of it: the phase velocity of the cosine bit (i.e. the red graph) contributes as much to your lightning speed as the i·sinθ bit, doesn’t it? Should we apply Pythagoras’ basic r² = x² + y² Theorem here? Yes: the velocity vector along the green diagonal is going to be the sum of the velocity vectors along the horizontal and vertical axes. So… That’s great.
Yes. It is. However, we still have a problem here: it’s the velocity vectors that add up—not their magnitudes. Indeed, if we denote the velocity vector along the green diagonal as u, then we can calculate its magnitude as:
u = √(u²) = √[(v/2)² + (v/2)²] = √[2·(v²/4)] = √(v²/2) = v/√2 ≈ 0.7·v
So, as mentioned, we’re adding the vectors, but not their magnitudes. We’re somewhat better off than we were in terms of showing that the phase velocity of those sine and cosine velocities add up—somehow, that is—but… Well… We’re not quite there.
Fortunately, Einstein saves us once again. Remember we’re actually transforming our reference frame when working with the wavefunction? Well… Look at the diagram below (for which I thank the author)
[Diagram: Lorentz transformation of spacetime — the x′ and t′ axes tilting toward the lightlike diagonal as the relative velocity increases.]
In fact, let me insert an animated illustration, which shows what happens when the velocity goes up and down from (close to) −c to +c and back again. It’s beautiful, and I must credit the author here too. It sort of speaks for itself, but please do click the link as the accompanying text is quite illuminating. 🙂
The point is: for our zero-mass particle, the x’ and t’ axis will rotate into the diagonal itself which, as I mentioned a couple of times already, represents the speed of light and, therefore, our zero-mass particle traveling at c. It’s obvious that we’re now adding two vectors that point in the same direction and, hence, their magnitudes just add without any square root factor. So, instead of u = √[(v/2)2 + (v/2)2], we just have v/2 + v/2 = v! Done! We solved the phase velocity paradox! 🙂
So… I still haven’t answered that question. Should that 1/2 factor in Schrödinger’s equation be there or not? The answer is, obviously: yes. It should be there. And as for Schrödinger using the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v2/2… Well… What other mass concept would he use? I probably got a bit confused with Feynman’s exposé – especially this notion of ‘choosing the zero point for the energy’ – but then I should probably just re-visit the thing and adjust the language here and there. But the formula is correct.
Thinking it all through, the ħ/2m constant in Schrödinger’s equation should be thought of as the reciprocal of m/(ħ/2). So what we’re doing basically is measuring the mass of our object in units of ħ/2, rather than units of ħ. That makes perfect sense, if only because it’s ħ/2, rather than ħ, that appears as the factor in the Uncertainty Relations Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2. In fact, in my post on the wavefunction of the zero-mass particle, I noted its elementary wavefunction should use the m = E = p = ħ/2 values, so it becomes ψ(x, t) = a·e^(−i∙[(ħ/2)∙t − (ħ/2)∙x]/ħ) = a·e^(−i∙[t − x]/2).
Isn’t that just nice? 🙂 I need to stop here, however, because it looks like this post is becoming a book. Oh—and note that nothing what I wrote above discredits my ‘hidden energy’ theory. On the contrary, it confirms it. In fact, the nice thing about those illustrations above is that it associates the imaginary component of our wavefunction with travel in time, while the real component is associated with travel in space. That makes our theory quite complete: the ‘hidden’ energy is the energy that moves time forward. The only thing I need to do is to connect it to that idea of action expressing itself in time or in space, cf. what I wrote on my Deep Blue page: we can look at the dimension of Planck’s constant, or at the concept of action in general, in two very different ways—from two different perspectives, so to speak:
1. [Planck’s constant] = [action] = N∙m∙s = (N∙m)∙s = [energy]∙[time]
2. [Planck’s constant] = [action] = N∙m∙s = (N∙s)∙m = [momentum]∙[distance]
Hmm… I need to combine that with the idea of the quantum vacuum, i.e. the mathematical space that’s associated with time and distance becoming countable variables…. In any case. Next time. 🙂
Before I sign off, however, let’s quickly check if our a·e^(−i∙[t − x]/2) wavefunction solves the Schrödinger equation:
• ∂ψ/∂t = −a·e^(−i∙[t − x]/2)·(i/2)
• ∇²ψ = ∂²[a·e^(−i∙[t − x]/2)]/∂x² = ∂[a·e^(−i∙[t − x]/2)·(i/2)]/∂x = −a·e^(−i∙[t − x]/2)·(1/4)
So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation becomes:
−a·e^(−i∙[t − x]/2)·(i/2) = −i·(ħ/[2·(ħ/2)])·a·e^(−i∙[t − x]/2)·(1/4)
⇔ 1/2 = 1/4 !?
The damn 1/2 factor. Schrödinger wants it in his wave equation, but not in the wavefunction—apparently! So what if we take the m = E = p = ħ solution? We get:
• ∂ψ/∂t = −a·i·e^(−i∙[t − x])
• ∇²ψ = ∂²[a·e^(−i∙[t − x])]/∂x² = ∂[a·i·e^(−i∙[t − x])]/∂x = −a·e^(−i∙[t − x])
So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation now becomes:
−a·i·e^(−i∙[t − x]) = −i·(ħ/[2·ħ])·a·e^(−i∙[t − x])
⇔ 1 = 1/2 !?
We’re still in trouble! So… Was Schrödinger wrong after all? There’s no difficulty whatsoever with the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation:
• −a·e^(−i∙[t − x]/2)·(i/2) = −i·[ħ/(ħ/2)]·a·e^(−i∙[t − x]/2)·(1/4) ⇔ 1 = 1
• −a·i·e^(−i∙[t − x]) = −i·(ħ/ħ)·a·e^(−i∙[t − x]) ⇔ 1 = 1
What these equations might tell us is that we should measure mass, energy and momentum in terms of ħ (and not in terms of ħ/2) but that the fundamental uncertainty is ± ħ/2. That solves it all. So the magnitude of the uncertainty is ħ but it separates not 0 and ±1, but −ħ/2 and +ħ/2. Or, more generally, the following series:
…, −7ħ/2, −5ħ/2, −3ħ/2, −ħ/2, +ħ/2, +3ħ/2,+5ħ/2, +7ħ/2,…
Why are we not surprised? The series represent the energy values that a spin one-half particle can possibly have, and ordinary matter – i.e. all fermions – is composed of spin one-half particles.
To conclude this post, let’s see if we can get any indication on the energy concepts that Schrödinger’s revised wave equation implies. We’ll do so by just calculating the derivatives in the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation (i.e. the equation without the 1/2 factor). Let’s also not assume we’re measuring stuff in natural units, so our wavefunction is just what it is: a·e^(−i·[E·t − p∙x]/ħ). The derivatives now become:
• ∂ψ/∂t = −a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ)
• ∇²ψ = ∂²[a·e^(−i∙[E·t − p∙x]/ħ)]/∂x² = ∂[a·i·(p/ħ)·e^(−i∙[E·t − p∙x]/ħ)]/∂x = −a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ)
So the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation now becomes:
−a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ) = −i·(ħ/m)·a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ) ⇔ E = p²/m = m·v²
It all works like a charm. Note that we do not assume stuff like E = m = p here. It’s all quite general. Also note that the E = p²/m formula closely resembles the kinetic energy formula one often sees: K.E. = m·v²/2 = m·m·v²/(2m) = p²/(2m). We just don’t have the 1/2 factor in our E = p²/m formula, which is great—because we don’t want it! :-) Of course, if you’d add the 1/2 factor in Schrödinger’s equation again, you’d get it back in your energy formula, which would just be that old kinetic energy formula which gave us all these contradictions and ambiguities. 😦
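Here too you can verify the claim symbolically. The sketch below (mine, using the sign convention of the derivatives above) plugs a·e^(−i·[E·t − p·x]/ħ) into the revised equation and solves for E:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, E, p, m, hbar = sp.symbols('a E p m hbar', positive=True)

psi = a * sp.exp(-sp.I * (E * t - p * x) / hbar)
# Revised equation (no 1/2 factor): ∂ψ/∂t = i·(ħ/m)·∇²ψ
residual = sp.diff(psi, t) - sp.I * (hbar / m) * sp.diff(psi, x, 2)
print(sp.solve(sp.simplify(residual / psi), E))   # [p**2/m]
```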
Finally, and just to make sure: let me add that, when we wrote that E = m = p – like we did above – we mean their numerical values are the same. Their dimensions remain what they are, of course. Just to make sure you get that subtle point, we’ll do a quick dimensional analysis of that E = p²/m formula:
[E] = [p²/m] ⇔ N·m = N²·s²/kg = N²·s²/[N·s²/m] = N·m = joule (J)
So… Well… It’s all perfect. 🙂
Post scriptum: I revised my Deep Blue page after writing this post, and I think that a number of the ideas that I express above are presented more consistently and coherently there. In any case, the missing energy theory makes sense. Think of it: any oscillator involves both kinetic as well as potential energy, and they both add up to twice the average kinetic (or potential) energy. So why not here? When everything is said and done, our elementary wavefunction does describe an oscillator. 🙂
The Time-Energy Uncertainty Relation
John Baez
April 10, 2010
In quantum mechanics we have an uncertainty relation between position and momentum:
(Δq) (Δp) ≥ ħ/2
Now, as you probably know, time is to energy as position is to momentum, so it's natural to hope for a similar uncertainty relation between time and energy. Something like this:
(ΔT) (ΔE) ≥ ħ/2
There's an energy operator in quantum mechanics, usually called the Hamiltonian and written H. But the problem is, there's no "time operator" in quantum mechanics! This makes people argue a lot about the time-energy uncertainty relation - whether it exists, what it would mean if it did exist, and so on.
A while back on sci.physics.research, Matthew Donald wrote something interesting about this subject. I'm editing it a little bit here:
Most treatments of the time-energy uncertainty principle point out that you do have to be careful to consider the meaning of t. t isn't an operator in quantum mechanics.
Uncertainty relations are mathematical theorems as well as physical statements so if we begin with a proof we should end up with an exact definition of what we are trying to understand.
There are probably several forms in which the time-energy uncertainty relation can be proved. Here's one (for the full details, see Messiah's Quantum Mechanics Section VIII.13).
Let H be the (time-independent) Hamiltonian of some non-relativistic system. Let ψ be a wavefunction and let A be some other observable. Write
<A> = <ψ, A ψ>
for the expectation value of A in the state ψ, write sqrt for square root, and define
ΔA = sqrt(<ψ, (A - <A>)² ψ>).
ΔA is the standard deviation of the observable A in the state ψ.
Then, for all real numbers r, <ψ, (r (A - <A>) + i (H - <H>))(r (A - <A>) - i (H - <H>)) ψ> is non-negative.
So this quadratic (in r) cannot have two different real roots, and so, (cutting a long but standard story short)
2 (ΔA) (ΔH) ≥ |<[H,A]>| .
ΔH is the standard deviation of the energy E.
< [H,A] > = <ψ, [H,A] ψ>
is iħ times the time derivative at t = 0 of <ψ, A ψ>, as you can see if you note that the solution to the Schrödinger equation can be written in the form
U(t) ψ = exp(-itH/ħ) ψ
<[H, A]> = iħ d<A>/dt
Putting everything together, we have the time-energy uncertainty relation in the form
(ΔA / |d<A>/dt|) (ΔH) ≥ ħ/2.
Here the "uncertainty" in time is expressed as the average time taken, starting in state ψ, for the expectation of some arbitrary operator A to change by its standard deviation.
This is reasonable as a definition for time uncertainty, because it gives the shortest time scale on which we will be able to notice changes by using A in state ψ.
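This Robertson-type inequality is easy to check numerically in a finite-dimensional toy model. The following sketch (mine, not from Matthew Donald's post) builds random Hermitian matrices H and A, a random normalized state ψ, and verifies 2 (ΔA) (ΔH) ≥ |<[H,A]>|:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # dimension of the toy Hilbert space

def random_hermitian(n):
    # Build a random Hermitian matrix M = (X + X†)/2
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

def expectation(op, psi):
    # <psi, op psi>
    return np.vdot(psi, op @ psi)

def std_dev(op, psi):
    # ΔA = sqrt(<(A - <A>)^2>)
    mean = expectation(op, psi).real
    shifted = op - mean * np.eye(len(psi))
    return np.sqrt(expectation(shifted @ shifted, psi).real)

H = random_hermitian(n)   # plays the role of the Hamiltonian
A = random_hermitian(n)   # an arbitrary observable
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)  # normalize the state

lhs = 2 * std_dev(A, psi) * std_dev(H, psi)
rhs = abs(expectation(H @ A - A @ H, psi))
print(lhs, rhs, lhs >= rhs - 1e-12)  # the inequality 2 ΔA ΔH ≥ |<[H,A]>| holds
```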
Hey, that's way cool! For some reason I'd never thought of it that way. But here's something related, which is well-known:
Suppose you could find an observable T which is canonically conjugate to the Hamiltonian H:
[H,T] = iħ
Then by one of the formulas you wrote, we'd have
d<T>/dt = 1
so the observable T would function as a "clock" - it would increase at the rate of one second per second. In other words, we could use it as a "time" observable... which is why I called it T.
From your uncertainty relation we then have
(ΔT) (ΔH) ≥ ħ/2
the famous time-energy uncertainty relation that everyone keeps yearning for!
The problem is, for physically realistic Hamiltonians H one can prove there is no operator T with
[H,T] = iħ
In other words, there is no time observable!
The reason is this: by the Stone-von Neumann uniqueness theorem, any pair of operators satisfying the canonical commutation relations [H,T] = i can only be a slightly disguised version of the familiar operators p and q. These operators p and q are unbounded below - i.e., their spectra extend all the way down to negative infinity. But a physically realistic Hamiltonian must be bounded below!
(Here I am glossing over some mathematical nuances: if you read the precise statement of the Stone-von Neumann theorem, you'll see how to fill in these details.)
Crudely speaking, this theorem says that it's impossible to construct a clock that works perfectly no matter what its state is. That's not surprising - but it's sort of surprising that you can prove it, and it's sort of interesting to see what assumptions you need to prove it.
But what you're saying is: "So what? Let's use any operator A as a clock - we can't make d<A>/dt = 1 in all states, but we can make it close to 1, or even equal to 1, in the state we're interested in! Then we can state the energy-time uncertainty relation even without having a time observable - we just say (ΔA / |d<A>/dt|) (ΔH) ≥ ħ/2 in that state."
Thanks - you taught me something cool about time, which is one of my favorite subjects, right up there with space.
Much later, Dmitry A. Arbatsky wrote:
You should mention the paper where the mathematically rigorous formulation of the time-energy uncertainty relation was first given. (It was given there even in nice finite form, not only infinitesimal. In 2005 it was generalized. It turned out that relations for energy and time in Mandelshtam-Tamm formulation, on the one hand, and for coordinate and momentum, on the other hand, are particular consequences of a more general approach.)
With best wishes,
Dmitry A. Arbatsky
© 2010 John Baez |
The Feynman lectures are universally admired, it seems, but also a half-century old. Taking them as a source for self-study, what compensation for their age, if any, should today's reader undertake? I'm interested both in pointers to particular topics where the physics itself is out-of-date, or topics where the pedagogical approach now admits attestable improvements.
Those are still among my favorite books. – Mike Dunlavey Jun 2 '12 at 0:24
Free online copy: – Qmechanic Jan 16 '14 at 3:14
2 Answers
The Feynman Lectures need only a little amending, but it's a relatively small amount compared to any other textbook. The great advantage of the Feynman Lectures is that everything is worked out from scratch Feynman's way, so that it is taught with the maximum insight, something that you can only do after you sit down and redo the old calculations from scratch. This makes them very interesting, because you learn from Feynman how the discovering gets done, the type of reasoning, the physical intuition, and so on.
The original presentation also makes it that Feynman says all sorts of things in a slightly different way than other books. This is good to test your understanding, because if you only know something in a half-assed way, Feynman sounds wrong. I remember that when I first read it a million years ago, a large fraction of the things he said sounded completely wrong. This original presentation is a very important component: it teaches you what originality sounds like, and knowing how to be original is the most important thing.
I think Vol. I is pretty much OK as an intro, although it should be supplemented at least with this stuff:
1. Computational integration: Feynman does something marvellous at the start of Volume I (something unheard of in 1964), he describes how to Euler time-step a differential equation forward in time. Nowadays, it is a simple thing to numerically integrate any mechanical problem, and experience with numerical integration is essential for students. The integration removes the student's paralysis: when you are staring at an equation and don't know what to do. If you have a computer, you know exactly what to do! Integrating reveals many interesting qualitative things, and shows you just how soon the analytical knowledge painstakingly acquired over 4 centuries craps out. For example, even if you didn't know it, you can see the KAM stability appears spontaneously in self-gravitating clusters at a surprisingly large number of particles. You might expect chaotic motion until you reach 2 particles, which then orbit in an ellipse. But clusters with random masses and velocities of some hundreds of particles eject out particles like crazy, until they get to one or two dozen particles, and then they settle down into a mess of orbits, but this mess must be integrable, because nothing else is ejected out anymore! You discover many things like this from piddling around with particle simulations, and this is something which is missing from Volume I, since computers were not available at the time it was written. It's not completely missing, however, and it's much worse elsewhere.
2. The Kepler problem: Feynman has an interesting point of view regarding this which is published in the "Lost Lecture" book and audio-book. But I think the standard methods are better here, because the 17th century things Feynman redoes are too specific to this one problem. This can be supplemented in any book on analytical mechanics.
3. Thermodynamics: The section on thermodynamics does everything through statistical mechanics and intuition. This begins with the density of the atmosphere, which motivates the Boltzmann distribution, which is then used to derive all sorts of things, culminating in the Clausius-Clayperon equation. This is a great boon when thinking about atoms, but it doesn't teach you the classical thermodynamics, which is really simple starting from modern stat-mech. The position is that the Boltzmann distribution is all you need to know, and that's a little backwards from my perspective. The maximum entropy arguments are better--- they motivate the Boltzmann distribution. The heat-engine he uses is based on rubber-bands too, and yet there is no discussion of why rubber bands are entropic, or of free-energies in the rubber band, or the dependence of stiffness on temperature.
4. Monte-Carlo simulation: This is essential, but it obviously requires computers. With Monte-Carlo you can make snapshots of classical statistical systems quickly on a computer and build up intuition. You can make simulations of liquids, and see how the atoms knock around classically. You can simulate rubber-band polymers, and see the stiffness dependence on temperature. All these things are clearly there in Feynman's head, but without a computer, it's hard to transmit it into any of the students' heads.
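As a concrete illustration of point 4 (a toy example of my own, not something from the Lectures), here is a Metropolis simulation of a one-dimensional freely-jointed chain pulled by a constant force, the simplest entropic "rubber band". Running it shows the extension shrinking as the temperature rises, i.e. the stiffness growing with T, which is exactly the effect mentioned in points 3 and 4:

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_extension(T, n_links=200, force=0.5, a=1.0, n_steps=20000):
    """Metropolis Monte-Carlo for a 1D freely-jointed chain (a toy 'rubber band').

    Each link points up (+1) or down (-1); a constant force couples to the
    end-to-end extension x = a * sum(s), so the energy is E = -force * a * sum(s).
    Returns the average extension at temperature T (k_B = 1).
    """
    s = rng.choice([-1, 1], size=n_links)
    total, samples = 0.0, 0
    for step in range(n_steps):
        i = rng.integers(n_links)
        dE = 2 * force * a * s[i]          # energy change if link i is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                   # accept the flip
        if step >= n_steps // 2:           # measure after equilibration
            total += a * s.sum()
            samples += 1
    return total / samples

for T in [0.5, 1.0, 2.0, 4.0]:
    x = chain_extension(T)
    print(f"T = {T:>3}: <x> = {x:7.1f}  (exact: {200 * np.tanh(0.5 / T):7.1f})")
```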
For Volume II, the most serious problem is that the foundations are off. Feynman said he wanted to redo the classical textbook point of view on E&M, but he wasn't sure how to do it. The Feynman Lectures were written at a time just before modern gauge theory took off, and while they emphasize the vector potential a lot compared to other treatments of the time, they don't make the vector potential the main object. Feynman wanted to redo Volume II to make it completely vector-potential-centered, but he didn't get to do it. Somebody else did a vector-potential based discussion of E&M based on this recommendation, but the results were not so great.
The major things I don't like in Vol. II:
1. The derivation of the index of refraction is done by a complicated rescattering calculation which is based on plum-pudding-style electron oscillators. This is essentially just the forward-phase index-of-refraction argument Feynman gives to motivate unitarity in the 1963 ghost paper in Acta Physica Polonica. It is not so interesting or useful in my opinion in Vol. II, but it is the most involved calculation in the series.
2. No special functionology: While the subject is covered with a layer of 19th-century mildew, it is useful to know some special functions, especially Bessel functions and spherical harmonics. Feynman always chooses ultra special forms which give elementary functions, and he knows all the cases which are elementary, so he gets a lot of mileage out of this, but it's not general enough.
3. The fluid section is a little thin--- you will learn how the basic equations work, but no major results. The treatment of fluid flow could have been supplemented with He4 flows, where the potential flow description is correct (it is clear that this is Feynman's motivation for the strange treatment of the subject, but this isn't explicit).
4. Numerical methods in field simulation: Here if one wants to write an introductory textbook, one needs to be completely original, because the numerical methods people use today are not so good for field equations of any sort.
Vol. III is extremely good because it is so brief. The introduction to quantum mechanics there gets you to a good intuitive understanding quickly, and this is the goal. It probably could use the following:
1. A discussion of diffusion, and the relation between Schrödinger operators and diffusion operators: This is obvious from the path integral, but it was also clear to Schrödinger. It also allows you to quickly motivate the exact solutions to Schrodinger's equation, like the $1/r$ potential, something which Feynman just gives you without motivation. A proper motivation can be given by using SUSY QM (without calling it that, just a continued stochastic equation) and trying out different ground state ansatzes.
2. Galilean invariance of the Schrödinger equation: This part is not done in any book, I think only because Dirac omitted it from his. It is essential to know how to boost wavefunctions. Since Feynman derives the Schrödinger equation from a tight-binding model (a lattice approximation), the galilean invariance is not obvious at all.
Since the lectures are introductory, everything in there just becomes second nature, so it doesn't matter that they are old. The old books should just be easier, because the old stuff is already floating in the air. If you find something in the Feynman Lectures which isn't completely obvious, you should study it until it is obvious--- there's no barrier, the things are self-contained.
@RonMaimon I am reading the book myself and find it to be the best piece suitable at my level. But do his arguments still hold in light of modern discoveries? Especially the details of "atmospheric electricity" part? – Satwik Pasani Jul 13 '13 at 17:28
@SatwikPasani: There is nothing that I remember from that chapter that is incorrect, it was mostly qualitative. Perhaps the mechanism of charge separation is understood better today, I don't know. The only point of this thing is to explain why there is a voltage as you go up in the air, and this is due to lightning charging up the ground, and it's a fact, it's still true. I don't follow the atmospheric literature, unfortunately, I don't know if people know more about charge separation through air-droplet rubbing, this was the open mystery he talked about in that chapter. – Ron Maimon Jul 13 '13 at 20:41
The potential-based approach to EM you mention may be "Collective Electrodynamics" by Feynman's student Carver Mead: . PS: I recall you writing favorably of Steven Frautschi's S-Matrix book. I saw him recently; in retirement, now 80, he has re-invented himself as a teaching assistant, and won Caltech's Feynman Prize for Excellence in Teaching this year. When I mentioned his S-Matrix work being cited here, he ducked his head and said, "Well, that was a long time ago..." – Art Brown Oct 24 '14 at 0:37
@ArtBrown: A long time, a long time, but classic underappreciate work. Thanks for the info, it might have been Mead, I honestly don't remember, I found it in a bookstore and flipped through it, didn't like it that much, but noticed it was based on what Feynman intended. – Ron Maimon Oct 25 '14 at 20:19
@ArtBrown I've seen the book on and after looking at the contents, I'm still scratching my head over the whole point of it: Physicists use QED, engineers use CED and both models are brilliantly served by main stream text books. – Physiks lover Oct 27 '14 at 16:20
I'm not sure what you mean by saying: "the physics is out-of-date," because in some sense Newtonian mechanics is out-of-date. But we know that it is an effective theory (the low-speed limit of relativity) and is important to study and understand since it describes everyday-life mechanics accurately.
Feynman lectures are the classic 101/102 physics resource. So, just read and learn. And Feynman is a master in his pedagogical approach (remember the challenger case?)
The only thing is that you're not going to learn the techniques of Quantum field theory or other advanced and more recent research-level topics from Feynman lectures.
>I'm not sure what you mean by saying: "the physics is out-of-date." For the sake of simplicity, let's say I mean it in the sociological sense that an idealized, smart, conscientious and well-informed professor teaching out the Feynman lectures would feel compelled to perform an intervention, which might consist of describing an experiment that postdates the book, a new simplification of some conceptual explanation, a crucial analogy unnoticed 50 years ago, whatever. – David Feldman Jun 2 '12 at 1:20
I can't imagine any text from 1910 that wouldn't have needed some emendation in 1960...and I can't imagine that the subject has progressed any more slowly during the past 50 years than during the 50 years before that. I understand that most of the recent progress must be omitted as too advanced for beginners, but all of it? – David Feldman Jun 2 '12 at 1:23
I see. Maybe, in addition to Feynman lectures, using a modern physics textbook could be a good idea. I think some new textbooks have special websites for extended and interactive learning. Also, MIT and other great universities provide free video lectures given by well-known physicists. – stupidity Jun 2 '12 at 1:32
@DavidFeldman - I think it's fair to say that physics advanced a little more in 1910-1960 than it has done since. In fact there hasn't really been much physics since QCD in the mid 60s – Martin Beckett Jun 2 '12 at 4:00
@MartinBeckett: I don't think it's true at all--- the great advances of the 1970s-1990s, strings, holography, topological theories, conformal theories, renormalization treatment of phase transitions, disorder physics, quantum computing, numerics are all fundamental, but what has happened is that people refused to push the curriculum down anymore, so that elementary EM and QM is in high school, and undergraduates can do reasonable stuff right away. The internet allows you to do this, since a self-motivated person can learn the material without relying on pedagogy, which is always substandard. – Ron Maimon Jun 2 '12 at 5:55
|
d72f9ff1252cc357 | Properties of water
Water (H2O)
(Illustrations: the basic geometric structure of the water molecule; ball-and-stick and space-filling models; a drop of water falling towards water in a glass.)
IUPAC name: water, oxidane
Other names: hydrogen oxide, dihydrogen monoxide (DHMO), hydrogen monoxide, dihydrogen oxide, hydrogen hydroxide (HH or HOH), hydric acid, hydrohydroxic acid, hydroxic acid, hydrol,[1] μ-oxido dihydrogen
CAS number: 7732-18-5
ChEBI: CHEBI:15377
ChEMBL: ChEMBL1098659
ChemSpider: 937
PubChem: 962
RTECS number: ZC0110000
Molar mass: 18.01528(33) g/mol
Appearance: white solid or almost colorless, transparent, with a slight hint of blue, crystalline solid or liquid[2]
Odor: none
Density: liquid 999.9720 kg/m3 ≈ 1 tonne/m3 = 1 kg/L = 1 g/cm3 ≈ 62.4 lb/ft3 (maximum, at ~4 °C); solid 917 kg/m3 = 0.917 g/cm3 ≈ 57.2 lb/ft3
Melting point: 0.00 °C (32.00 °F; 273.15 K)[a]
Boiling point: 99.98 °C (211.96 °F; 373.13 K)[3][a]
Solubility: poorly soluble in haloalkanes, aliphatic and aromatic hydrocarbons, ethers;[4] improved solubility in carboxylates, alcohols, ketones, amines; miscible with methanol, ethanol, isopropanol, acetone, glycerol
Vapor pressure: 3.1690 kPa or 0.031276 atm[5]
Acidity (pKa): 13.995[6][b]
Basicity (pKb): 13.995
Thermal conductivity: 0.6065 W/(m·K)[8]
Refractive index: 1.3330 (20 °C)[9]
Viscosity: 0.890 cP[10]
Dipole moment: 1.8546 D[11]
Molar heat capacity: 75.375 ± 0.05 J/(mol·K)[12]
Standard molar entropy: 69.95 ± 0.03 J/(mol·K)[12]
Standard enthalpy of formation: −285.83 ± 0.040 kJ/mol[4][12]
Gibbs free energy of formation: −237.24 kJ/mol[4]
Main hazards: drowning, water intoxication, avalanche (as snow); see also the dihydrogen monoxide hoax
Flash point: non-flammable
Related compounds: hydrogen sulfide, hydrogen selenide, hydrogen telluride, hydrogen polonide, hydrogen peroxide
Related compounds: water vapor, heavy water
Water (H2O) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, nearly colorless with a hint of blue. This simplest hydrogen chalcogenide is by far the most studied chemical compound and is described as the "universal solvent" for its ability to dissolve many substances.[13][14] This allows it to be the "solvent of life".[15] It is the only common substance to exist as a solid, liquid, and gas in nature.[16]
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to separate ions in salts and strongly bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning it is both an acid and a base: it produces H+ and OH− ions by self-ionization. This self-ionization regulates the concentrations of H+ and OH− ions in water.
Due to water being a very good solvent, it is rarely pure and some of the properties of impure water can vary from those of the pure substance. However, there are also many compounds that are essentially, if not completely, insoluble in water, such as fats, oils and other non-polar substances.
The accepted IUPAC name of water is oxidane or simply water,[17] or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature.[18] These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran.[19][20]
The simplest systematic name of water is hydrogen oxide. This is analogous to related compounds such as hydrogen peroxide, hydrogen sulfide, and deuterium oxide (heavy water).
The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC nomenclature.[21]
In keeping with the basic rules of chemical nomenclature, water would have a systematic name of dihydrogen monoxide,[22] but this is not among the names published by the International Union of Pure and Applied Chemistry.[17] It is a rarely used name of water, and mostly used in various hoaxes or spoofs that call for this "lethal chemical" to be banned, such as in the dihydrogen monoxide hoax.
Other systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide, using acid and base names.[c] None of these exotic names are used widely.
Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom.[23] Water is a tasteless, odorless liquid at ambient temperature and pressure, and appears colorless in small quantities, although it has its own intrinsic very light blue hue.[24][2] Ice also appears colorless, and water vapor is essentially invisible as a gas.
Water is primarily a liquid under standard conditions, which is not predicted from its relationship to other analogous hydrides of the oxygen family in the periodic table, which are gases such as hydrogen sulfide. The elements surrounding oxygen in the periodic table, nitrogen, fluorine, phosphorus, sulfur and chlorine, all combine with hydrogen to produce gases under standard conditions. The reason that water forms a liquid is that oxygen is more electronegative than all of these elements with the exception of fluorine. Oxygen attracts electrons much more strongly than hydrogen, resulting in a net positive charge on the hydrogen atoms, and a net negative charge on the oxygen atom. These atomic charges give each water molecule a net dipole moment. Electrical attraction between water molecules due to this dipole pulls individual molecules closer together, making it more difficult to separate the molecules and therefore raising the boiling point. This attraction is known as hydrogen bonding.
The molecules of water are constantly moving in relation to each other, and the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds (2×10−13 seconds).[25] However, these bonds are strong enough to create many of the peculiar properties of water, some of which make it integral to life.
Water can be described as a polar liquid that slightly dissociates, by disproportionation or self-ionization, into a hydronium ion and a hydroxide ion:
2 H2O ⇌ H3O+ + OH−
The equilibrium constant for this dissociation is commonly symbolized as Kw and has a value of about 10−14 at 25 °C; its value varies with temperature.
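As a quick consistency check, here is a minimal Python sketch; it assumes nothing beyond the Kw value quoted above and equal H3O+ and OH− concentrations in pure water:

```python
import math

# Ion product of water at 25 degC (value quoted above)
Kw = 1.0e-14

# In pure water, self-ionization produces equal amounts of H3O+ and OH-,
# so [H3O+] = [OH-] = sqrt(Kw).
h3o = math.sqrt(Kw)            # mol/L
pH = -math.log10(h3o)

print(f"[H3O+] = {h3o:.1e} mol/L, pH = {pH:.1f}")   # ~1.0e-07 mol/L, pH 7.0
```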
Water, ice, and vapor
Like many substances, water can take numerous forms, which are broadly categorized by phase of matter. The liquid phase is the most common among water's phases (within the Earth's atmosphere and surface) and is the form that is generally denoted by the word "water". The solid phase of water is known as ice and commonly takes the structure of hard, amalgamated crystals, such as ice cubes, or loosely accumulated granular crystals, like snow. For a list of the many different crystalline and amorphous forms of solid H2O, see the article ice. The gaseous phase of water is known as water vapor (or steam), in which water takes the form of a transparent cloud. (Visible steam and clouds are, in fact, water in the liquid form as minute droplets suspended in the air.)
The fourth state of water, that of a supercritical fluid, is much less common than the other three and only rarely occurs in nature, in extremely hostile conditions. When water achieves a specific critical temperature and a specific critical pressure (647 K and 22.064 MPa), the liquid and gas phases merge to one homogeneous fluid phase, with properties of both gas and liquid. A likely example of naturally occurring supercritical water is in the hottest parts of deep water hydrothermal vents, in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located. This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters).[26]
Heat capacity and heats of vaporization and fusion
Heat of vaporization of water from melting to critical temperature
Water has a very high specific heat capacity of 4.1814 J/(g·K) at 25 °C – the second highest among all the heteroatomic species (after ammonia) – as well as a high heat of vaporization (40.65 kJ/mol or 2257 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These two unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. According to Josh Willis, of NASA's Jet Propulsion Laboratory, the oceans can absorb one thousand times more heat than the atmosphere without changing their temperature much and are absorbing 80 to 90% of the heat from global warming.[27]
The specific enthalpy of fusion (more commonly known as latent heat) of water is 333.55 kJ/kg at 0 °C: the same amount of energy is required to melt ice as to warm ice from −160 °C up to its melting point or to heat the same amount of water by about 80 °C. Of common substances, only that of ammonia is higher. This property confers resistance to melting on the ice of glaciers and drift ice. Before and since the advent of mechanical refrigeration, ice was and still is in common use for retarding food spoilage.
The specific heat capacity of ice at −10 °C is 2.03 J/(g·K)[28] and the heat capacity of steam at 100 °C is 2.08 J/(g·K).[29]
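Putting the values quoted in this section together, a small illustrative calculation follows; it treats the specific heats as constant over each temperature range, which is only approximately true:

```python
# Energy to take 1 kg of ice at -10 degC to steam at 100 degC,
# using the values quoted in this section.
m = 1.0                      # kg
c_ice = 2.03e3               # J/(kg*K), specific heat of ice at -10 degC
L_fus = 333.55e3             # J/kg, latent heat of fusion
c_water = 4.1814e3           # J/(kg*K), specific heat of liquid water (25 degC value)
L_vap = 2257e3               # J/kg, heat of vaporization at 100 degC

q = (m * c_ice * 10          # warm the ice from -10 to 0 degC
     + m * L_fus             # melt the ice
     + m * c_water * 100     # warm the water from 0 to 100 degC
     + m * L_vap)            # vaporize at 100 degC

print(f"total energy: {q/1e6:.2f} MJ")   # roughly 3.03 MJ per kilogram
```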
Density of water and ice
Density of ice and water as a function of temperature
The density of water is about 1 gram per cubic centimetre (62 lb/cu ft): this relationship was originally used to define the gram.[30] The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C (39.16 °F) and then decreases.[31] This unusual negative thermal expansion below 4 °C (39 °F) is also observed in molten silica.[32] Regular, hexagonal ice is also less dense than liquid water—upon freezing, the density of water decreases by about 9%.[33]
These effects are due to the reduction of thermal motion with cooling, which allows water molecules to form more hydrogen bonds that prevent the molecules from coming close to each other.[31] While below 4 °C the breakage of hydrogen bonds due to heating allows water molecules to pack closer despite the increase in the thermal motion (which tends to expand a liquid), above 4 °C water expands as the temperature increases.[31] Water near the boiling point is about 4% less dense than water at 4 °C (39 °F).[33][d]
Other substances that expand on freezing are acetic acid, silicon, gallium,[34] germanium, bismuth, plutonium and also chemical compounds that form spacious crystal lattices with tetrahedral coordination.
Under increasing pressure, ice undergoes a number of transitions to other allotropic forms with higher density than liquid water, such as ice II, ice III, high-density amorphous ice (HDA), and very-high-density amorphous ice (VHDA).[35][36]
Temperature distribution in a lake in summer and winter
The unusual density curve and lower density of ice than of water is vital to life—if water was most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, the lake could freeze from the bottom up, and all life in them would be killed.[33] Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer.[33] The layer of ice that floats on top insulates the water below.[37] Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant (see diagram).[33]
Density of saltwater and ice
WOA surface density
The density of salt water depends on the dissolved salt content as well as the temperature. Ice still floats in the oceans, otherwise they would freeze from the bottom up. However, the salt content of oceans lowers the freezing point by about 1.9 °C[38] and lowers the temperature of the density maximum of water to the freezing point. This is why, in ocean water, the downward convection of colder water is not blocked by an expansion of water as it becomes colder near the freezing point. The oceans' cold water near the freezing point continues to sink. So creatures that live at the bottom of cold oceans like the Arctic Ocean generally live in water 4 °C colder than at the bottom of frozen-over fresh water lakes and rivers.
As the surface of salt water begins to freeze (at −1.9 °C[38] for normal salinity seawater, 3.5%) the ice that forms is essentially salt-free, with about the same density as freshwater ice. This ice floats on the surface, and the salt that is "frozen out" adds to the salinity and density of the sea water just below it, in a process known as brine rejection. This denser salt water sinks by convection and the replacing seawater is subject to the same process. This produces essentially freshwater ice at −1.9 °C[38] on the surface. The increased density of the sea water beneath the forming ice causes it to sink towards the bottom. On a large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to transport such water away from the Poles, leading to a global system of currents called the thermohaline circulation.
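For a rough sense of where the ~1.9 °C figure comes from, here is an idealized freezing-point-depression estimate; the cryoscopic constant, the assumption that all dissolved salt is NaCl, and full dissociation are simplifications not taken from the text, so the result only approximates the measured value:

```python
# Ideal-solution estimate of seawater freezing-point depression.
# Real seawater is non-ideal and contains salts other than NaCl,
# so the observed depression (~1.9 degC) is somewhat smaller.
Kf = 1.853          # K*kg/mol, cryoscopic constant of water
salinity = 35.0     # g of salt per kg of water (typical seawater)
M_NaCl = 58.44      # g/mol
i = 2               # van 't Hoff factor for fully dissociated NaCl

molality = salinity / M_NaCl          # mol of salt per kg of water
dTf = i * Kf * molality
print(f"estimated depression: {dTf:.1f} K")   # ~2.2 K vs ~1.9 K observed
```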
Miscibility and condensation
Red line shows saturation
Main article: Humidity
Water is miscible with many liquids, for example ethanol in all proportions, forming a single homogeneous liquid. On the other hand, water and most oils are immiscible, usually forming layers with density increasing from top to bottom. This can be predicted by comparing their polarities. Water, being a relatively polar compound, will tend to be miscible with liquids of high polarity such as ethanol and acetone, whereas compounds with low polarity, such as hydrocarbons, will tend to be immiscible and poorly soluble.
As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low compared with total atmospheric pressure. For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 18 °C water will start to condense, defining the dew point, and creating fog or dew. The reverse process accounts for the fog burning off in the morning. If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change, and then condenses out as minute water droplets, commonly referred to as steam.
A gas in this context is referred to as saturated or 100% relative humidity, when the vapor pressure of water in the air is at the equilibrium with vapor pressure due to (liquid) water; water (or ice, if cool enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of water vapor in air is small, relative humidity, the ratio of the partial pressure due to the water vapor to the saturated partial vapor pressure, is much more useful. Water vapor pressure above 100% relative humidity is called super-saturated and can occur if air is rapidly cooled, for example, by rising suddenly in an updraft.[e]
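A common way to estimate the dew point in the example above is the Magnus approximation; the coefficients used here are conventional textbook values, not taken from the article or its sources:

```python
import math

# Magnus-type approximation for the saturation vapor pressure of water.
a, b = 17.27, 237.7    # conventional Magnus coefficients (dimensionless, degC)

def e_sat(T_c):
    """Approximate saturation vapor pressure in Pa at temperature T_c (degC)."""
    return 610.78 * math.exp(a * T_c / (b + T_c))

P_atm = 101_325.0
e = 0.02 * P_atm                 # vapor partial pressure: 2% of 1 atm
rh = e / e_sat(25.0)             # relative humidity at 25 degC

# Dew point: invert the Magnus formula for the temperature where e_sat equals e.
gamma = math.log(e / 610.78)
T_dew = b * gamma / (a - gamma)
print(f"RH at 25 degC ~ {rh:.0%}, dew point ~ {T_dew:.1f} degC")  # ~64%, ~18 degC
```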
Vapor pressure
Vapor pressure diagrams of water
Compressibility
The compressibility of water is a function of pressure and temperature. At 0 °C, at the limit of zero pressure, the compressibility is 5.1×10−10 Pa−1. At the zero-pressure limit, the compressibility reaches a minimum of 4.4×10−10 Pa−1 around 45 °C before increasing again with increasing temperature. As the pressure is increased, the compressibility decreases, being 3.9×10−10 Pa−1 at 0 °C and 100 megapascals (1,000 bar).[39]
The bulk modulus of water is about 2.2 GPa.[40] The low compressibility of non-gases, and of water in particular, leads to their often being assumed as incompressible. The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume.[40]
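The 1.8% figure follows directly from the quoted bulk modulus, as this small sketch shows (a linear, small-strain approximation):

```python
# Fractional volume change of water at ~4 km ocean depth,
# estimated from the bulk modulus quoted above.
K = 2.2e9            # Pa, bulk modulus of water
dP = 40e6            # Pa, pressure increase at about 4 km depth

dV_over_V = dP / K
print(f"volume decrease: {dV_over_V:.1%}")   # ~1.8%
```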
Triple point
The various triple points of water
Phases in stable equilibrium Pressure Temperature
liquid water, ice Ih, and water vapor 611.657 Pa[41] 273.16 K (0.01 °C)
liquid water, ice Ih, and ice III 209.9 MPa 251 K (−22 °C)
liquid water, ice III, and ice V 350.1 MPa −17.0 °C
liquid water, ice V, and ice VI 632.4 MPa 0.16 °C
ice Ih, Ice II, and ice III 213 MPa −35 °C
ice II, ice III, and ice V 344 MPa −24 °C
ice II, ice V, and ice VI 626 MPa −70 °C
The temperature and pressure at which solid, liquid, and gaseous water coexist in equilibrium is called the triple point of water. This point is used to define the units of temperature (the kelvin, the SI unit of thermodynamic temperature and, indirectly, the degree Celsius and even the degree Fahrenheit).
As a consequence, water's triple point temperature, as measured in these units, is a prescribed value rather than a measured quantity.
Phase diagram of water
This pressure is quite low, about 1/166 of the normal sea level barometric pressure of 101,325 Pa. The atmospheric surface pressure on planet Mars is 610.5 Pa, which is remarkably close to the triple point pressure. The altitude of this surface pressure was used to define zero-elevation or "sea level" on that planet.[42]
Although it is commonly named as "the triple point of water", the stable combination of liquid water, ice I, and water vapor is but one of several triple points on the phase diagram of water. Gustav Heinrich Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th century. Kamb and others documented further triple points in the 1960s.[43][44][45]
Melting point
The melting point of ice is 0 °C (32 °F; 273 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C; −44 °F).[46] The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C (0.0131 °F)/atm[f] or about 0.5 °C (0.90 °F)/70 atm[g][47] as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its allotropes (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure, i.e., reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (triple point of Ice VII[48]).
Electrical properties
Electrical conductivity
Pure water containing no exogenous ions is an excellent insulator, but not even "deionized" water is completely free of ions. Water undergoes auto-ionization in the liquid state, when two water molecules form one hydroxide anion (OH−) and one hydronium cation (H3O+).
Because water is such a good solvent, it almost always has some solute dissolved in it, often a salt. If water has even a tiny amount of such an impurity, then it can conduct electricity far more readily.
It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ·m) at 25 °C.[49] This figure agrees well with what is typically seen on reverse osmosis, ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m.[citation needed]
In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 µS/cm at 25.00 °C.[49] Water can also be electrolyzed into oxygen and hydrogen gases but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor).[50] Ice was previously thought to have a small but measurable conductivity of 1×10−10 S/cm, but this conductivity is now thought to be almost entirely from surface defects, and without those, ice is an insulator with an immeasurably small conductivity.[31]
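The quoted resistivity and conductivity of ultra-pure water are consistent with each other, as a quick unit conversion shows:

```python
# The quoted resistivity (18.2 MOhm*cm) and conductivity (~0.055 uS/cm)
# of ultra-pure water are reciprocals of each other.
rho = 18.2e6 * 1e-2                 # 18.2 MOhm*cm expressed in Ohm*m (1.82e5 Ohm*m)
sigma_S_per_m = 1.0 / rho           # conductivity in S/m
sigma_uS_per_cm = sigma_S_per_m * 1e6 / 1e2   # convert S/m to uS/cm

print(f"{sigma_uS_per_cm:.4f} uS/cm")   # ~0.0549 uS/cm, close to the 0.05501 uS/cm quoted
```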
Polarity, hydrogen bonding and intermolecular structure
A diagram showing the partial charges on the atoms in a water molecule
An important feature of water is its polar nature. The molecule has a bent geometry, with the two hydrogen atoms bonded to the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas phase bend angle is 104.48°,[51] which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other.[52]
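As a small geometric illustration, the H–H separation can be computed from the bond angle quoted above together with an assumed gas-phase O–H bond length of about 0.958 Å (the bond length is not given in the text and is an assumption here):

```python
import math

# Distance between the two hydrogen atoms of a water molecule,
# given the H-O-H angle from the text and an assumed O-H bond length.
r_OH = 0.958                     # angstrom, assumed gas-phase O-H bond length
angle = math.radians(104.48)     # H-O-H bond angle quoted in the text

d_HH = 2.0 * r_OH * math.sin(angle / 2.0)
print(f"H-H distance ~ {d_HH:.3f} angstrom")   # ~1.51 angstrom
```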
Another effect of the electronic structure is that water is a polar molecule. Due to the difference in electronegativity, there is a bond dipole moment pointing from each H to the O, making the oxygen partially negative and each hydrogen partially positive. In addition, the lone pairs of electrons on the O are in the direction opposite to the hydrogen atoms. This results in a large molecular dipole, pointing from a positive region between the two hydrogen atoms to the negative region of the oxygen atom. The charge differences cause water molecules to be attracted to each other (the relatively positive areas being attracted to the relatively negative areas) and to other polar molecules. This attraction contributes to hydrogen bonding, and explains many of the properties of water, such as solvent action.[53]
Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the water molecule itself, it is responsible for a number of water's physical properties. These properties include its relatively high melting and boiling point temperatures: more energy is required to break the hydrogen bonds between water molecules. In contrast, hydrogen sulfide (H2S) has much weaker hydrogen bonding due to sulfur's lower electronegativity. H2S is a gas at room temperature, in spite of hydrogen sulfide having nearly twice the molar mass of water. The extra bonding between water molecules also gives liquid water a large specific heat capacity. This high heat capacity makes water a good heat storage medium (coolant) and heat shield.
Proposed structures
Model of hydrogen bonds (1) between molecules of water
A single water molecule can participate in a maximum of four hydrogen bonds because it can accept two bonds using the lone pairs on oxygen and donate two hydrogen atoms. Other molecules like hydrogen fluoride, ammonia and methanol can also form hydrogen bonds. However, they do not show anomalous thermodynamic, kinetic or structural properties like those observed in water because none of them can form four hydrogen bonds: either they cannot donate or accept hydrogen atoms, or there are steric effects in bulky residues. In water, intermolecular tetrahedral structures form due to the four hydrogen bonds, thereby forming an open structure and a three-dimensional bonding network, resulting in the anomalous decrease in density when cooled below 4 °C. This repeated, constantly reorganizing unit defines a three-dimensional network extending throughout the liquid. This view is based upon neutron scattering studies and computer simulations, and it makes sense in the light of the unambiguously tetrahedral arrangement of water molecules in ice structures.
However, there is an alternative theory for the structure of water. In 2004, a controversial paper from Stockholm University suggested that water molecules in liquid form typically bind not to four but to only two others; thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms. Water, the team suggests, is a muddle of the two proposed structures. They say that it is a soup flecked with "icebergs" each comprising 100 or so loosely connected molecules that are relatively open and hydrogen bonded. The soup is made of the string structure and the icebergs of the tetrahedral structure.[54]
Cohesion and adhesion
Dew drops adhering to a spider web
Water molecules stay close to each other (cohesion), due to the collective action of hydrogen bonds between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed with different water molecules; but at any given time in a sample of liquid water, a large portion of the molecules are held together by such bonds.[55]
Water also has high adhesion properties because of its polar nature. On extremely clean/smooth glass the water may form a thin film because the molecular forces between glass and water molecules (adhesive forces) are stronger than the cohesive forces. In biological cells and organelles, water is in contact with membrane and protein surfaces that are hydrophilic; that is, surfaces that have a strong attraction to water. Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing substantial work against these forces, called hydration forces. These forces are very large but decrease rapidly over a nanometer or less.[56] They are important in biology, particularly when cells are dehydrated by exposure to dry atmospheres or to extracellular freezing.[57]
Surface tension
This paper clip is under the water level, which has risen gently and smoothly. Surface tension prevents the clip from submerging and the water from overflowing the glass edges.
Temperature dependence of the surface tension of pure water
Water has a high surface tension of 71.99 mN/m at 25 °C,[58] caused by the strong cohesion between water molecules, the highest of the common non-ionic, non-metallic liquids. This can be seen when small quantities of water are placed onto a sorption-free (non-adsorbent and non-absorbent) surface, such as polyethylene or Teflon, and the water stays together as drops. Just as significantly, air trapped in surface disturbances forms bubbles, which sometimes last long enough to transfer gas molecules to the water.[citation needed]
Another surface tension effect is capillary waves, which are the surface ripples that form around the impacts of drops on water surfaces, and sometimes occur with strong subsurface currents flowing to the water surface. The apparent elasticity caused by surface tension drives the waves. Additionally, the surface tension of water allows certain insects and spiders to walk on the surface of water. This is caused by the strength of the hydrogen bonds, making it difficult to break the surface of water. These animals, including the raft spider, are denser than water and yet are still able to walk on the surface.[59]
Capillary action
Due to an interplay of the forces of adhesion and surface tension, water exhibits capillary action whereby water rises into a narrow tube against the force of gravity. Water adheres to the inside wall of the tube and surface tension tends to straighten the surface causing a surface rise and more water is pulled up through cohesion. The process continues as the water flows up the tube until there is enough water such that gravity balances the adhesive force.
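A minimal numerical sketch of this balance is Jurin's law; the tube radius, contact angle, and density used below are illustrative assumptions, while the surface tension is the 25 °C value quoted earlier:

```python
import math

# Jurin's-law estimate of capillary rise in a narrow glass tube.
gamma = 71.99e-3      # N/m, surface tension of water at 25 degC (quoted above)
theta = 0.0           # rad, assumed contact angle on clean glass (idealization)
rho = 997.0           # kg/m^3, density of water near 25 degC
g = 9.81              # m/s^2, gravitational acceleration
r = 0.5e-3            # m, tube radius (example value)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"rise height ~ {h*100:.1f} cm")    # ~2.9 cm for a 0.5 mm radius tube
```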
Surface tension and capillary action are important in biology. For example, when water is carried through xylem up stems in plants, the strong intermolecular attractions (cohesion) hold the water column together and adhesive properties maintain the water attachment to the xylem and prevent tension rupture caused by transpiration pull.
Water as a solvent
Main article: Aqueous solution
Presence of colloidal calcium carbonate from high concentrations of dissolved lime turns the water of Havasu Falls turquoise.
Water is also a good solvent, due to its polarity. Substances that will mix well and dissolve in water (e.g. salts) are known as hydrophilic ("water-loving") substances, while those that do not mix well with water (e.g. fats and oils), are known as hydrophobic ("water-fearing") substances. The ability of a substance to dissolve in water is determined by whether or not the substance can match or better the strong attractive forces that water molecules generate between other water molecules. If a substance has properties that do not allow it to overcome these strong intermolecular forces, the molecules are "pushed out" from the water, and do not dissolve. Contrary to the common misconception, water and hydrophobic substances do not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable.
When an ionic or polar compound enters water, it is surrounded by water molecules (Hydration). The relatively small size of water molecules (~ 3 Angstroms) allows many water molecules to surround one molecule of solute. The partially negative dipole ends of the water are attracted to positively charged components of the solute, and vice versa for the positive dipole ends.
In general, ionic and polar substances such as acids, alcohols, and salts are relatively soluble in water, and non-polar substances such as fats and oils are not. Non-polar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with non-polar molecules.
An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into Na+ cations and Cl− anions, each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles make hydrogen bonds with the polar regions of the sugar molecule (OH groups) and allow it to be carried away into solution.
Quantum tunneling
The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.[60] On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.[61] Later in the same year, the discovery of the quantum tunneling of water molecules was reported.[62]
Chemical properties in nature
Action of water on rock over long periods of time typically leads to weathering and water erosion, physical processes that convert solid rocks and minerals into soil and sediment, but under some conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration, a type of chemical alteration of a rock which produces clay minerals. It also occurs when Portland cement hardens.
Water ice can form clathrate compounds, known as clathrate hydrates, with a variety of small molecules that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate, 4CH4·23H2O, naturally found in large quantities on the ocean floor.
Pure water has a concentration of hydroxide ions (OH−) equal to that of the hydronium (H3O+) or hydrogen (H+) ions, which gives a pH of 7 at 298 K. In practice, pure water is very difficult to produce. Water left exposed to air for any length of time will dissolve carbon dioxide, forming a dilute solution of carbonic acid, with a limiting pH of about 5.7. As cloud droplets form in the atmosphere and as raindrops fall through the air, minor amounts of CO2 are absorbed, and thus most rain is slightly acidic. If high amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and rain drops, producing acid rain.
Electromagnetic absorption
Water is relatively transparent to visible light, near ultraviolet light, and far-red light, but it absorbs most ultraviolet light, infrared light, and microwaves. Most photoreceptors and photosynthetic pigments utilize the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage of water's opacity to microwave radiation to heat the water inside of foods. The very weak onset of absorption in the red end of the visible spectrum lends water its intrinsic blue hue (see Color of water).
Heavy water and isotopologues
Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water.
Hydrogen occurs naturally in three isotopes. The most common isotope, 1H, sometimes called protium, accounts for more than 99.98% of hydrogen in water and consists of only a single proton in its nucleus. A second stable isotope, deuterium (chemical symbol D or 2H), has an additional neutron. Deuterium oxide, D2O, is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator. The third isotope, tritium (chemical symbol T or 3H), has one proton and two neutrons, and is radioactive, decaying with a half-life of about 4500 days. THO exists in nature only in minute quantities, being produced primarily via cosmic ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom, HDO, occurs naturally in ordinary water in low concentrations (~0.03%), and D2O occurs in far lower amounts (0.000003%); any such molecules are temporary as the atoms recombine.
The most notable physical differences between H2O and D2O, other than the simple difference in specific mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other kinetic effects. This is because the nucleus of deuterium is twice as heavy as protium, and this causes noticeable differences in bonding energies. The difference in boiling points allows the isotopologues to be separated. The self-diffusion coefficient of H2O at 25 °C is 23% higher than the value of D2O.[63] Because water molecules exchange hydrogen atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy water than pure dideuterium monoxide (D2O).
Consumption of pure isolated D2O may affect biochemical processes: ingestion of large amounts impairs kidney and central nervous system function. Small quantities can be consumed without any ill effects; humans are generally unaware of taste differences,[64] but sometimes report a burning sensation[65] or sweet flavor.[66] Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals.[67]
Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water molecules.[68]
Light water refers to deuterium-depleted water (DDW), water in which the deuterium content has been reduced below the standard 155 ppm level.
Standard water
Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally occurring water is almost completely composed of the neutron-less hydrogen isotope protium. Only 155 ppm include deuterium (2H or D), a hydrogen isotope with one neutron, and fewer than 20 parts per quintillion include tritium (3H or T), which has two neutrons.
Acid-base reactions
Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions.[69] According to the Brønsted-Lowry definition, an acid is a proton (H+) donor and a base is a proton acceptor.[70] When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid.[70] For instance, water receives an H+ ion from HCl, forming hydronium:
HCl + H2O ⇌ H3O+ + Cl−
In the reaction with ammonia, NH3, water donates an H+ ion, and is thus acting as an acid:
NH3 + H2O ⇌ NH4+ + OH−
Because the oxygen atom in water has two lone pairs, water often acts as a Lewis base, or electron pair donor, in reactions with Lewis acids, although it can also react with Lewis bases, forming hydrogen bonds between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species:
For example:
H+ (Lewis acid) + H2O (Lewis base) → H3O+
Metal cations (Lewis acids), such as the Fe3+ discussed under ligand chemistry below, coordinate water in the same way, while anions (Lewis bases) can instead hydrogen-bond to water acting as a Lewis acid.
When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH:
Na2CO3 + H2O ⇌ NaOH + NaHCO3
Ligand chemistry
Water's Lewis base character makes it a common ligand in transition metal complexes, examples of which range from solvated ions, such as Fe(H2O)63+, to perrhenic acid, which contains two water molecules coordinated to a rhenium atom, and various solid hydrates, such as CoCl2·6H2O. Water is typically a monodentate ligand, i.e., it forms only one bond with the central atom.[71]
Organic chemistry
As a hard base, water reacts readily with organic carbocations; for example, in a hydration reaction, a hydroxyl group (OH−) and an acidic proton are added to the two carbon atoms bonded together in a carbon-carbon double bond, resulting in an alcohol. When addition of water to an organic molecule cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the saponification of fats and the digestion of proteins and polysaccharides. Water can also be a leaving group in SN2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction.
Water in redox reactions
Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2.[72] It oxidizes chemicals such as hydrides, alkali metals, and some alkaline earth metals.[73][74][75] One example of an alkali metal reacting with water is:[76]
2 Na + 2 H2O → H2 + 2 Na+ + 2 OH−
Some other reactive metals, such as aluminum and beryllium, are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer.[77] Note, however, that the rusting of iron is a reaction between iron and oxygen[78] that is dissolved in water, not between iron and water.
Water can be oxidized to emit oxygen gas, but very few oxidants react with water even if their reduction potential is greater than the potential of O2/H2O. Almost all such reactions require a catalyst.[79] An example of the oxidation of water is:
4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2
Main article: Electrolysis of water
Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. This process is called electrolysis. The cathode half reaction is:
2 H+ + 2 e− → H2
The anode half reaction is:
2 H2O → O2 + 4 H+ + 4 e−
The gases produced bubble to the surface, where they can be collected. The standard potential of the water electrolysis cell (when heat is added to the reaction) is a minimum of 1.23 V at 25 °C. The operating potential is actually 1.48 V (or above) in practical electrolysis when heat input is negligible.
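These two voltages can be related to per-mole energies via E = ΔG/(nF); the sketch below uses only the Faraday constant and the voltages quoted above, and its outputs are consistent with the enthalpy and Gibbs energy of formation listed near the top of the article:

```python
# Relate the electrolysis voltages quoted above to per-mole energies of water:
# reversible case (1.23 V) <-> Gibbs energy, thermoneutral case (~1.48 V) <-> enthalpy.
F = 96485.0           # C/mol, Faraday constant
n = 2                 # electrons transferred per water molecule

dG = n * F * 1.23     # J/mol, reversible electrical work
dH = n * F * 1.48     # J/mol, thermoneutral energy input

print(f"dG ~ {dG/1e3:.0f} kJ/mol, dH ~ {dH/1e3:.0f} kJ/mol")
# ~237 kJ/mol and ~286 kJ/mol, consistent with the formation energies quoted above
```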
Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781.[80] The first decomposition of water into hydrogen and oxygen, by electrolysis, was done in 1800 by English chemist William Nicholson and Anthony Carlisle.[80][81] In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen.[82]
Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933.[83]
The properties of water have historically been used to define various temperature scales. Notably, the Kelvin, Celsius, Rankine, and Fahrenheit scales were, or currently are, defined by the freezing and boiling points of water. The less common scales of Delisle, Newton, Réaumur and Rømer were defined similarly. The triple point of water is a more commonly used standard point today.[84][better source needed]
See also
1. ^ a b Vienna Standard Mean Ocean Water (VSMOW), used for calibration, melts at 273.1500089(10) K (0.000089(10) °C), and boils at 373.1339 K (99.9839 °C). Other isotopic compositions melt or boil at slightly different temperatures.
2. ^ A commonly quoted value of 15.7 used mainly in organic chemistry for the pKa of water is incorrect.[7]
3. ^ Both acid and base names exist for water because it is amphoteric (able to react both as an acid or an alkali)
4. ^ (1-0.95865/1.00000) × 100% = 4.135%
5. ^ Adiabatic cooling resulting from the ideal gas law
6. ^ The source gives it as 0.0072°C/atm. However the author defines an atmosphere as 1,000,000 dynes/cm2 (a bar). Using the standard definition of atmosphere, 1,013,250 dynes/cm2, it works out to 0.0073°C/atm
7. ^ Using the fact that 0.5/0.0073 = 68.5
1. ^ "Definition of Hydrol". Merriam-Webster. (subscription required (help)).
2. ^ a b Braun, Charles L.; Smirnov, Sergei N. (1993-08-01). "Why is water blue?". Journal of Chemical Education. 70 (8): 612. Bibcode:1993JChEd..70..612B. doi:10.1021/ed070p612. ISSN 0021-9584.
3. ^ Water in Linstrom, P.J.; Mallard, W.G. (eds.) NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, Gaithersburg MD. (retrieved 2016-5-27)
4. ^ a b c Anatolievich, Kiper Ruslan. "Properties of substance: water".
5. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Vapor Pressure of Water From 0 to 370° C in Sec. 6. ISBN 9780849304842.
6. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 8: Dissociation Constants of Inorganic Acids and Bases. ISBN 9780849304842.
7. ^ "What is the pKa of Water". University of California, Davis.
8. ^ Ramires, Maria L. V.; Castro, Carlos A. Nieto de; Nagasaka, Yuchi; Nagashima, Akira; Assael, Marc J.; Wakeham, William A. (1995-05-01). "Standard Reference Data for the Thermal Conductivity of Water". Journal of Physical and Chemical Reference Data. 24 (3): 1377–1381. doi:10.1063/1.555963. ISSN 0047-2689.
9. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 8—Concentrative Properties of Aqueous Solutions: Density, Refractive Index, Freezing Point Depression, and Viscosity. ISBN 9780849304842.
10. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6.186. ISBN 9780849304842.
11. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 9—Dipole Moments. ISBN 9780849304842.
12. ^ a b c Water in Linstrom, P.J.; Mallard, W.G. (eds.) NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, Gaithersburg MD. (retrieved 2014-06-01)
13. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 620. ISBN 0-08-037941-9.
14. ^ "Water, the Universal Solvent". USGS.
15. ^ Reece, Jane B. (31 October 2013). Campbell Biology (10 ed.). Pearson. p. 48. ISBN 9780321775658.
16. ^ Reece, Jane B. (31 October 2013). Campbell Biology (10 ed.). Pearson. p. 44. ISBN 9780321775658.
17. ^ a b Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. p. 34. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
18. ^ Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005 (PDF). Royal Society of Chemistry. 22 Nov 2005. p. 85. ISBN 978-0-85404-438-2. Retrieved 2016-07-31.
19. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). IUPAC, Commission on Nomenclature of Organic Chemistry. Blackwell Science Ltd, UK. p. 99. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
20. ^ "Tetrahydropyran". Pubchem. National Institutes of Health. Retrieved 2016-07-31.
21. ^ "Compound Summary for CID 22247451". Pubchem Compound Database. National Center for Biotechnology Information.
22. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. pp. 27–28. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
24. ^ "Water (Code C65147)". NCI Thesaurus. National Cancer Institute. Retrieved 2016-08-01.
25. ^ Smith, Jared D.; Christopher D. Cappa; Kevin R. Wilson; Ronald C. Cohen; Phillip L. Geissler; Richard J. Saykally (2005). "Unified description of temperature-dependent hydrogen bond rearrangements in liquid water" (PDF). Proc. Natl. Acad. Sci. USA. 102 (40): 14171–14174. Bibcode:2005PNAS..10214171S. doi:10.1073/pnas.0506899102. PMC 1242322Freely accessible. PMID 16179387.
26. ^ Deguchi, Shigeru; Tsujii, Kaoru (2007-06-19). "Supercritical water: a fascinating medium for soft matter". Soft Matter. 3 (7): 797. Bibcode:2007SMat....3..797D. doi:10.1039/b611584e. ISSN 1744-6848.
27. ^ NASA – Oceans of Climate Change. (2009-04-22). Retrieved on 2011-11-22.
28. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 6: Properties of Ice and Supercooled Water. ISBN 9780849304842.
29. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6. Properties of Water and Steam as a Function of Temperature and Pressure. ISBN 9780849304842.
30. ^ "Decree on weights and measures". April 7, 1795. Gramme, le poids absolu d'un volume d'eau pure égal au cube de la centième partie du mètre, et à la température de la glace fondante.
31. ^ a b c d Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 625. ISBN 0-08-037941-9.
32. ^ Shell, Scott M.; Debenedetti, Pablo G. & Panagiotopoulos, Athanassios Z. (2002). "Molecular structural order and anomalies in liquid silica" (PDF). Phys. Rev. E. 66: 011202. arXiv:cond-mat/0203383Freely accessible. Bibcode:2002PhRvE..66a1202S. doi:10.1103/PhysRevE.66.011202.
33. ^ a b c d e Perlman, Howard. "Water Density". The USGS Water Science School. Retrieved 2016-06-03.
34. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 938. ISBN 978-1-13-361109-7.
35. ^ Loerting, Thomas; Salzmann, Christoph; Kohl, Ingrid; Mayer, Erwin; Hallbrucker, Andreas (2001-01-01). "A second distinct structural "state" of high-density amorphous ice at 77 K and 1 bar". Physical Chemistry Chemical Physics. 3 (24): 5355–5357. doi:10.1039/b108676f. ISSN 1463-9084.
36. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 624. ISBN 0-08-037941-9.
37. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 493. ISBN 978-1-13-361109-7.
38. ^ a b c "Can the ocean freeze?". National Ocean Service. National Oceanic and Atmospheric Administration. Retrieved 2016-06-09.
39. ^ Fine, R.A. & Millero, F.J. (1973). "Compressibility of water as a function of temperature and pressure". Journal of Chemical Physics. 59 (10): 5529. Bibcode:1973JChPh..59.5529F. doi:10.1063/1.1679903.
40. ^ a b Nave, R. "Bulk Elastic Properties". HyperPhysics. Georgia State University. Retrieved 2007-10-26.
42. ^ Zeitler, W.; Ohlhof, T.; Ebner, H. (2000). "Recomputation of the global Mars control-point network" (PDF). Photogrammetric Engineering & Remote Sensing. 66 (2): 155–161. Retrieved 2009-12-26.
43. ^ Schlüter, Oliver (2003-07-28). "Impact of High Pressure — Low Temperature Processes on Cellular Materials Related to Foods" (PDF). Technischen Universität Berlin.
44. ^ Tammann, Gustav H.J.A (1925). "The States Of Aggregation". Constable And Company.
45. ^ Lewis, William C.M. & Rice, James (1922). A System of Physical Chemistry. Longmans, Green and Co.
46. ^ Debenedetti, P. G. & Stanley, H. E. (2003). "Supercooled and Glassy Water" (PDF). Physics Today. 56 (6): 40–46. Bibcode:2003PhT....56f..40D. doi:10.1063/1.1595053.
47. ^ Sharp, Robert Phillip (1988-11-25). Living Ice: Understanding Glaciers and Glaciation. Cambridge University Press. p. 27. ISBN 0-521-33009-2.
48. ^ "Revised Release on the Pressure along the Melting and Sublimation Curves of Ordinary Water Substance" (PDF). IAPWS. September 2011. Retrieved 2013-02-19.
49. ^ a b Light, Truman S.; Licht, Stuart; Bevilacqua, Anthony C.; Morash, Kenneth R. (2005-01-01). "The Fundamental Conductivity and Resistivity of Water". Electrochemical and Solid-State Letters. 8 (1): E16–E19. doi:10.1149/1.1836121. ISSN 1099-0062.
50. ^ Crofts, A. (1996). "Lecture 12: Proton Conduction, Stoichiometry". University of Illinois at Urbana-Champaign. Retrieved 2009-12-06.
51. ^ Hoy, AR; Bunker, PR (1979). "A precise solution of the rotation bending Schrödinger equation for a triatomic molecule with application to the water molecule". Journal of Molecular Spectroscopy. 74: 1–8. Bibcode:1979JMoSp..74....1H. doi:10.1016/0022-2852(79)90019-5.
52. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 393. ISBN 978-1-13-361109-7.
53. ^ Campbell, Mary K. & Farrell, Shawn O. (2007). Biochemistry (6th ed.). Cengage Learning. pp. 37–38. ISBN 978-0-495-39041-1.
54. ^ Ball, Philip (2008). "Water—an enduring mystery". Nature. 452 (7185): 291–292. Bibcode:2008Natur.452..291B. doi:10.1038/452291a. PMID 18354466.
55. ^ Campbell, Neil A. & Reece, Jane B. (2009). Biology (8th ed.). Pearson. p. 47. ISBN 978-0-8053-6844-4.
56. ^ Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro; Decuzzi, Paolo (2014). "Scaling behaviour for the water transport in nanoconfined geometries". Nature Communications. 5: 4565. Bibcode:2014NatCo...5E4565C. doi:10.1038/ncomms4565.
57. ^ "Physical Forces Organizing Biomolecules" (PDF). Biophysical Society. Archived from the original on August 7, 2007.
58. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Water in Table Surface Tension of Common Liquids. ISBN 9780849304842.
59. ^ Campbell, Neil (2011). Biology. Benjamin Cummings. p. 48. ISBN 0321558235.
60. ^ Pugliano, N. (1992-11-01). "Vibration-Rotation-Tunneling Dynamics in Small Water Clusters". Lawrence Berkeley Lab., CA (United States): 6. doi:10.2172/6642535.
61. ^ Richardson, Jeremy O.; Pérez, Cristóbal; Lobsiger, Simon; Reid, Adam A.; Temelso, Berhane; Shields, George C.; Kisiel, Zbigniew; Wales, David J.; Pate, Brooks H. (2016-03-18). "Concerted hydrogen-bond breaking by quantum tunneling in the water hexamer prism". Science. 351 (6279): 1310–1313. Bibcode:2016Sci...351.1310R. doi:10.1126/science.aae0012. ISSN 0036-8075. PMID 26989250. Retrieved 2016-04-23.
62. ^ Kolesnikov, Alexander I. (2016-04-22). "Quantum Tunneling of Water in Beryl: A New State of the Water Molecule". Physical Review Letters. 116 (16): 167802. Bibcode:2016PhRvL.116p7802K. doi:10.1103/PhysRevLett.116.167802. PMID 27152824. Retrieved 2016-04-23.
63. ^ Hardy, Edme H.; Zygar, Astrid; Zeidler, Manfred D.; Holz, Manfred; Sacher, Frank D. (2001). "Isotope effect on the translational and rotational motion in liquid water and ammonia". J. Chem Phys. 114 (7): 3174–3181. Bibcode:2001JChPh.114.3174H. doi:10.1063/1.1340584.
64. ^ Urey, Harold C.; et al. (15 Mar 1935). "Concerning the Taste of Heavy Water". Science. 81 (2098). New York: The Science Press. p. 273. doi:10.1126/science.81.2098.273-a.
65. ^ "Experimenter Drinks 'Heavy Water' at $5,000 a Quart". Popular Science Monthly. 126 (4). New York: Popular Science Publishing. Apr 1935. p. 17. Retrieved 7 Jan 2011.
66. ^ Müller, Grover C. (June 1937). "Is 'Heavy Water' the Fountain of Youth?". Popular Science Monthly. 130 (6). New York: Popular Science Publishing. pp. 22–23. Retrieved 7 Jan 2011.
67. ^ Miller, Inglis J., Jr.; Mooser, Gregory (Jul 1979). "Taste Responses to Deuterium Oxide". Physiology & Behavior. 23 (1): 69–74. doi:10.1016/0031-9384(79)90124-0.
68. ^ "Guideline on the Use of Fundamental Physical Constants and Basic Constants of Water" (PDF). IAPWS. 2001.
69. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 659. ISBN 978-1-13-361109-7.
70. ^ a b Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 654. ISBN 978-1-13-361109-7.
71. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 984. ISBN 978-1-13-361109-7.
72. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 171. ISBN 978-1-13-361109-7.
73. ^ "Hydrides". Chemwiki. UC Davis. Retrieved 2016-06-25.
74. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 932. ISBN 978-1-13-361109-7.
75. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 936. ISBN 978-1-13-361109-7.
76. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 338. ISBN 978-1-13-361109-7.
77. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 862. ISBN 978-1-13-361109-7.
78. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 981. ISBN 978-1-13-361109-7.
79. ^ Charlot, G. (2007). Qualitative Inorganic Analysis. Read Books. p. 275. ISBN 1-4067-4789-0.
80. ^ a b Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 601. ISBN 0-08-037941-9.
81. ^ "Enterprise and electrolysis...". Royal Society of Chemistry. August 2003. Retrieved 2016-06-24.
82. ^ "Joseph Louis Gay-Lussac, French chemist (1778–1850)". 1902 Encyclopedia. Footnote 122-1. Retrieved 2016-05-26.
83. ^ Lewis, G. N.; MacDonald, R. T. (1933). "Concentration of H2 Isotope". The Journal of Chemical Physics. 1 (6): 341. Bibcode:1933JChPh...1..341L. doi:10.1063/1.1749300.
84. ^ A Brief History of Temperature Measurement. Retrieved on 2011-11-22.
External links
Wikiversity has small "student" steam tables suitable for classroom use. |
214d27a7e81f90ea | AQME Advancing Quantum Mechanics for Engineers
by Tain Lee Barzso, Dragica Vasileska, Gerhard Klimeck
Introduction to Advancing Quantum Mechanics for Engineers and Physicists
“Advancing Quantum Mechanics for Engineers” (AQME) toolbox is an assemblage of individually authored tools that, used in concert, offer educators and students a one-stop-shop for semiconductor education. The AQME toolbox holds a set of easily employable nanoHUB tools appropriate for teaching a quantum mechanics class in either engineering or physics. Users no longer have to search the nanoHUB to find the appropriate applications for discovery that are related to quantum mechanics; users, both instructors and students, can simply log in and take advantage of the assembled tools and associated materials such as homework or project assignments.
Thanks to its contributors, nanoHUB users and the AQME toolbox have benefited tremendously from the hard work invested in tool development. Simulation runs performed using the AQME tools are credited to the individual tools and count toward individual tool rankings. Uses of individual tools within the AQME tool set are also counted, to measure AQME impact and to improve the toolbox. On their respective pages, the individual tools are linked to the AQME toolbox.
Participation in this open source, interactive educational initiative is vital to its success, and all nanoHUB users can:
• Contribute content to AQME by uploading it to the nanoHUB. (See “Contribute>Contribute Content” on the nanoHUB mainpage.) Tagging contributions with “AQME” will associate them with this initiative and, because the toolbox is actively managed, such contributions may also be added to the toolbox.
• Provide feedback on the items you use in AQME through the review system. (Please be explicit and provide constructive feedback.)
• Let us know when things do not work by filing a ticket via the nanoHUB “Help” feature on every page.
• Let us know what you are doing and submit your suggestions for improving the nanoHUB by using the “Feedback” section, which you can find under “Support.”
Finally, be sure to share AQME and other nanoHUB success stories; the nanotechnology community and its supporters need to hear of nanoHUB’s impact.
Discovery that is Possible through Quantum Mechanics
Nanotechnology has yielded a number of unique structures that are not found readily in nature. Most demonstrate an essential quality of quantum mechanics known as quantum confinement. Confinement is the idea of keeping electrons trapped in a small region, about 30 nm or smaller. Quantum confinement comes in several dimensionalities. 2-D confinement, for example, restricts motion in only one dimension, resulting in a quantum well (or plane); most semiconductor lasers are currently built from such quantum wells. 1-D confinement occurs in nanowires, and 0-D confinement is found only in the quantum dot.
The study of quantum confinement leads, foremost, to electronic properties not found in today’s semiconductor devices. The quantum dot works well as a first example. The typical quantum dot is anywhere between 3-60 nm in diameter. That’s still 30 to 600 times the size of a typical atom. A quantum dot exhibits 0-D confinement, meaning that electrons are confined in all three dimensions. In nature, only atoms have 0-D confinement; thus, a quantum dot can be described loosely as an ‘artificial atom.’ This knowledge is vitally important, as atoms are too small and too difficult to isolate in experiments. Conversely, quantum dots are large enough to be manipulated by magnetic fields and can even be moved around with an STM or AFM. We can deduce many important atomistic characteristics from a quantum dot that would otherwise be impossible to research in an atom.
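To make the size argument concrete, here is a minimal Python sketch (not part of the AQME tools) that evaluates the textbook infinite-square-well levels E_n = n²π²ℏ²/(2mL²) for an atom-sized box and for dot-sized boxes. The free-electron mass and the specific widths are illustrative assumptions; a real dot would use the material's effective mass and a more realistic confinement shape.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # kg, free-electron mass (an illustrative choice)
EV   = 1.602176634e-19   # J per eV

def square_well_levels(width_m, n_levels=3, mass=M_E):
    """Energies E_n = n^2 pi^2 hbar^2 / (2 m L^2) of an infinite 1-D well, in eV."""
    n = np.arange(1, n_levels + 1)
    return (n**2 * np.pi**2 * HBAR**2) / (2.0 * mass * width_m**2) / EV

# Level spacing shrinks rapidly as the "box" grows from atomic size to quantum-dot size,
# but the spectrum stays discrete and atom-like.
for L_nm in (0.3, 3.0, 30.0):
    print(f"L = {L_nm:5.1f} nm ->", np.round(square_well_levels(L_nm * 1e-9), 4), "eV")
```

With these assumptions the spacings drop from electronvolts at atomic sizes to millielectronvolts at tens of nanometers, which is why a dot behaves like a large, tunable "artificial atom."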
Confinement also increases the efficiency of today’s electronics. The laser is based on a 2-D confinement layer that is usually created with some form of epitaxy such as Molecular Beam Epitaxy or Chemical Vapor Deposition. The bulk of modern lasers created with this method are highly functional, but these lasers are ultimately inefficient in terms of energy consumption and heat dissipation. Moving to 1-D confinement in wires or 0-D confinement in quantum dots allows for higher efficiencies and brighter lasers. Quantum dot lasers are currently the best lasers available, although their fabrication is still being worked out.
Confinement is just one manifestation of quantum mechanics in nanodevices. Tunneling and quantum interference are two other manifestations of quantum mechanics in the operation of scanning tunneling microscopes and resonant tunneling diodes, respectively. For more information on the theoretical aspects of Quantum Mechanics check the following resources:
Quantum Mechanics for Engineers: Podcasts
Quantum Mechanics for Engineers: Course Assignments
Because understanding quantum mechanics is so foundational to an understanding of the operation of nanoscale devices, almost every Electrical Engineering department (in which there is a strong nanotechnology experimental or theoretical group) and all Physics departments teach the fundamental principles of quantum mechanics and their application to nanodevice research. Several conceptual sets and theories are taught within these courses. Normally, students are first introduced to the concept of particle-wave duality (the photoelectric effect and the double-slit experiment), the solutions of the time-independent Schrödinger equation for open systems (piece-wise constant potentials), tunneling, and bound states. The description of the solution of the Schrödinger equation for periodic potentials (Kronig-Penney model) naturally follows from the discussion of double well, triple well and n-well structures. This leads the students to the concept of energy bands and energy gaps, and the concept of the effective mass that can be extracted from the pre-calculated band structure by fitting the curvature of the bands. The Tsu-Esaki formula is then investigated so that, having calculated the transmission coefficient, students can calculate the tunneling current in resonant tunneling diodes and Esaki diodes. After establishing basic principles of quantum mechanics, the harmonic oscillator problem is then discussed in conjunction with understanding vibrations of a crystalline lattice, and the idea of phonons is introduced, as well as the concept of creation and annihilation operators. The typical quantum mechanics class for undergraduate/first-year graduate students is then completed with the discussion of stationary and time-dependent perturbation theory and the derivation of the Fermi Golden Rule, which is used as a starting point of a graduate-level class in semiclassical transport. Coulomb blockade is another topic that a typical quantum mechanics class will include.
Particle-Wave Duality
A wave-particle dual nature was discovered and publicized in the early debate about whether light was composed of particles or waves. Evidence for the description of light as waves was well established at the turn of the century, when the photoelectric effect introduced firm evidence of a light-as-particle nature. This dual nature was found to also be characteristic of electrons. The electron's particle nature was already well documented when the de Broglie hypothesis, and subsequent experiments by Davisson and Germer, established the wave nature of the electron.
Particle-Wave Duality: an Animation
This movie helps students to better distinguish when nano-things behave as particles and when they behave as waves. The link below connects to an exercise on these concepts.
Introductory Concepts in Quantum Mechanics: an Exercise
Solution of the Time-Independent Schrödinger Equation
Piece-Wise Linear Barrier Tool in AQME – Open Systems
The Piece-Wise Linear Barrier Tool in AQME allows calculation of the transmission and reflection coefficients of an arbitrary five-, seven-, nine-, eleven-, or, in general, 2n-segment piece-wise constant potential energy profile. For multi-well structures it also calculates the quasi-bound states, so it can be used as a simple demonstration tool for the formation of energy bands. It can also be used in stationary perturbation theory exercises to test the validity of, for example, the first- and second-order corrections to the ground-state energy of the system due to small perturbations of the confining potential. The Piece-Wise Linear Barrier Tool in AQME can also be used to test the validity of the WKB approximation for triangular potential barriers.
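As a rough illustration of the physics behind this tool (not its actual implementation), the sketch below assembles the standard 2x2 transfer matrices for a piece-wise constant profile and computes the transmission coefficient. The GaAs-like effective mass, the layer widths, and the barrier heights are illustrative assumptions; for a symmetric double barrier the scan shows the pronounced transmission peak at the quasi-bound energy.

```python
import numpy as np

HBAR = 1.054571817e-34
M_E  = 9.1093837015e-31
EV   = 1.602176634e-19

def transmission(E_eV, widths_nm, heights_eV, mass=0.067 * M_E):
    """Transmission through a piece-wise constant potential via the transfer-matrix method.
    widths_nm / heights_eV describe the interior segments; the outer leads are at V = 0."""
    E = E_eV * EV
    V = np.array([0.0] + list(heights_eV) + [0.0]) * EV          # lead | segments | lead
    x = np.concatenate(([0.0], np.cumsum(widths_nm) * 1e-9))     # interface positions
    k = np.sqrt(2.0 * mass * (E - V + 0j)) / HBAR                # complex below a barrier top

    def D(kj, xj):  # matches psi and dpsi/dx at an interface
        return np.array([[np.exp(1j * kj * xj), np.exp(-1j * kj * xj)],
                         [1j * kj * np.exp(1j * kj * xj), -1j * kj * np.exp(-1j * kj * xj)]])

    M = np.eye(2, dtype=complex)
    for i in range(len(x)):                                      # interface i joins regions i and i+1
        M = M @ np.linalg.solve(D(k[i], x[i]), D(k[i + 1], x[i]))
    t = 1.0 / M[0, 0]
    return float((k[-1].real / k[0].real) * abs(t) ** 2)

# Illustrative double barrier: 0.3 eV barriers, 3 nm thick, separated by a 5 nm well.
energies = np.linspace(0.01, 0.4, 400)
T = [transmission(E, widths_nm=[3, 5, 3], heights_eV=[0.3, 0.0, 0.3]) for E in energies]
print("peak transmission ~", round(max(T), 3), "at E ~", round(energies[int(np.argmax(T))], 3), "eV")
```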
Available resources:
Bound States Lab in AQME
The Bound States Lab in AQME determines the bound states and the corresponding wavefunctions in a square, harmonic, and triangular potential well. The maximum number of eigenstates that can be calculated is 100. Students clearly see the nature of the separation of the states in these three prototypical confining potentials, with which students can approximate realistic quantum potentials that occur in nature.
The panel below (left) shows energy eigenstates of a harmonic oscillator. Probability density of the ground state that demonstrates purely quantum-mechanical behavior is shown in the middle panel below. Probability density of the 20th subband demonstrates the more classical behavior as the well opens (right panel below).
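The same three prototypical wells can be explored with a few lines of Python. The finite-difference sketch below (in natural units ℏ = m = 1, an illustrative choice rather than the lab's actual numerics) diagonalizes the 1-D Hamiltonian on a grid and reproduces the familiar harmonic-oscillator ladder E_n = ℏω(n + 1/2).

```python
import numpy as np

def bound_states(potential, x, mass=1.0, hbar=1.0, n_states=5):
    """Lowest eigenstates of -hbar^2/(2m) d^2/dx^2 + V(x) on a uniform grid (finite differences)."""
    dx = x[1] - x[0]
    V = potential(x)
    # Tridiagonal Hamiltonian from the 3-point second-derivative stencil.
    main = hbar**2 / (mass * dx**2) + V
    off  = -hbar**2 / (2.0 * mass * dx**2) * np.ones(len(x) - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E, psi = np.linalg.eigh(H)
    return E[:n_states], psi[:, :n_states]

x = np.linspace(-10, 10, 1000)
E, psi = bound_states(lambda x: 0.5 * x**2, x)   # harmonic well with omega = 1
print(np.round(E, 3))                            # close to 0.5, 1.5, 2.5, ... = hbar*omega*(n + 1/2)
```

Swapping the lambda for a square or triangular well shows the different level spacings the lab panels illustrate.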
Available resources:
Energy Bands and Effective Masses
Periodic Potential Lab in AQME
The Periodic Potential Lab in AQME solves the time-independent Schrödinger equation for a 1-D periodic spatial potential. Rectangular, triangular, parabolic (harmonic), and Coulomb potential confinements can be considered. The user can determine energetic and spatial details of the potential profiles, compute the allowed and forbidden bands, plot the bands in a compact and an expanded zone, and compare the results against a simple effective-mass parabolic band. Transmission is also calculated. This lab also allows the students to become familiar with the reduced-zone and expanded-zone representations of the dispersion relation (E-k relation for carriers).
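For the rectangular-barrier (Kronig-Penney) case, the allowed bands follow from the condition |cos K(a+b)| ≤ 1. A small sketch in natural units (ℏ = m = 1), with illustrative well/barrier parameters rather than the lab's defaults, locates the band edges numerically:

```python
import numpy as np

def kp_rhs(E, a=4.0, b=1.0, V0=5.0, hbar=1.0, m=1.0):
    """Right-hand side of the Kronig-Penney relation cos(K(a+b)) = f(E) for rectangular barriers."""
    k1 = np.sqrt(2 * m * E + 0j) / hbar
    k2 = np.sqrt(2 * m * (E - V0) + 0j) / hbar           # imaginary below the barrier top
    f = (np.cos(k1 * a) * np.cos(k2 * b)
         - (k1**2 + k2**2) / (2 * k1 * k2) * np.sin(k1 * a) * np.sin(k2 * b))
    return f.real

E = np.linspace(0.01, 20.0, 4000)
allowed = np.abs([kp_rhs(e) for e in E]) <= 1.0          # |cos| <= 1 -> propagating Bloch states
edges = np.flatnonzero(np.diff(allowed.astype(int)))     # indices where bands open or close
print("band edges near E =", np.round(E[edges], 2))      # alternating bottoms/tops of allowed bands
```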
Available resources:
Periodic Potentials and Bandstructure: an Exercise
Band Structure Lab in AQME
Band structure of Si (left panel) and GaAs (right panel).
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the ranges of energy that an electron is “forbidden” or “allowed” to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical properties. The Band Structure Lab in AQME enables the study of the bulk dispersion relationships of Si, GaAs, and InAs. Plotting the full dispersion relation of different materials, students first become familiar with the band structure of direct band gap (GaAs, InAs) as well as indirect band gap (Si) semiconductors. For the case of multiple conduction-band valleys, students must first determine the Miller indices of one of the equivalent valleys, and from that information they can deduce how many equivalent conduction-band valleys there are in Si and Ge, for example. In advanced applications, users can apply tensile and compressive strain and observe the variation in the band structure, band gaps, and effective masses. Advanced users can also study band structure effects in ultra-scaled (thin body) quantum wells and nanowires of different cross sections. The Band Structure Lab uses the sp3d5s* tight-binding method to compute E(k) for bulk, planar, and nanowire semiconductors.
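The effective-mass extraction mentioned above amounts to reading off the band curvature near an extremum, m* = ℏ²/(d²E/dk²). A minimal sketch, using a synthetic GaAs-like parabolic band in place of the lab's computed E(k) points (the 0.067 m_e value and the k range are illustrative assumptions):

```python
import numpy as np

HBAR = 1.054571817e-34
M_E  = 9.1093837015e-31
EV   = 1.602176634e-19

def effective_mass(k, E_eV):
    """m* = hbar^2 / (d^2E/dk^2), with the curvature taken at the band minimum
    from a central finite difference on uniformly spaced k samples."""
    E = np.asarray(E_eV) * EV
    i = int(np.argmin(E))                       # index of the band extremum
    dk = k[1] - k[0]
    curvature = (E[i + 1] - 2 * E[i] + E[i - 1]) / dk**2
    return HBAR**2 / curvature

# Toy data: a parabolic band E = hbar^2 k^2 / (2 * 0.067 m_e), i.e. a GaAs-like Gamma valley.
k = np.linspace(-1e8, 1e8, 21)                  # 1/m
E = HBAR**2 * k**2 / (2 * 0.067 * M_E) / EV     # eV
print(effective_mass(k, E) / M_E)               # ~ 0.067
```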
Available resource:
Bulk Band Structure: a Simulation Exercise
The first Brillouin zone of the FCC lattice corresponds to the first Brillouin zone of all diamond and zinc-blende materials (C, Si, Ge, GaAs, InAs, CdTe, etc.). There are 8 hexagonal faces (normal to 111) and 6 square faces (normal to 100). The sides of each hexagon and each square are equal.
Supplemental Information: Specification of High-Symmetry Points
Symbol Description
Γ Center of the Brillouin zone
Simple Cube
M Center of an edge
R Corner point
X Center of a face
Face-Centered Cubic
K Middle of an edge joining two hexagonal faces
L Center of a hexagonal face
U Middle of an edge joining a hexagonal and a square face
W Corner point
X Center of a square face
Body-Centered Cubic
H Corner point joining four edges
N Center of a face
P Corner point joining three edges
Hexagonal
A Center of a hexagonal face
H Corner point
K Middle of an edge joining two rectangular faces
L Middle of an edge joining a hexagonal and a rectangular face
M Center of a rectangular face
Real World Applications
Schred Tool in AQME
The Schred Tool in AQME calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide-Semiconductor) structure and in a typical SOI structure by solving self-consistently the one-dimensional (1-D) Poisson equation and the 1-D Schrödinger equation. The Schred tool is specifically designed for the Si/SiO2 interface and takes into account the mass anisotropy of the conduction bands, as well as different crystallographic orientations.
Available resources:
1-D Heterostructure Tool in AQME
The 1-D Heterostructure Tool in AQME simulates confined states in 1-D heterostructures by calculating the charge in the confined states self-consistently, based on a quantum-mechanical description of the one-dimensional device. The great interest in HEMT devices is motivated by the limits that will be reached with the scaling of conventional transistors. The 1D Heterostructure Tool is, in that respect, a very valuable tool for the design of HEMT devices: one can determine, for example, the position and magnitude of the delta-doped layer and the thicknesses of the barrier and spacer layers that maximize the amount of free carriers in the channel, which in turn leads to larger drive current. This is clearly illustrated in the examples below.
Available resources:
Resonant Tunneling Diode Lab in AQME
Put a potential barrier in the path of electrons, and it will block their flow; but, if the barrier is thin enough, electrons can tunnel right through due to quantum mechanical effects. It is even more surprising that, if two or more thin barriers are placed closely together, electrons will bounce between the barriers, and, at certain resonant energies, flow right through the barriers as if there were none. Run the Resonant Tunneling Diode Lab in AQME, which lets you control the number of barriers and their material properties, and then simulate current as a function of bias. Devices exhibit a surprising negative differential resistance, even at room temperature. This tool can be run online in your web browser as an active demo.
Available resources:
Quantum Dot Lab in AQME
The Quantum Dot Lab in AQME computes the eigenstates of a particle in a box of various shapes, including domes and pyramids.
Available resources:
Scattering and Fermi’s Golden Rule
Scattering is a general physical process whereby some forms of radiation, such as light, sound, or moving particles, are forced to deviate from a straight trajectory by one or more localized non-uniformities in the medium through which they pass. In conventional use, scattering also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections that undergo scattering are often called diffuse reflections, and unscattered reflections are called specular (mirror-like) reflections. The types of non-uniformities (sometimes known as scatterers or scattering centers) that can cause scattering are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, defects in crystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.

In quantum physics, Fermi's golden rule is a way to calculate the transition rate (probability of transition per unit time) from one energy eigenstate of a quantum system into a continuum of energy eigenstates, due to a perturbation. The Bulk Monte-Carlo Lab in AQME calculates the dependence of the scattering rates on electron energy for the most important scattering mechanisms in the most commonly used materials in the semiconductor industry, such as Si, Ge, GaAs, InSb, GaN, and SiC. For the proper parameter set for, e.g., 4H-SiC, please refer to the following article.
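For reference, the textbook statement of the golden rule described above can be written as follows (H′ is the perturbation and ρ(E_f) the density of final states at the energy-conserving final energy):

```latex
% Fermi's golden rule: rate of transitions from an initial state |i> into a
% continuum of final states |f> under a weak perturbation H', to first order.
\Gamma_{i \to f} \;=\; \frac{2\pi}{\hbar}\,\bigl|\langle f \,|\, H' \,|\, i \rangle\bigr|^{2}\,\rho(E_f)
```

Rates of this form, evaluated for the scattering mechanisms the lab includes, are what the Bulk Monte-Carlo Lab plots versus electron energy.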
Available Resources:
Coulomb Blockade
The Coulomb Blockade Lab in AQME allows simulation of non-linear current-voltage (I-V) characteristics through single and double quantum dots and as such illustrates various single electron transport phenomena.
Available resources:
Users no longer have to search the nanoHUB to find the appropriate applications for discovery that are related to quantum mechanics; users, both instructors and students, can simply log in to the AQME toolbox and take advantage of the assembled tools and resources, such as animations, exercises or podcasts.
AQME Constituent Tools
Piece-Wise Constant Potential Barriers Tool
Bound States Calculation Lab
Band Structure Lab
Periodic Potential Lab
1D Heterostructure Tool
Resonant Tunneling Diode Simulator
Quantum Dot Lab
Bulk Monte Carlo Lab
Coulomb Blockade Simulation
3 The Schrödinger equation
If the electron in an atom of hydrogen is a standing wave, as de Broglie had assumed, why should it be confined to a circle? After the insight that particles can behave like waves, which came ten years after Bohr’s quantization postulate, it took less than three years for the full-fledged (albeit still non-relativistic) quantum theory to be formulated, not once but twice in different mathematical attire, by Werner Heisenberg in 1925 and by Erwin Schrödinger in 1926.
Let’s take a look at where the Schrödinger equation, the centerpiece of non-relativistic quantum mechanics, comes from. Figure 2.3.1 illustrates the properties of a traveling wave ψ of Amplitude A and phase φ = kx − t. The wavenumber k is defined as 2π/λ; the angular frequency ω is given by 2π/T. Hence we can also write
φ = 2π [(xλ) − (t/T)].
Keeping t constant, we see that a full cycle (2π, corresponding to 360°) is completed if x increases from 0 to the wavelength λ. Keeping x constant, we see that a full cycle is completed if t increases from 0 to the period T. (The reason why 2π corresponds to 360° is that it is the circumference of a circle of unit radius.)
Figure 2.3.1 The slanted lines represent the alternating crests and troughs of ψ. The passing of time is indicated by the upward-moving dotted line, which represents the temporal present. It is readily seen that the crests and troughs move toward the right. By focusing on a fixed time, one can see that a cycle (crest to crest, say) completes after a distance λ. By focusing on a fixed place, one can see that a cycle completes after a time T.
The mathematically simplest and most elegant way to describe ψ is to write
ψ = [A:φ] = [A:kx − ωt].
This is a complex number of magnitude A and phase φ. It is also a function ψ(x,t) of one spatial dimension (x) and time t.
We now introduce the operators ∂x and ∂t. While a function is a machine that accepts a number (or several numbers) and returns a (generally different) number (or set of numbers), an operator is a machine that accepts a function and returns a (generally different) function. All we need to know about these operators at this point is that if we insert ψ into ∂x, out pops ikψ, and if we insert ψ into ∂t, out pops −iωψ:
∂xψ = ikψ, ∂tψ = −iωψ.
If we feed ikψ back into ∂x, out pops (not unexpectedly) (ik)²ψ = −k²ψ. Thus
(∂x)²ψ = −k²ψ.
Using Planck’s relation E = ℏω and de Broglie’s relation p = h/λ = ℏk to replace ω and k by E and p, we obtain
∂tψ = −i(E/ℏ)ψ, ∂xψ = i(p/ℏ)ψ, (∂x)²ψ = −(p/ℏ)²ψ,
(2.3.1) Eψ = iℏ∂tψ, pψ = (ℏ/i)∂xψ, p²ψ = −ℏ²(∂x)²ψ.
We now invoke the classical, non-relativistic relation between the energy E and the momentum p of a freely moving particle,
(2.3.2) E = p²/2m,
where m is the particle’s mass. We shall discover the origin of this relation when taking on the relativistic theory. The right-hand side is the particle’s kinetic energy.
Multiplying Eq. (2.3.2) by ψ and using Eqs. (2.3.1), we get
(2.3.3) iℏ∂tψ = −(ℏ²/2m) (∂x)²ψ.
This is the Schrödinger equation for a freely moving particle with one degree of freedom — a particle capable of moving freely up and down the x-axis. We shouldn’t be surprised to find that Eq. (2.3.3) imposes the following constraint on ψ:
(2.3.4) ℏω = ℏ²k²/2m.
This is nothing else than Eq. (2.3.2) with E and p replaced by ℏω and ℏk according to the relations of Planck and de Broglie.
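As a quick check of this chain of reasoning, the short SymPy sketch below (an illustrative addition, assuming SymPy is available) verifies symbolically that the plane wave, together with the dispersion relation (2.3.4), satisfies Eq. (2.3.3):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k = sp.symbols('k', positive=True)
hbar, m = sp.symbols('hbar m', positive=True)

omega = hbar * k**2 / (2 * m)                    # the dispersion relation (2.3.4)
psi = sp.exp(sp.I * (k * x - omega * t))         # the plane wave [1 : kx - omega t]

lhs = sp.I * hbar * sp.diff(psi, t)              # i*hbar * d(psi)/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)    # -(hbar^2 / 2m) * d^2(psi)/dx^2
print(sp.simplify(lhs - rhs))                    # 0  ->  psi solves Eq. (2.3.3)
```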
We have started with a specific wave function ψ. What does the general solution of Eq. (2.3.3) look like? The question is readily answered by taking the following into account: If ψ1 and ψ2 are solutions of Eq. (2.3.3), then for any pair of complex numbers a,b the function ψ = aψ1 + bψ2 is another solution. The general solution, accordingly, is
(2.3.5) ψ(x,t) = (1/√(2π)) ∫dk [a(k):kx − ω(k)t].
The factor (1/√(2π)) ensures that the probabilities calculated with the help of ψ are normalized (that is, the probabilities of all possible outcomes of any given measurement add up to 1). The symbol ∫dk indicates a summation over all values of k from k=−∞ to k=+∞: every value contributes a complex number a(k)[1:kx − ω(k)t], where ω(k) is given by Eq. (2.3.4).
If the particle is moving under the influence of a potential V, the potential energy qV (q being the particle’s charge) needs to be added to the kinetic energy (the right-hand side of Eq. 2.3.2). The Schrödinger equation then takes the form
(2.3.6) iℏ∂tψ = −(ℏ²/2m) (∂x)²ψ + qVψ.
Its generalization to three-dimensional space is now straightforward:
(2.3.7) iℏ∂tψ = −(ℏ²/2m) [(∂x)² + (∂y)² + (∂z)²]ψ + qVψ.
Quantum Mechanics: Hydrogen Atom and Electron Spin
By Dragica Vasileska1, Gerhard Klimeck2
1. Arizona State University 2. Purdue University
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively-charged proton and a single negatively-charged electron bound to the nucleus by the Coulomb force. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons; other isotopes contain one or more neutrons. This article primarily concerns hydrogen-1.
The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body problem physical system which has yielded many simple analytical solutions in closed-form.
In 1913, Niels Bohr obtained the spectral frequencies of the hydrogen atom after making a number of simplifying assumptions. These assumptions, the cornerstones of the Bohr model, were not fully correct but did yield the correct energy answers. Bohr's results for the frequencies and underlying energy values were confirmed by the full quantum-mechanical analysis which uses the Schrödinger equation, as was shown in 1925/26. The solution to the Schrödinger equation for hydrogen is analytical. From this, the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines can be calculated. The solution of the Schrödinger equation goes much further than the Bohr model however, because it also yields the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.
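As a small illustration of that last point, the level energies E_n = −13.6 eV/n² immediately give the spectral-line wavelengths. The Python sketch below (an illustrative addition, not part of the original text, using standard constants) reproduces the visible Balmer lines:

```python
# Hydrogen level energies E_n = -13.6 eV / n^2 and photon wavelengths for
# transitions n_upper -> n_lower (the Balmer series has n_lower = 2).
RY_EV = 13.605693                 # Rydberg energy in eV
H_EV_S = 4.135667696e-15          # Planck constant in eV*s
C = 2.99792458e8                  # speed of light, m/s

def level(n):
    return -RY_EV / n**2          # eV

def wavelength_nm(n_up, n_low):
    dE = level(n_up) - level(n_low)
    return H_EV_S * C / dE * 1e9  # lambda = hc / (E_up - E_low)

for n in range(3, 7):             # H-alpha through H-delta
    print(f"{n} -> 2: {wavelength_nm(n, 2):6.1f} nm")   # ~656, 486, 434, 410 nm
```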
The Schrödinger equation also applies to more complicated atoms and molecules. However, in most such cases the solution is not analytical and either computer calculations are necessary or simplifying assumptions must be made. Solution of the Schrodinger equation for the hydrogen atom is provided below:
• Slides on the Solution of the Schrodinger equation for Hydrogen atom
In physics and chemistry, spin refers to a non-classical kind of angular momentum intrinsic to a body, as opposed to orbital angular momentum, which is the motion of its center of mass about an external point. A particle's spin is essentially the direction a particle turns along a given axis, which in turn can be used to determine the particle's magnetism.[1] Although this special property is only explained in the relativistic quantum mechanics of Paul Dirac, it plays a most important role already in non-relativistic quantum mechanics, e.g., it essentially determines the structure of atoms.
In classical mechanics, any spin angular momentum of a body is associated with self rotation, e.g., the rotation of the body around its own center of mass. For example, the spin of the Earth is associated with its daily rotation about the polar axis. On the other hand, the orbital angular momentum of the Earth is associated with its annual motion around the Sun.
In fact, in classical theories there is no analogue to the quantum mechanical property meant by the name spin. The concept of this nonclassical property of elementary particles was first proposed in 1925 by Ralph Kronig, George Uhlenbeck, and Samuel Goudsmit, but the name most closely associated with the phenomenon of spin in physics is Wolfgang Pauli.
Applications of Spin in nanoelectronics are given in the presentation slides below:
• The story of the two spins
• Cite this work
Researchers should cite this work as follows:
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Hydrogen Atom and Electron Spin,"
In This Series
1. Quantum Mechanics: Hydrogen Atom
09 Jul 2008 | Teaching Materials | Contributor(s): Dragica Vasileska, Gerhard Klimeck
2. Quantum Mechanics: The story of the electron spin
One of the most remarkable discoveries associated with quantum physics is the fact that elementary particles can possess non-zero spin. Elementary particles are particles that cannot be divided into any smaller units, such as the photon, the electron, and the various quarks. Theoretical and...
Frank Wilczek
Frank Anthony Wilczek (born May 15, 1951) is an American physicist and Nobel laureate (2004).
• The answer to the ancient question "Why is there something rather than nothing?" would then be that ‘nothing’ is unstable.
Longing for the Harmonies: Themes and Variations from Modern Physics (1987)
co-authored with Betsy Devine
• The bases of music are rhythm and harmony. Rhythm is ordered recurrence in time... As the planets move around the sun, they repeat their orbits periodically; thus there is already a primitive kind of rhythm in their motion. ...Harmony ...can be considered a special kind of rhythm. ...pure musical tones are produced when the vibrations are... periodic or... repeat themselves regularly in time. Two tones harmonize if their intervals of repetition are in rhythm—or, in mathematical language, if their periods are in proportion. Kepler... in the third book of Harmonice mundi... attempted to make other... related, connections between musical harmony and mathematical proportion.
Sodium visible light spectrum
• So let us listen to the light—what music do we hear? For one thing, we can elicit from each chemical element its own, unique chord. You may sometimes have noticed that a bright yellow flash is produced if ordinary table salt is sprinkled on a flame...a first bare hint of the subject of flame spectra... The fact that different elements emit light with different color characteristics is exploited by the makers of fireworks.
• If you pass the light from a sodium flash through a prism, you get a pattern very different from the familiar continuous rainbow that Newton elicited from natural sunlight. Instead of a continuous pattern, in which all gradations of pure color are apparently represented, the sodium flash generates a series of lines of light. ...In the musical analogy, sodium produces a chord where sunlight produced all possible tones—"white noise." Other elements produce other chords.
• To understand this radiation [ cosmic microwave background ], it is easier to begin thinking about the radiation from a very hot gas like that inside a neon light. The same neon... is, at room temperature, utterly transparent... The character of matter in general changes abruptly when it gets heated above 3,000 degrees or so. Below this temperature, matter is electrically neutral... At high temperatures in a neon light, the electrically charged pieces of atoms become unstuck. Frequent and violent collisions break down neutral atoms into electrons and unbalanced nuclei. Matter in this state is called plasma, and it radiates much of its collision energy in the form of light. ...a gas of neutral atoms (like air) is virtually transparent. The free [charged] nuclei and electrons of plasma, by contrast, couple to light's electromagnetic fields and absorb it very efficiently. ...You ...see light only from the borderline layer of neon between opaque plasma and transparent neutral atoms.
• Sam Treiman... has quoted something he called Treiman's theorem... Impossible things usually don't happen. ...With the discovery of radioactivity... it suddenly became apparent that the "impossible" was happening all the time. Uranium, thorium, radium... fit all the requirements of chemical elements. They could not be broken down by any of the standard methods... But occasionally... atoms of these elements spontaneously changed into other kinds of atoms. ...So what is left of the doctrine of the elements? Is alchemy reinstated? Not at all. The point is that the doctrine fails only under rare or special conditions. ...We can isolate the conditions in which they do, and retain a more restricted but still useful concept of the "impossible."
• As the idea of permanence of objects has faded, the idea of permanence of physical laws has become better established and more powerful.
• What is conserved, in modern physics, is not any particular substance or material but only much more abstract entities such as energy, momentum, and electric charge. The permanent aspects of reality are not particular materials or structures but rather the possible forms of structures and the rules for their transformation.
• What should be most significant to us are not physical artifacts, but the meaning they embody. ...whenever we create paintings, songs, poems, books, computer programs—or ideas in the minds of children—we do something of this sort.
• The most abstract conservation laws of physics come into their being in describing equilibrium in the most extreme conditions. They are the most rigorous conservation laws, the last to break down. The more extreme the conditions, the fewer the conserved structures... In a deep sense, we understand the interior of the sun better that the interior of the earth, and the early stages of the big bang best of all.
• Why can stars do better than the big bang? ...During the big bang, there were only a few minutes when nuclei could form. Very rare processes, or slow ones, played little role. A case in point is the key process from which the sun derives its energy. In this reaction, two protons collide to produce a deuterium nucleus, a neutrino, and a positron. ...This reaction belongs to the family of weak interactions. ...It remains... a remarkable—and for humanity, remarkably fortunate—circumstance that the central reaction that drives the sun is so rare. It is only this extraordinary rarity that allows the average proton in the sun to last so long, billions of years, even though it is colliding with other protons millions of times a second. entertaining example of Treiman's theorem.
• Once helium burning has occurred... the next possible reaction—carbon burning—is not necessarily slow... This reaction involves ...a strong as opposed to a weak interaction. ...Carbon burning results in magnesium. ...Taking a cross section of a highly evolved star would reveal a system of many layers. The inner layers have been subjected to the largest pressures, thereby forced to the highest temperatures, and burned the furthest; the outermost layers, by contrast, have not burned at all. Thus, as we proceed from outside in, there will be an outermost layer with the initial mix of hydrogen and helium, a layer of mostly helium, a layer of carbon, a layer of magnesium, and so on. ...So we arrive at the picture of a star, in the latest stages of its evolution... now composed of mostly carbon nuclei and other explosive material.
• It is delightful in itself when we are able to interpret features of the present as signs confirming our understanding of the past.
• In a sense, all of Earth glows in the dark. The energy release from natural radioactivity, the lingering fluorescence of stellar explosions, keeps Earth dynamic. It melts the core and keeps it flowing, and heats the crust and mantle, with consequences ranging from the generation of earth's magnetic field to earthquakes and the motion of continents. By contrast, Luna and Mars, because they are smaller, derive less energy from radioactive decays. Geologically, they are dead.
• The result will be points of quiescence—technically known as nodes—where the air's density varies not at all, and no sound is heard. Note the paradox here: either sphere alone creates a sound wave at this point; two spheres together add up to no sound there at all. Two sources can add up to give less than one. This is the essence of destructive interference. (When two sources are giving the same instruction, the resulting vibration bears not twice but four times the energy. This phenomenon, oxymoronically known as constructive interference, may seem puzzling.)
• In science... the ultimate judges are not experts but experiments.
• Particles that, like 4He, show constructive interference are said to be bosons—a shorthand term for "particles obeying Bose–Einstein statistics." …One way to recognize bosons is their tendency to imitate one another. ...the presence of one boson increases the chance that another of its identical siblings will also appear in the same spot. There's an attraction between them. We will speak ...of an attractive identity force drawing together identical bosons. Lasers are a spectacular example...
• When two identical 3He atoms collide... the interference is destructive. Particles that behave like 3He atoms are called fermions, short for "particles obeying Fermi–Dirac statistics." ...while bosons imitate one another... the "identity force" between fermions acts like a repulsion, and the probability of finding a fermion at some point in space is reduced if some of its identical siblings are nearby. ...It is the repulsive identity forces between electrons that support white dwarf stars... against their own gravity.
• There is a simple rule for composite objects, such as nuclei or atoms. The rule is that if such an object contains an odd number of fermions, the composite object is a fermion. Otherwise, it is a boson. ...this simple rule doesn't care at all about the number of bosons in the composite object.
Rough Graph of Strong, Weak & Electromagnetic Coupling Unification at High Energy or Close Distance of Interaction, based upon Wilczek's "Longing for the Harmonies"
• If grand unified theories are correct, we ought to be able to derive the relative power of the strong, weak, and electromagnetic interactions at accessible energies from their presumed equality at much higher energies. When this is attempted, a wonderful result emerges. ...In the form first calculated by Howard Georgi, Helen Quinn, and Steven Weinberg ...The couplings of strong-interaction gluons decrease, those of the [weak interaction] W bosons stay roughly constant, and those of the [electromagnetic interaction] photons increase at short distances [or high energies]—so they all tend to converge, as desired.
• In the table—and in nature—we find (leaving aside the antineutrino) fifteen fundamental fermions, with diverse strong, weak, and electromagnetic charges. ...They are so closely related by symmetry transformations that they are, so to speak, no more than different faces of the same cube.
• When we view nature stripped to essentials, so to speak, in this Unification Table, what we see is... a five bit register. ...It is in just this form that data is stored and manipulated within a digital computer. ...Every particle, then, can be specified by a five-bit word and stored in a five-bit register. It's eerie that even the odd restriction on how many minus signs are allowed is reminiscent of a trick used in computers, to detect errors in transmission. You see, if allowed words must have an odd number of minus signs, any single error in transmission of a word can be detected. ...our world might be an intricate program working itself out on a gigantic computing machine.
• Thinking along these lines will help prepare us for the day when we—or more likely, our distant descendents—will develop the machinery and cleverness to begin to program [create] worlds ourselves...
• It is quite easy to include a weight for empty space in the equations of gravity. Einstein did so in 1917, introducing what came to be known as the cosmological constant into his equations. His motivation was to construct a static model of the universe. To achieve this, he had to introduce a negative mass density for empty space, which just canceled the average positive density due to matter. With zero total density, gravitational forces can be in static equilibrium. Hubble's subsequent discovery of the expansion of the universe, of course, made Einstein's static model universe obsolete. ...The fact is that to this day we do not understand in a deep way why the vacuum doesn't weigh, or (to say the same thing in another way) why the cosmological constant vanishes, or (to say it in yet another way) why Einstein's greatest blunder was a mistake.
• The main problem with many nonscientific world models is the vigor with which they insist upon their rightness. Once a world model claims to be completely right, it is no longer open to any changes. ...Closed systems can be comforting, but they are limited. ...It's not the best we can do. Neither is extreme "open-mindednesss" that slides into "empty headedness"—the ideal that we can never really know anything.
• The whole idea of science is really to listen to nature, in her own language, as part of a continuing dialogue.
• We have heard that nature can sing some strange and unfamiliar songs. In coming to appreciate these songs, we develop a heightened perception... leavened by an admixture of our own creation...
The Lightness of Being – Mass, Ether and the Unification of Forces (2008)
• We evolved to be good at learning and using rules of thumb, not at searching for ultimate causes and making fine distinctions. Still less did we evolve to spin out long chains of calculation that connect fundamental laws to observable consequences. Computers are much better at it!
• Ch. 1, p. 8.
• For many centuries before modern science, and for the first two and a half centuries of modern science, the division of reality into matter and light seemed self-evident. ...As long as the separation between the massive and the massless persisted, a unified description of the physical world could not be achieved.
• Ch. 1, p. 9.
• An ordinary mistake is one that leads to a dead end, while a profound mistake is one that leads to progress. Anyone can make an ordinary mistake, but it takes a genius to make a profound mistake.
• Ch. 1, p. 12.
• Different ways of writing the same equation can suggest very different things, even if they are logically equivalent. ...In Einstein's original 1905 paper, you do not find the equation E = mc². What you find is m = E/c². ...the title of the paper is a question: "Does the Inertia of a Body Depend on Its Energy Content?" …If we can explain mass in terms of energy, we'll be improving our description of the world. We'll need fewer ingredients in our world-recipe.
• Ch. 3, p. 19.
• E = mc² really applies only to isolated bodies at rest. In general, when you have moving bodies, or interacting bodies, energy and mass aren't proportional. E = mc² simply doesn't apply. ...For moving bodies, the correct mass-energy equation is
E = mc² / √(1 − v²/c²),
where v is the velocity. For a body at rest (v=0), this becomes E = mc². ...we must consider the special case of particles with zero mass... examples include photons, color gluons, and gravitons. If we attempt to put m = 0 and v = c in our general mass-energy equation, both the numerator and denominator on the right-hand side vanish, and we get the nonsensical relation E = 0/0. The correct result is that the energy of a photon can take any value. ...The energy E of a photon is proportional to the frequency f of the light it represents. ...they are related by the Planck-Einstein-Schrödinger equation E = hf, where h is Planck's constant.
• Note: when the velocity v approaches the speed of light c, the denominator approaches 0, and thus E approaches infinity, unless m = 0.
• Ch. 3, p. 19 & Appendix A
• Knowing how to calculate something is not the same as understanding it. Having a computer to calculate the origin of mass for us may be convincing, but is not satisfying. Fortunately we can understand it too.
• Ch. 10, p. 128.
"Multiversality" (2013)[edit]
arXiv:1307.7376v1 [hep-ph] July 30, 2013
• Intelligent creatures [that] evolved to live deep within the atmosphere of a gas giant planet could be deluded, for eons, into thinking that the Universe is an approximately homogeneous expanse of gas, filling a three-dimensional space, but featuring anisotropic laws of motion (which we would ascribe to the planet’s gravitational field). Are we human scientists comparably blinkered?
• If the “universe” contains everything that exists, what can be outside it? If the answer is “Things that don’t exist”, then “multiverse” becomes an idea in the domain of psychology, not physics.
• The “Copernican Principle” or “Cosmic Mediocrity”... states, basically, that Earth does not occupy a privileged place in the universe. Universality asserts more, namely that there are no privileged places or times.
• In the past scientists have repeatedly reached “intellectual closure” on inadequate pictures of the universe, and underestimated its scale.
• A common habit of thought... is the idea that space is [a] simple receptacle in which bodies move around, with no two bodies present at the same point. ...In modern quantum physics generally, and in the standard model of fundamental physics in particular, physical space appears as a far more flexible framework. Many kinds of particles can be present at the same point in space at the same time. Indeed, the primary ingredients of the standard model are not particles at all, but an abundance of quantum fields, each a complex object in itself, and all omnipresent.
• The traditional “cosmological” Multiverse considers that there might be physical realms inaccessible to us due to their separation in space-time. The quantum Multiverse arises from entities that occupy the same space-time, but are distant in Hilbert space – or in the jargon, decoherent.
• The happy coincidences between life’s requirements and nature’s choices of parameter-values might be just a series of flukes, but one could be forgiven for beginning to suspect that something deeper is at work. That suspicion is the first deep root of anthropic reasoning.
• The phase transition paradigm: The standard model of fundamental physics incorporates, as one of its foundational principles, the idea that “empty space” or “vacuum” can exist in different phases, typically associated with different amounts of symmetry. Moreover, the laws of the standard model itself suggest that phase transitions will occur, as functions of temperature. Extensions of the standard model to build in higher symmetry (gauge unification or especially supersymmetry) can support effective vacua with radically different properties, separated by great distance or by domain walls. That would be a form of failure of universality, in our sense, whose existence is suggested by the standard model.
• In most theoretical embodiments of inflationary cosmology, the currently observed universe appears as a small part of a much larger multiverse. In this framework, [regularities observed] to hold throughout the universe need not hold through all space. They can be accidents of our local geography, so to speak. If that is so, then it is valid – indeed, necessary – to consider selection effects. It may be that some of the “fundamental constants”, in particular, cannot be determined by theoretical reasoning, even in principle, because they really are different elsewhere.
• To put it crudely, theorists can be tempted to think along the lines “If people as clever as us haven’t explained it, that’s because it can’t be explained – it’s just an accident.” I believe there are at least two important regularities among standard model parameters that do have deeper explanations, namely the unification of couplings and the smallness of the QCD θ parameter. There may well be others.
Monday, January 30, 2012
LHC Multilepton Search Unimpressive So Far
They looked at events with at least four leptons (electrons and muons) and some missing transverse momentum. In the inclusive selection, they observed 4 events while the expectation was 1.7 ± 0.9 events; that's roughly a 2.5-sigma excess, although one should include the non-Gaussianity of the distribution to get a more accurate figure. When the Z veto was imposed, they got no events while the expectation was 0.7 ± 0.8 events, which is OK within the 1-sigma error. So only the veto-free figure is intriguing.
From here.
Material deviations from the Standard Model, of course, tend to support Beyond the Standard Model theories. And, greater than expected numbers of multilepton events, in particular, tend to support Supersymmetry (i.e. SUSY).
Statistical Issues
But, of course, as Motl acknowledges by remarking that the distribution is non-Gaussian, estimating a statistical significance for a discrete variable with a low value, like the predicted number of multilepton events at an experiment at the Large Hadron Collider, as if it were a continuous variable, is inaccurate. The probability of a number of events less than zero is zero, which makes a two-sided set of error bars problematic. There are specific probabilities for zero, one, two, three, four, five, etc. events under the Standard Model, and you have to compare a single data point to those probabilities.
Doing the statistical significance calculations correctly is going to reduce the significance of finding 4 multilepton events: the modal result would be about 2 multilepton events, 1 or 3 events would be quite routine, and neither 0 nor 4 events is wildly improbable.
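A minimal sketch of that Poisson counting argument (it ignores the ±0.9 uncertainty on the expectation and any look-elsewhere effect, both of which would dilute the significance further):

```python
from math import exp, factorial

def poisson_p_at_least(n_obs, mu):
    """P(N >= n_obs) for a Poisson-distributed count with mean mu."""
    return 1.0 - sum(exp(-mu) * mu**n / factorial(n) for n in range(n_obs))

# Inclusive selection: 4 events observed against a Standard Model expectation of 1.7.
p = poisson_p_at_least(4, 1.7)
print(f"P(N >= 4 | mu = 1.7) ~ {p:.3f}")   # roughly 0.09, i.e. well below 2 sigma one-sided
```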
With a Z veto, which is designed to filter out false positives, you have a barely modal expectation of one event, with zero events and two events being quite likely (and zero being quite a bit more likely than two events).
Of course, when you are looking at two separate measurements that have some independence from each other, the probability that one or the other will be a given amount more than the expected value is greater, also reducing their statistical significance. You expect one 2 sigma event for every twenty trials and the more trials you do, the less notable a 2 sigma or 2.5 sigma event viewed in isolation becomes when the total context of experimental outcomes is considered.
Put another way, the naive 2.5-sigma result is probably very close to, and possibly below, a 2-sigma result if the statistics are done right.
Implications For SUSY Parameter Space
Of course, since there are all sorts of SUSY theories, there is no one SUSY prediction for the number of multilepton events, although the LHC results, like those before it, imply that any deviation from the Standard Model expectation in SUSY, if it exists, must be quite subtle.
To be just a bit more refined, I have never heard anyone claim that SUSY theories, for a given expected value of a result, have a materially different standard deviation from that expected value. The expected values are different, but not the variability. Therefore, if you are testing two hypotheses, one that the Standard Model is correct, and the other that a particular SUSY model is correct, SUSY models that predict, for example, six or more multilepton events (rather than the observed four), or five or more Z-vetoed multilepton events (rather than the observed zero), are strongly disfavored by the LHC results. This is potentially a pretty meaningful and precise constraint on the parameter space of SUSY models.
This is not the only constraint on SUSY parameter space. The apparent Higgs boson mass of about 125 GeV imposes real constraints, as do exclusion searches setting minimum masses of the lightest supersymmetric particle, which are fast approaching the TeV mass level. We are entering the era where fine tuning is increasingly necessary to fit SUSY theories to the data, and by implication, to fit string theory vacua to the data. The LHC's experimental data won't have enough statistical power to entirely rule out SUSY even if every single one of its results is negative for its entire run, but it will impose ever tighter constraints on SUSY parameter space that tie the hands of string theorists trying to fit theories to the observed data.
The simplest versions of SUSY are already ruled out, but human ingenuity's capacity to come up with more clever versions that fit new constraints abounds.
Sunday, January 29, 2012
The M Theory Conspiracy
I've heard less plausible sociology of science arguments.
The Strategic Aspects Of A Research Agenda
From here.
Altaic Populations Linked Via Uniparental DNA To Native Americans
Source: Matthew C. Dulik, Sergey I. Zhadanov, Ludmila P. Osipova, Ayken Askapuli, Lydia Gau, Omer Gokcumen, Samara Rubinstein, and Theodore G. Schurr, "Mitochondrial DNA and Y Chromosome Variation Provides Evidence for a Recent Common Ancestry between Native Americans and Indigenous Altaians", The American Journal of Human Genetics, 26 January 2012 doi:10.1016/j.ajhg.2011.12.014 via Razib Khan at Gene Expression who provides one of the key tables (breaks added in closed access paper abstract above for ease of reading).
The abstract is about as clear as mud in the context of prior research in this area, because when one says "that southern Altaians and Native Americans share a recent common ancestor", it isn't at all clear which Native Americans you are discussing, and methodological issues make the genetic conclusions seen in isolation unconvincing without a rich context to support them.
Wikipedia summarizes the state of the academic literature on the Y-DNA of indigenous Americans prior to the latest paper as follows (citations and some headings and portions unrelated to Y-DNA omitted, emphasis added):
Human settlement of the New World occurred in stages from the Bering sea coast line, with an initial layover on Beringia for the small founding population. The micro-satellite diversity and distributions of the Y lineage specific to South America indicates that certain Amerindian populations have been isolated since the initial colonization of the region. The Na-Dené, Inuit and Indigenous Alaskan populations exhibit haplogroup Q (Y-DNA); however, they are distinct from other indigenous Amerindians with various mtDNA and atDNA mutations. This suggests that the peoples who first settled the northern extremes of North America and Greenland derived from later migrant populations than those who penetrated further south in the Americas. Linguists and biologists have reached a similar conclusion based on analysis of Amerindian language groups and ABO blood group system distributions. . . .
Haplogroup Q . . .
Q-M242 (mutational name) is the defining (SNP) of Haplogroup Q (Y-DNA) (phylogenetic name). Within the Q clade, there are 14 haplogroups marked by 17 SNPs. In Eurasia haplogroup Q is found among Siberian populations, such as the modern Chukchi and Koryak peoples. In particular, two populations exhibit large concentrations of the Q-M242 mutation, the Kets (93.8%) and the Selkups (66.4%). The Kets are thought to be the only survivors of ancient nomads living in Siberia. Their population size is very small; there are fewer than 1,500 Kets in Russia. The Selkups have a slightly larger population size than the Kets, with approximately 4,250 individuals (2002). Starting in the Paleo-Indian period, a migration to the Americas across the Bering Strait (Beringia) by a small population carrying the Q-M242 mutation took place. A member of this initial population underwent a mutation, which defines its descendant population, known by the Q-M3 (SNP) mutation. These descendants migrated all over the Americas.
Q subclades Q1a3a and Q1a3a1a . . . .
Haplogroup Q1a3a (Y-DNA) and/or Q-M3 is defined by the presence of the rs3894 (M3) (SNP). The Q-M3 mutation is roughly 15,000 years old as the initial migration of Paleo-Indians into the Americas occurred. Q-M3 is the predominant haplotype in the Americas at a rate of 83% in South American populations, 50% in the Na-Dené populations, and in North American Eskimo-Aleut populations at about 46%. With minimal back-migration of Q-M3 in Eurasia, the mutation likely evolved in east-Beringia, or more specifically the Seward Peninsula or western Alaskan interior. The Beringia land mass began submerging, cutting off land routes.
Since the discovery of Q-M3, several subclades of M3-bearing populations have been discovered. An example is in South America, where some populations have a high prevalence of (SNP) M19 which defines subclade Q1a3a1a. M19 has been detected in (59%) of Amazonian Ticuna men and in (10%) of Wayuu men. Subclade M19 appears to be unique to South American Indigenous peoples, arising 5,000 to 10,000 years ago. This suggests that population isolation and perhaps even the establishment of tribal groups began soon after migration into the South American areas.
Haplogroup R1 . . .
Haplogroup R1 (Y-DNA) is the second most predominant Y haplotype found among indigenous Amerindians after Q (Y-DNA). The distribution of R1 is believed to be associated with the re-settlement of Eurasia following the last glacial maximum. One theory put forth is that it entered the Americas with the initial founding population. A second theory is that it was introduced during European colonization. R1 is very common throughout all of Eurasia except East Asia and Southeast Asia. R1 (M137) is found predominantly in North American groups like the Ojibwe (79%), Chipewyan (62%), Seminole (50%), Cherokee (47%), Dogrib (40%) and Papago (38%). The principal-component analysis suggests a close genetic relatedness between some North American Amerindians (the Chipewyan and the Cheyenne) and certain populations of central/southern Siberia (particularly the Kets, Yakuts, Selkups, and Altays), at the resolution of major Y-chromosome haplogroups. This pattern agrees with the distribution of mtDNA haplogroup X, which is found in North America, is absent from eastern Siberia, but is present in the Altais of southern central Siberia.
Haplogroup C3b . . .
Haplogroup C3 (M217, P44) is mainly found in indigenous Siberians, Mongolians and Oceanic populations. Haplogroup C3 is the most widespread and frequently occurring branch of the greater (Y-DNA) haplogroup C. Haplogroup C3 descendant C3b (P39) is commonly found in today's Na-Dené speakers with the highest frequency found among the Athabaskans at 42%. This distinct and isolated branch C3b (P39) includes almost all the Haplogroup C3 Y-chromosomes found among all indigenous peoples of the Americas. The Na-Dené groups are also unusual among indigenous peoples of the Americas in having a relatively high frequency of Q-M242 (25%). This indicates that the Na-Dené migration occurred from the Russian Far East after the initial Paleo-Indian colonization, but prior to modern Inuit, Inupiat and Yupik expansions.
Analysis of Dating
There are a couple of different methodologies for mutation dating. Pedigree mutation rates support a most recent common ancestor between Native American and South Altaic populations ca. 5,170 to 12,760 years ago (95% confidence interval, median 7,740 years ago), and a most recent common ancestor between North Altaic and South Altaic populations that is only a bit more recent (a 95% confidence interval 3,000 to 11,100 years ago, median 5,490 years ago). Evolutionary mutation rates support a most recent common ancestor of North Altaic and South Altaic of a median 21,890 years ago, and a statistically indistinguishable most recent common ancestor of Native Americans and South Altaic of a median 21,960 years ago.
Of course, all of the interesting inferences from the dates flow from the method you use and its accuracy. The evolutionary mutation rates suggest a split just before the Last Glacial Maximum and are suggestive of a scenario in which part of a population retreats to the South from the glaciers, while the other seeks a refuge in Beringia.
The pedigree date (which usually lines up better with historical correlates that make sense) would be a decent fit for a secondary Na-Dene migration wave that comes after the original peopling of the New World by Native Americans, but before the more recent circumpolar population migrations in and after the Bronze Age. The pedigree date also makes the possibility of an authentic Na-Dene to Yeniseian linguistic link far more plausible than the older evolutionary date does. Linguistic connections ought to be impossible to see at a time depth of 21,000 years, but a link less than a third as old is conceivable.
The "median split time" using pedigree dates is 4,490 years ago for N. Altaians v. S. Altaians, and 4,950 years ago for Native Americans v. S. Altaians, which would coincide with Uralic language expansion and the first of three major waves of Paleoeskimo migration to the New World. The evolutionary dates give a 19,260 years ago "median split time" for N. v. S. Altaians, and 13,420 years ago for Native American v. S. Altaians, an order reversal from all the other dates apparently driven by a wide and old date biased confidence interval. The very old genetic dates don't make a lot of sense, however, given that megafauna extinction dates in Siberia suggest a modern human arrival there ca. 30,000 years ago, plus or minus.
The indication that the northern Altaians are less strongly genetically connected to Native Americans than the southern Altaians are is quite surprising. Linguistically and culturally, the northern Altaic populations would seem closer, but those are traits that are more susceptible to change over time, and the southern populations may have faced stronger cultural pressures than the more remote and isolated northern populations.
The reasoning here matters a great deal, of course, and with a closed-access paper it isn't easy to evaluate. The abstract seems to indicate that the linkages are based on the phylogeny of non-recombining Y-DNA haplogroup Q (the dominant one in Native Americans) without necessarily relying much on the mtDNA part of the analysis. In particular, it isn't easy to tell from the abstract how susceptible the data are to a multiple-wave, as opposed to a single-wave, migration model.
There are really no sensible models for the arrival of modern humans in the New World that can fit a split between Native Americans and Siberian populations any later than 13,000-14,000 years ago for the predominant haplogroups of Y-DNA haplogroup Q found in South America (Q-M3 and its descendants). But a link at a later date for a secondary subtype of Y-DNA haplogroup Q that is found almost exclusively in North America is quite possible to fit consistently with plausible population models. Evolutionary mutation rate dates would strongly disfavor this scenario, but pedigree mutation rate dates could comfortably accommodate it.
Thursday, January 26, 2012
Deep Thoughts From Marco Frasca
Italian physicist Marco Frasca's latest paper leaves his usual comfort zone of QCD and instead discusses, from first principles, a deep and fundamental relationship between the Schrödinger equation (which describes how the quantum state of a physical system changes in time) and the equations of the observable stochastic (i.e. probabilistic) processes of Brownian motion (which describe the dispersion of particles that flows from their random movements). He does so via some mathematical leaps of insight that have eluded some of the best minds in math and physics for almost nine decades (counting from the publication of the Schrödinger equation, which is younger than the equations that describe Brownian motion). Essentially, the Schrödinger equation is the square root of the equation that describes Brownian motion, when the equations are properly formulated and the square root is defined in a clever way that gives rise to a complex number valued solution.
[T]he square root of Brownian fluctuations of space are responsible for the peculiar behavior observed at quantum level. This kind of stochastic process is a square root of a Brownian motion that boils down to the product of two stochastic processes: A Wiener process [i.e. "the scaling limit of a random walk"] and a Bernoulli process proper to a tossing of a coin. This aspect could be relevant for quantum gravity studies where emergent space-time could be understood once this behavior will be identified in the current scenarios.
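For orientation only (this is the standard textbook resemblance between the two equations, not Frasca's actual "square root" construction), the free Schrödinger equation already looks formally like a diffusion equation with an imaginary diffusion coefficient:

\[
\frac{\partial P}{\partial t} = D\,\frac{\partial^2 P}{\partial x^2}
\quad\text{(diffusion / Brownian motion)},
\qquad
\frac{\partial \psi}{\partial t} = \frac{i\hbar}{2m}\,\frac{\partial^2 \psi}{\partial x^2}
\quad\text{(free Schrödinger equation)}.
\]

Frasca's claim is stronger than this formal substitution of an imaginary diffusion constant: the quantum evolution is to be derived as the "square root" of an underlying stochastic process, the product of a Wiener process and a coin-tossing Bernoulli process, as the quote above describes.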
Note, however, that there is at least one obvious gap that has to be bridged between this conclusion and a rigorous theory of quantum gravity, since the Schrödinger equation is a non-relativistic effective approximation of quantum mechanics.
[T]he solutions to the Schrödinger equation are . . . not Lorentz invariant . . . [and] not consistent with special relativity. . . . Also . . . the Schrödinger equation was constructed from classical energy conservation rather than the relativistic mass–energy relation. . . . Secondly, the equation requires the particles to be the same type, and the number of particles in the system to be constant, since their masses are constants in the equation (kinetic energy terms). This alone means the Schrödinger equation is not compatible with relativity . . . [since quantum mechanics] allows (in high-energy processes) particles of matter to completely transform into energy by particle-antiparticle annihilation, and enough energy can re-create other particle-antiparticle pairs. So the number of particles and types of particles is not necessarily fixed. For all other intrinsic properties of the particles which may enter the potential function, including mass (such as the harmonic oscillator) and charge (such as electrons in atoms), which will also be constants in the equation, the same problem follows.
In order to extend Schrödinger's formalism to include relativity, the physical picture must be transformed. The Klein–Gordon equation and the Dirac equation [which provides a description of elementary spin-½ particles, such as electrons, consistent with both the principles of quantum mechanics and the theory of special relativity] are built from the relativistic mass–energy relation; so as a result these equations are relativistically invariant, and replace the Schrödinger equation in relativistic quantum mechanics. In an attempt to extend the scope of these equations further, other relativistic wave equations have been developed. By no means is the Schrödinger equation obsolete: it is still in use for both teaching and research - particularly in physical and quantum chemistry to understand the properties of atoms and molecules - but it is understood to be an approximation to their real behaviour, for speeds much less than that of light.
Presumably, however, it would be possible to square the relativistic counterparts of the Schrödinger equation (such as the Dirac equation) in an analogous manner to derive diffusion equations for Brownian motion in relativistic settings that are consistent with special relativity, while still illustrating how the complex-valued character of quantum mechanics can reflect an emergent and fluctuating underlying nature of space-time.
As a footnote, it is also worth noting that this paper was made possible only by a collaboration with elite mathematicians that would have been much more difficult to arrange without the crowdsourcing of part of the problem that the Internet made possible.
Monroe, Louisiana As The Source Of New World Civilization
A series of articles in Science magazine a month ago (cited below) has transformed my understanding of human civilization in the Pre-Columbian New World. Together with some secondary sources like Wikipedia, the World Almanac 2012, Jared Diamond's "Guns, Germs and Steel", and a few other websites, the implication is that the history of civilization in the New World is far more united in space and over millennia, far greater in scale, and far more sophisticated than I had believed as recently as two months ago.
Here, in a nutshell, is the conjectural narrative, which will probably take multiple lengthier posts to explore in the future.
Several sites in the general vicinity of Monroe, Louisiana show the emergence of the earliest large scale mounds and earth platforms, which were part of projects on a scale comparable to the Sumerian ziggurats, Egyptian pyramids, Mesoamerican pyramids, and megalithic complexes of Atlantic Europe, in the period from about 3700 BCE to 2700 BCE. These mounds appear to have provided their builders with a means to more easily endure the periodic flooding of the Mississippi River. They do not show signs of large trade networks, and while they may show indications of transitional proto-agriculture, they do not show signs of a full-fledged food production system based on domesticated plants and animals as the primary basis of the diet.
A little more than a thousand years later (flourishing ca. 1600 BCE to 1000 BCE), however, a civilization that appears to be derived from this first wave of mound builders appears at Poverty Point, which is within a day's walk of the earlier sites in Louisiana. This urban center is much larger in scale, perhaps comparable to a medium-sized archaic-era Greek city-state, and shows clear signs of a trade network that extends as far as Milwaukee, Wisconsin to the north and the Ozarks to the west. It used copper and engaged in fine stoneworking. Its trade network may have extended farther still. The way that its structures are aligned with solstices and equinoxes, its burial practices, its pottery, and the arrangement of structures in the complex appear to strongly echo, and to probably be antecedent to, the Mesoamerican civilizations of the Olmecs (from ca. 1200 BCE) and the Mayans (from ca. 900 BCE), and the Hopewell woodland culture of Ohio (from ca. 400 BCE).
The Inca civilization, as a well-organized, technological, large-scale civilization, as opposed to merely a group of people engaged in disconnected hamlet-scale agriculture, fishing, hunting and gathering, appears to derive largely from Mesoamerica. For example, pottery appears in Ecuador ca. 3200 BCE, but does not appear in Peru until around 1800 BCE, and the earliest large-scale states emerge in the Inca region ca. 0 CE, about a millennium after the Olmecs and the Mayans. At the point of Columbian contact, there were regular trade and communication relationships between the Aztecs, who had consolidated political control of Mesoamerica in a civilization clearly derivative of the Olmec and Mayan civilizations, and the Inca civilization.
The timing and broad outlines of the way that their communities were planned also suggest that a couple of large-scale village network societies in the Amazon, ca. 0 CE to 1650 CE, may have been influenced or informed to some extent by the Poverty Point culture, or by Mesoamerican societies that were influenced at a formative point by the Poverty Point culture.
From the Hopewell woodland culture emerged, around 1000 CE, a sprawling urbanized region called Cahokia in the vicinity of Saint Louis, Missouri, extending about a day's walk across one of the main confluences of the Mississippi River basin. This civilization took a major hit in the 1160s and 1170s during a major New World drought, and eventually collapsed as an urban complex around 1350 CE, around the time of the Little Ice Age. Much diminished remnants of this civilization persisted in pockets throughout its prior range up until the point of European contact, at which point European diseases dealt a further blow to what was left of it.
Cahokians worked copper, produced fine stonework, and constructed gigantic earthworks with invisible interior elements (layers of black earth, white gravel and red earth inside the mounds, corresponding more or less to the layers of hell, Earth and heaven in their cosmology) on a scale comparable to the Great Pyramid at Giza or the largest Mesoamerican pyramids, although no traces of a written language have been uncovered at this point.
The central complex may have housed 10,000 to 20,000 people, and the larger area may have housed 75,000 people, making the complex a bit larger than the largest urbanized complexes of the Amazon (about 50,000 people), and placing it in the top ten of Mesoamerican cities at their Pre-Columbian peak (the largest urban area in the Pre-Columbian New World, in the vicinity of what is now Mexico City, had about 300,000 people). It was by far the largest urbanized area in what is now the United States and Canada.
The Mississippian culture, of which Cahokia was a focal point, engaged in maize and pumpkin farming, as well as the farming of a few domesticates or semi-domesticates later abandoned as food sources, although its people may have significantly supplemented their food supply with hunting, gathering and fishing. At one major feast whose remnants were unearthed by archaeologists at Cahokia, those present dined on about 9,000 deer.
Cahokia's trading network, colonies and strong cultural influences extended throughout the entire Mississippi basin, from the Rocky Mountains to the Great Lakes to the Appalachian Mountains, and also throughout all or most of the American South, where Cahokia's culture overlaps heavily with the Southeastern Ceremonial Complex. For example, trade brought Cahokia great white shark teeth from the Atlantic and minerals from Georgia and Alabama. The mythology and rituals of the Osage Indians correspond closely to the Cahokian ceremonial system that we know from archaeology, and the Siouan languages, of which the Osage language is a part, were spoken ca. 1500 CE in a linguistic area that corresponds closely to the core Cahokian, a.k.a. Mississippian, cultural area. Their "national sport," a game a bit like bocce in which a player threw a ring or disk and then tried to throw a spear as close to that point as possible, attracted large crowds of spectators, was as popular among ordinary people as softball or soccer, and was the subject of high-stakes gambling.
Thus, as recently as two or three hundred years before Columbus arrived in the New World, almost all of the United States east of the continental divide, west of the Appalachians and south of the Great Lakes, together with almost all of the American South, was part of a reasonably united cultural complex that had its most direct common origins (admittedly with a couple of intervening "dark ages") in the vicinity of Monroe, Louisiana around 3700 BCE.
It may not have been one centralized megastate, but it could fairly be compared to the kind of balkanized area with a shared culture found in Europe or on the Indian subcontinent. At a time depth of something on the order of 1600 BCE to 900 BCE, the Poverty Point culture, heir to the earlier mound builders in the Monroe, Louisiana area, was probably one of the formative cultural contributors (together with important local innovations, particularly the addition of domesticated plants to the mix) to the earliest sophisticated civilizations of Mesoamerica, and those civilizations, in turn, were probably formative cultural contributors to the civilizations of South America in the greater Inca geographic region and in the Amazon. The successor to the Poverty Point culture in North America, the Mississippian culture centered at Cahokia, reinvigorated that cultural tradition; it may have bloomed via a combination of improved climate conditions, the development of a variety of maize that thrived in the North American climate (derived from the Mesoamerican version, which was domesticated somewhere in the vicinity of the Pacific Coast of modern Mexico), and high-profile astronomical events like Halley's Comet and a major supernova.
It is also hardly a stretch to suppose that the Uto-Aztecan language speaking populations of Northern Mexico and the American Southwest (including the Utes of Colorado) and the Anasazi (whose civilization collapsed in the megadroughts of the 1160s and 1170s) probably have their origins in the Aztec civilization of Mesoamerica, which may in turn have a deep time depth connection to Poverty Point, Louisiana.
There is a solid argument, supported by strongly suggestive evidence, that directly or indirectly, almost all of the civilized cultures in the Americas trace their roots to a significant extent to an ancient civilization of mound builders ca. 3700 BCE in the vicinity of Monroe, Louisiana.
On the other hand, we also know that the Apache and Navajo Indian tribes of the American Southwest are derived from the Na-Dene people of the Pacific Northwest and arrived in the American Southwest as a result of a migration around 1000 CE.
General Implications
This superculture, spanning five millennia and providing a source that had dramatic cultural influences on large swaths of both North America and South America, probably did not extend quite everywhere in the New World. The indigenous cultures to the west of the North American continental divide and to the north of the Great Lakes, and possibly also some in the American Northeast, parts of Florida, and "uncivilized" parts of South America, were probably not a part of this superculture.
This also means that the vast majority of people in the New World, at the time of European contact, either were part of a Chalcolithic culture or had ancestors within the last few hundred years who had been, even if their own society had reverted to a hunter-gatherer mode.
The Viking presence in the New World was contemporaneous with the high point of the Mississippian culture (ca. 1000 CE to 1350 CE), which may explain both why the Vinlanders could not dominate the locals and gain sweeping control of North America the way the Iberians of half a millennium later would farther south, and why this small Viking colony collapsed at about the same time that Cahokia did, for basically the same Little Ice Age climate-driven reasons.
This archaeological background also suggests that, in addition to the "Guns, Germs and Steel" that Jared Diamond notes, a critical advantage that the Europeans arriving in the New World had, at least in North America, was timing. They encountered the indigenous North Americans not at their glorious peak of ca. 1000 CE, but two or three centuries into an era of decline, comparable perhaps to the period from 1200 BCE to 900 BCE following the Bronze Age collapse, or to the period from 476 CE to 776 CE that we call the "dark ages" following the collapse of the Roman Empire. Indigenous North American civilization had just about hit bottom, and had not had time to meaningfully recover, when it was hit anew, first by devastating European diseases, and then, close on the heels of that devastating series of plagues, by a population with guns, swords, a military history that surpassed that of the Native Americans (including the experience of fighting distant foreign wars in the Crusades), a written language, horses and other domesticated animals, long distance sea travel, and a somewhat more effective social organization.
If the North American population had had a few hundred more years to reignite its civilization (and probably to adopt the written language of the Mesoamericans in some form), it might have been far better able to hold its own, perhaps even more effectively than the Aztecs and Incas did. Yes, it was behind in a relentless march of progress and faced limitations in its domesticated plant and animal options that the European population it encountered did not. But the development and dissemination of a flood of new evidence in the couple of decades since Diamond wrote his book suggest that the lag was closer to hundreds of years than to the several millennia he suggested.
This narrative of the emergence of New World civilization is profoundly more unified and cohesive in time and in space than was previously known. It helps to explain the mystery of why there were so few, and such geographically expansive, language families in North America, where large scale societies had not previously been known to exist (the advanced Inca and Aztec societies, and the prior existence of the Mayans and Olmecs, long ago made the modest number of languages in the geographically smaller areas of Mesoamerica and Pacific South America explainable). It also provides suggestive evidence regarding what kinds of linguistic relationships between known North American language families might exist at what time depths, so that linguists can know where they should expect the most fruitful places to look for genetic linguistic connections between known North American language families to be. And this narrative suggests that the process of linguistic consolidation in North America may be more similar to that seen in the Old World, and at much shallower time depth, than we have previously believed.
The existence of more advanced civilizations than previously known in the Amazon also helps explain why such a seemingly hunter-gatherer dominated, population fragmenting jungle could possibly have any language families with as much geographic extent as the ones we observe (although, of course, vast numbers of South American languages are unclassified isolates or micro-language families), and it gives us a relatively recent event (linguistically speaking) to explain why their connections can have a relatively shallow time depth. Again, this supports the conclusion that linguistic unity really does flow from the same expanding-society-with-a-technological-edge process we've seen in the Old World, rather than following some different rule, which strengthens the inference that any unexplained large language family is the product of a lost prehistoric culture that will eventually be discovered.
Implications For Population Genetics
Finally, before I make a final conjecture, this unified narrative has implications for efforts to cast light on prehistoric Native American populations from modern population genetic data. The assumption that a person with Native American genes is representative of a stable genetic population that lived at the place where his or her 16th century Native American ancestors are known to have lived for tens of millennia prior to that point in time is manifestly contrary to what our emerging understanding of the archaeological evidence reveals. We know that there were dramatic ebbs and flows of archaeological cultures in particular regions, at least for the past six thousand years or so, that were driven by more than random chance factors governing individual hunter-gatherer tribes in an unstructured way. These cultures were large in extent, wide in geographic distribution, and engaged in some documented folk wanderings supported by archaeological, oral historical and early explorer evidence, and we now have some generalized context within which to know in what direction any Pre-Columbian migrations would have influenced 16th century population genetics, if the cultural impacts of the known archaeological cultures had a demic component.
Now, as a matter of practicality, the small founding population of the New World, the limited demic impact of the known later waves of migration from Asia in most of the New World, and the serial founder effects applicable even to broad geographic regions (which have been a partial cause of genetic distinctions between Latin American indigenous peoples and certain groups of North American indigenous peoples) mean that huge swaths of North American Indians in the geographic range of the Mississippian superculture and its antecedents may have been so genetically homogeneous, after seven thousand or so years in the region as nomadic hunter-gatherers all derived from the same small founder population, that any subsequent impacts of demic migration and/or replacement may be virtually invisible at all but the most fine grained levels in modern genetic data.
Also, the existence of the Mississippian superculture, with its known ups and downs, materially alters the kinds of demographic models that are a plausible fit to reality for North American Indians. The most plausible demographic model given current evidence, for the post-Clovis, pre-Mound Builder United States, runs roughly as follows. There was a rapid early expansion, of perhaps three thousand years or less, from a small Beringian founding population that filled the continent at a low hunter-gatherer population density. There was probably a peak as more effective Clovis hunting methods expanded human populations at the expense of prey populations over a thousand years or so. There was probably a human population crash over a few centuries immediately after the Clovis era, when overhunting and the ecological collapse related to it reduced the carrying capacity of the environment under the Clovis culture "business model," and the population didn't stabilize until the surviving Native Americans found a new way to survive in harmony with their megafauna-free environment. A quite low baseline effective population of pure hunter-gatherers (which would be mutationally limited due to its small size) then ebbed and flowed meaningfully with medium and long term climate conditions and prey population health for about seven thousand years, providing many occasions for minor bottlenecks that could shed low frequency genetic mutations. There was then a population expansion attributable to Poverty Point from ca. 1600 BCE to 1000 BCE, followed by some degree of population decline, followed by gradually rebuilding populations until a much more dramatic population expansion ca. 1000 CE to 1160 CE, followed by a population crash across the New World at that point, followed by gradual recovery or gradual slump until about 1350 CE, followed by a Little Ice Age and civilization-collapse population slump that was only starting to recover at the point of European contact. At that point there is another well documented slump and a massive episode of intra-Native American and Native American-European admixture, for which historical documents and modern population genetics provide solid estimates of population sizes at given points in time, the impact of deadly diseases, and admixture percentages. Both the Poverty Point era and the Cahokian era provide particularly likely contexts for unusually high admixture, migration and replacement events. We can produce similar big picture, moderately detailed, archaeologically driven demographic histories using the latest available discoveries in Mesoamerica and in differing parts of South America.
This is obviously a much more complex demographic history than one could produce with the simple back-of-the-napkin exponential approximations very often used in actual published papers on prehistoric population genetics. But now that we know quite a bit about what actually happened, oversimplifying that demographic history when we try to extrapolate modern population genetic data to prehistory, with implicit assumptions we know to be untrue, is inexcusable if we want to have the best possible evidence regarding the population genetics of the Americas in prehistory.
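As a concrete illustration of the alternative, here is a minimal Python sketch of the kind of piecewise demographic history described above, of the sort that could be fed into a coalescent simulator in place of a single exponential curve. Every epoch boundary and relative population size below is an illustrative placeholder of my own, loosely keyed to the narrative above, not an estimate from any published model.

    # Minimal sketch of a piecewise-constant effective population size history.
    # The epochs and sizes are illustrative assumptions only.

    EPOCHS = [  # (start_years_ago, end_years_ago, relative effective population size)
        (16000, 13000, 1.0),   # rapid post-Beringian expansion at low density
        (13000, 12000, 3.0),   # Clovis-era peak
        (12000, 11500, 0.8),   # post-Clovis crash after megafauna collapse
        (11500, 3600, 1.5),    # long hunter-gatherer baseline with climate-driven ebbs
        (3600, 3000, 4.0),     # Poverty Point era expansion
        (3000, 1000, 2.5),     # slow rebuilding
        (1000, 850, 10.0),     # Cahokia-era boom
        (850, 500, 4.0),       # drought and Little Ice Age decline
    ]

    def relative_population_size(years_ago: float) -> float:
        """Return the assumed relative effective population size at a given date."""
        for start, end, size in EPOCHS:
            if end <= years_ago < start:
                return size
        return 1.0  # outside the modeled window, fall back to the baseline

    if __name__ == "__main__":
        for t in (15000, 12500, 11800, 5000, 3300, 900, 600):
            print(f"{t:>6} years ago: relative Ne = {relative_population_size(t)}")

Even a crude stepwise trajectory like this encodes the booms, busts and bottlenecks discussed above, which a single exponential growth curve cannot.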
Did Asian Ideas Help Trigger New World Civilization?
While none of my sources mention the possibility, I also offer up one conjecture, which I myself don't actually necessarily believe is more likely than not, but which is, given the timing of the events in question, a far more plausible possibility than would have been at all supportable a couple of decades ago.
This is the possibility that some of the rise in New World civilization that started to emerge at Poverty Point could have been given a critical boost from exposure to Asian ideas.
The case for a cultural influence from Leif Erikson's visit of ca. 1000 CE on the culture centered around Cahokia is still devoid of evidence, and is even weaker in plausibility, since there don't seem to be any recognizable similarities in the kinds of ideas or cultural features that could have been transmitted. It is possible in principle that an idea could have been passed from person to person from Vinland to Cahokia, and there would even have been established trade routes in the Great Lakes basin and the Mississippi River basin, extending to the Saint Lawrence seaway and Upstate New York, to carry those ideas, in a manner akin to the Eurasian Silk Road, which some evidence suggests may have brought Bronze Age technologies (and even simple versions of tartan weaving patterns) to Mongolia and China from Europe and West Asia. While it could have happened, it doesn't seem to have happened.
But, we know that Bronze Age Asian artifacts made it to Alaska from Asia with Paleoeskimos ca. 2500 BCE, and again with another wave of Dorset Paleoeskimos ca. 1500 BCE; that there was a Thule (i.e. proto-Inuit) wave of migration, possibly an outgrowth of a culture that was also the source of the Uralic language family, around 500 CE; and that there is suggestive evidence for a Na-Dene migration to the Pacific Northwest sometime before 1000 CE, but probably many millennia after the first wave of Native American migration to the New World across the Beringian land bridge around the time of the last glacial maximum, when sea levels were lower. A Na-Dene migration in the range of ca. 4000 BCE to 1500 BCE would have been possible in terms of maritime travel technology, and would have been early enough to allow the transmission of Asian ideas (probably with minimal demographic impact, if any) to Poverty Point. All of these populations, unlike Leif Erikson's expedition, were substantial enough to give rise to substantial populations, two of which survive to this day in North America (the Na-Dene and the Inuit), and the other two of which each lasted at least a millennium and left genetic traces of admixture in some of the surviving Arctic and near-Arctic North Americans.
Poverty Point is an almost inevitable early destination for anyone exploring North America via the kind of canoe or kayak that the Paleoeskimo and Na-Dene cultures had at their disposal. All one needs to do is put a boat in any navigable tributary of the Mississippi River basin, which makes up a large share of the entire North American continent, and eventually the river will take you there without having to hazard all that many impassable rapids; these Native American explorers lacked nothing that young Huckleberry Finn had. And a wealth of historical and prehistorical evidence tends to show that exploration and migration frequently run up and down major river systems, be they the Nile, the Danube, the Tigris and Euphrates, the Indus, the Ganges, the Yellow or the Yangtze. Sooner or later, some representative of any exploring new civilization was likely to end up on their shores and carry with him the stories of his travels.
All four of the likely pre-Columbian, post-Clovis waves of migration of new peoples to North America were very likely to have happened at a time when someone in Northeastern Siberia who was at least advanced enough technologically to have a boat that could get him to North America was likely to be aware, to some extent, of some of the technological innovations of the North Chinese Neolithic of ca. 8,000-7,000 BCE that hadn't existed when North America was originally settled by modern humans. Someone even modestly familiar with the ideas associated with that Neolithic cultural complex (or perhaps even with a Chalcolithic or Bronze Age cultural complex in North China), while he wouldn't have been able to reproduce the North Chinese civilization in full (just as few people, and perhaps no one, know enough individually to reproduce modern American civilization in its entirety), could have provided enough ideas to set the people of Poverty Point on the track towards developing a semi-urbanized, food producing, copper age, stone carving civilization.
As I explained at the start of this conjecture, I'm merely noting that this kind of chain of cultural influence is possible, even plausible, and isn't obviously contradicted by what we already know, without claiming that it actually happened.
But the mere possibility that the rise of civilization in the New World might not have been the completely independent innovation that it is widely credited with having been is so paradigm shifting in our understanding of prehistory, and is motivated by facts that could not have been known by people investigating this possibility even a couple of decades ago, that it bears further investigation.
Sources (an incomplete list)
Andrew Lawler, "America's Lost City", 334 Science 23 December 2011: 1618-1623 (DOI: 10.1126/science.334.6063.1618).
Andrew Lawler, "Preserving History, One Hill at a Time", 334 Science 23 December 2011: 1623.
Andrew Lawler, "Does North America Hold the Roots of Mesoamerican Civilization?", 334 Science 23 December 2011: 1620-1621.
Jared Diamond, "Guns, Germs and Steel" (1997).
Sunday, January 22, 2012
Archaeology Broadway Style
The linked post has a YouTube video that is the most epic Broadway-style rock anthem testament to nerdiness (in this case, the kind that is at the root of archaeology) since the musical "Chess" (which was itself composed by Swedish ex-ABBA members). The post is not in English, but the video (which is not entirely safe for work), by a Norwegian singer who has a show a bit like Saturday Night Live in Norway, is in English.
Quantum Field Theories Defined
Lubos helpfully defines various terms which include the words "quantum field theory."
Thursday, January 19, 2012
Woit On Symmetry In Physics
Peter Woit has a really worthwhile answer to this year's Edge Website question of the year, which is "What is your favorite deep, elegant, or beautiful explanation?" He says:
1. The symmetry of time translation gives energy
2. The symmetries of spatial translation give momentum
3. Rotational symmetry gives angular momentum
4. Phase transformation symmetry gives charge
Most of this is familiar to me, but I had not lodged in my head the deep connection between the notion of energy and the notion of time translation.
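For concreteness, the first of these correspondences is the textbook statement of Noether's theorem for time translation (a standard fact, not something specific to Woit's essay): if the Lagrangian has no explicit time dependence, the energy function built from it is conserved.

\[
\frac{\partial L}{\partial t} = 0
\quad\Longrightarrow\quad
E \;=\; \sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i \;-\; L
\qquad\text{with}\qquad
\frac{dE}{dt} = 0 .
\]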
More Higgs Boson Mass Numerology
There are some measured constants in the Standard Model, other than the Higgs boson mass itself, that can be combined in very simple formulas to give numbers within the margin of error of current Higgs boson mass estimates.
One is that experimental indications for the Higgs boson mass in the vicinity of 123-125 GeV are remarkably close to precisely one half of the Higgs field vacuum expectation value of 246 GeV. The other is that experimental indications for the Higgs boson mass are also close to precisely half of the sum of the masses of the W+ boson, the W- boson and the Z boson (or alternatively, the sum of the masses of the W+ boson, the W- boson, the Z boson and the photon, since the photon mass is zero; or alternatively, the sum of the masses of all of the fundamental bosons other than the Higgs itself, since the gluon rest masses are also zero). The sum of these masses is about 252 GeV, half of which would be about 126 GeV.
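A quick back-of-the-envelope check of these near-coincidences (the boson masses and the vev below are the approximate values in circulation around 2012; treat them as rough inputs, not precision figures):

    # Rough check of the Higgs mass "numerology" described above (approximate inputs).

    M_W = 80.385        # GeV, W boson mass (applies to both W+ and W-)
    M_Z = 91.1876       # GeV, Z boson mass
    HIGGS_VEV = 246.22  # GeV, Higgs field vacuum expectation value

    boson_sum = 2 * M_W + M_Z  # W+ + W- + Z (the photon and gluons add zero)
    print("half the Higgs vev:      ", HIGGS_VEV / 2)              # ~123.1 GeV
    print("half the boson mass sum: ", boson_sum / 2)              # ~126.0 GeV
    print("(vev + boson sum) / 4:   ", (HIGGS_VEV + boson_sum) / 4)  # ~124.5 GeV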
One could get an intermediate value by adding the sum of the relevant boson masses to the Higgs field vev and dividing by four (a result suggestive of a linear combination of the four electroweak bosons, the W+, W-, Z and photon).
Another way to bridge the gap would be to use the "pole masses" of the W and Z bosons, rather than their conventional directly measured masses. This basically adjusts the masses of unstable fundamental particles downward by an amount related to their propensity to decay, proportional to half of their decay width, which at a leading order approximation is about 1.8 for these bosons. This would give a sum of the three pole masses equal to roughly 248 GeV, which would fit a 124 GeV Higgs boson pole mass if there is such a simple relationship, although the match would presumably have to be to the pole mass of the Higgs boson (something not yet possible to estimate with any meaningful accuracy, as we have considerable uncertainty in both the Higgs boson mass and its decay width).
These pole mass calculations are approximate, and an exact calculation has significant terms at the two loop level and probably beyond, so coming up with an exact pole mass figure is a bear of a calculation. But the notion that the Higgs field vacuum expectation value might be equal to double the Higgs boson pole mass, and to exactly the sum of the W+ boson pole mass, the W- boson pole mass, the Z boson pole mass, and possibly the (zero) masses of the photon and/or the eight gluons, is an attractive one. It is also suggestive of the idea that the Higgs boson itself might be understood as a linear combination of four spin-one electroweak bosons, whose pairs of opposite sign spins combine to produce an aggregate spin of zero, in line with the scalar character of the Higgs boson. One would need some reason to come up with the factor of two, or alternatively, some reason to add the Higgs vev to the sum of the four electroweak boson masses, which would naturally be divided by four since it is derived from a linear combination of four bosons.
The Z boson mass is related exactly to the W boson mass by the Weinberg angle, a.k.a. the weak mixing angle (an angle whose sine squared is about 0.25, about which Rivero has some interesting numerological speculations in a 2005 paper, and whose possible origins are discussed in terms of a heuristic set forth in a 1999 article). So, if there is some simple relationship between the Higgs boson mass, the Higgs field vacuum expectation value, the W boson mass and the Z boson mass, such that the Higgs field vacuum expectation value and Higgs boson mass can be calculated from the W boson and Z boson masses, it ought to be possible to derive all of these constants exactly, in principle at least, from the W boson mass and the Weinberg angle (which is definitionally derived from the relationship between the weak and electromagnetic gauge couplings).
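For reference, the standard tree-level electroweak relations being invoked here are:

\[
\cos\theta_W = \frac{m_W}{m_Z}, \qquad
m_W = \tfrac{1}{2}\,g\,v, \qquad
m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\;v,
\]

so that the W boson mass together with the Weinberg angle (equivalently, the ratio of the SU(2) coupling g to the U(1) coupling g') fixes the Z boson mass and, given g, the vev v as well.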
Of course, even if the experimental values come to be known with sufficient precision to rule out such simple relationships, one is still left with the question: why should this simple relationship be such a near miss? For example, does the simple relationship capture the first order terms of an exact relationship whose next to leading order and beyond terms are unknown?
This all becomes even more exciting if one can come up with a generalization of the Koide formula to account for all of the charged fermion masses from just a couple of constants. Both the W boson mass and Z boson mass were predicted from other constants known at the time before they were discovered, and one could conceivably get the number of free constants in the Standard Model down to a much smaller number.
While not precisely on point, this is also as good a point as any to ask, why have string theory and SUSY so utterly failed to provide accurate predictions of the mass constants or mixing matrix values in the Standard Model? Isn't that the very least that we should expect of any purported grand unification or theory of everything scheme?
Monday, January 16, 2012
Picture Of Lost Civilization In The Amazon Emerging
The Amazon is home to more groups of uncontacted hunter-gatherers than anyplace else in the world, with the possible exception of Papua New Guinea. But it isn't generally known as a center of pre-Columbian advanced civilizations comparable to those of the Aztecs of Mesoamerica and the Incas of the Pacific Coast of South America. The only real traces of a possible lost civilization found among contemporary Amazonians are a few legends and some very geographically broad linguistic groupings that don't fit the usual geographically confined hunter-gatherer mold.
But, new pieces of evidence increasingly show signs of a civilization that did greatly modify its environment in the Amazon.
Alceu Ranzi, a Brazilian scholar who helped discover the squares, octagons, circles, rectangles and ovals that make up the land carvings, said these geoglyphs found on deforested land were as significant as the famous Nazca lines, the enigmatic animal symbols visible from the air in southern Peru. . . . parts of the Amazon may have been home for centuries to large populations numbering well into the thousands and living in dozens of towns connected by road networks, explains the American writer Charles C. Mann. In fact, according to Mr. Mann, the British explorer Percy Fawcett vanished on his 1925 quest to find the lost “City of Z” in the Xingu, one area with such urban settlements. . . . So far, 290 such earthworks have been found in Acre, along with about 70 others in Bolivia and 30 in the Brazilian states of Amazonas and Rondônia.
Researchers first viewed the geoglyphs in the 1970s, after Brazil’s military dictatorship encouraged settlers to move to Acre and other parts of the Amazon, using the nationalist slogan “occupy to avoid surrendering” to justify the settlement that resulted in deforestation.
But little scientific attention was paid to the discovery until Mr. Ranzi, the Brazilian scientist, began his surveys in the late 1990s, and Brazilian, Finnish and American researchers began finding more geoglyphs by using high-resolution satellite imagery and small planes to fly over the Amazon.
Denise Schaan, an archaeologist at the Federal University of Pará in Brazil who now leads research on the geoglyphs, said radiocarbon testing indicated that they were built 1,000 to 2,000 years ago, and might have been rebuilt several times during that period. Researchers now believe that the geoglyphs may have held ceremonial importance, similar, perhaps, to the medieval cathedrals in Europe. This spiritual role, said William Balée, an anthropologist at Tulane University, could have been one that involved “geometry and gigantism.”
In 2008, National Geographic reported on a somewhat similarly developed civilization in a part of the Amazon remote from these geoglyphs.
Dozens of ancient, densely packed, towns, villages, and hamlets arranged in an organized pattern have been mapped in the Brazilian Amazon. . . . In 1993, Heckenberger lived with the Kuikuro near the headwaters of the Xingu River. Within two weeks of his stay, he learned about the ancient settlements and began a 15-year effort to study and map them in detail.
So far he has identified at least two major clusters—or polities—of towns, villages, and hamlets. Each cluster contains a central seat of ritualistic power with wide roads radiating out to other communities.
Each settlement is organized around a central plaza and linked to others via precisely placed roads. In their heyday, some of the settlements were home to perhaps thousands of people and were about 150 acres (61 hectares) in size.
A major road aligned with the summer solstice intersects each central plaza.
The larger towns, placed at cardinal points from the central seat of power, were walled much like a medieval town, noted Heckenberger. Smaller villages and hamlets were less well defined.
Between the settlements, which today are almost completely overgrown, was a patchwork of agricultural fields for crops such as manioc along with dams and ponds likely used for fish farms.
"The whole landscape is almost like a latticework, the way it is gridded off," Heckenberger said. "The individual centers themselves are much less constructed. It is more patterned at the regional level."
At their height between A.D. 1250 and 1650, the clusters may have housed around 50,000 people, the scientists noted.
According to Heckenberger, the planned structure of these settlements is indicative of the regional planning and political organization that are hallmarks of urban society.
"These are far more planned at the regional level than your average medieval town," he said, noting that rural landscapes in medieval settlements were randomly oriented.
"Here things are oriented at the same angles and distances across the entire landscape."
Charles C. Mann, in his book 1491, argued that these civilizations collapsed because they came into contact with Old World diseases despite limited direct contact with Europeans, and that there was a rewilding of the Americas in response to this population collapse.
This is a possibility that shouldn't be ruled out. But I'm not necessarily sold on it as the only possible cause, because we have other examples of relatively advanced societies, like the irrigation agriculture based societies of the Four Corners area of Colorado, that rose and fell due to climate conditions in the Pre-Columbian era, and civilizations like the Mayans and Olmecs, predecessors of the Aztecs, that were interrupted and supplanted by more successful successor civilizations. We also have Old World examples, like the Harappans of the Indus River Valley and the Western Roman Empire, who apparently managed to experience the collapse of their societies without the assistance of an influx of super-lethal Old World diseases.
Still, clearly these civilizations did collapse, and clearly they did have some level of urban organization and agriculture in the pre-Columbian era in the Amazon.
Lubos v. Koide
Lubos, at his blog, makes the case that Koide's formula for the lepton masses, which has been expanded upon by later investigators, is mere numerology. While he is pretty far out of the mainstream when it comes to climate change and cultural sensitivity, he is quite mainstream and credible in his specialty of theoretical physics from a SUSY/string theory perspective, and his post makes the argument against Koide's formula being deeply significant about as well as it is possible to do in a blog-post-sized exposition. His argument is not a straw man and deserves serious consideration.
Koide's formula, recall, is the observation that the sum of the masses of the three charged leptons, divided by the square of the sum of the positive square roots of those masses, is equal to two-thirds. It has been confirmed to five significant digits, which is consistent with the experimental evidence; it predicts a tau mass to a couple of significant digits more than the currently most precise measured value; it has held up even though it was quite a bit off from the most precise values available at the time it was formulated several decades ago; and, interestingly, it sits exactly at the midpoint between the highest possible value for that ratio (1) and the lowest possible value (1/3).
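A direct numerical check is easy to do. The lepton masses below are approximate 2012-era PDG values in MeV; the exact digits don't matter for seeing how close the ratio comes to 2/3.

    # Numerical check of Koide's formula with approximate charged lepton masses (MeV).
    from math import sqrt

    M_E   = 0.510999   # electron
    M_MU  = 105.65837  # muon
    M_TAU = 1776.82    # tau

    def koide_q(m1: float, m2: float, m3: float) -> float:
        """Sum of the masses divided by the square of the sum of their square roots."""
        return (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3)) ** 2

    print(koide_q(M_E, M_MU, M_TAU))  # ~0.66665, within a few parts in 100,000 of 2/3
    print(2 / 3)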
His main points are as follows:
(1) It is much easier than you would think to find approximate, but surprisingly close, mathematical coincidences by manipulating a handful of constants in every conceivable way.
(2) Since the formulation is dimensionless, it is actually a function of two lepton mass ratios, rather than three independent mass values, which makes it somewhat less remarkable (this point is written out explicitly below, after his argument is summarized).
(3) If the ratio 2/3 is conceptualized as a 45 degree angle, rather than as a ratio, it is not at the midpoint of the possible values, making it less special.
(4) Koide's formula uses the real-valued charged lepton masses, rather than the complex-valued charged lepton masses, called "pole masses," that include an adjustment for the decay width of unstable particles (basically, their half lives, converted into mass units), and when Koide's formula is applied to pole masses, the 0.79 ratio that results doesn't seem as special. Lubos thinks it is unnatural to use anything other than pole masses in a formula that expresses a fundamental relationship of charged lepton masses.

(5) The charged lepton masses are not themselves fundamental in the Standard Model; they are derived from more fundamental Yukawa couplings and the Higgs vacuum expectation value. In his words:
In the Standard Model, the masses of charged leptons arise from the Yukawa interaction term in the Lagrangian, [which is a simple function of] y . . . a dimensionless (in d=4 and classically) coupling constant; h . . . the real Higgs field; [and] Ψ, Ψ̄ . . . the Dirac field describing the charged lepton or its complex conjugate, respectively. To preserve the electroweak symmetry – which is needed for a peaceful behavior of the W-bosons and Z-bosons – one can't just add the electron or muon or tau mass by hand. After all, the electroweak symmetry says that the left-handed electron is fundamentally the same particle as the electron neutrino. Instead, we must add the Yukawa cubic vertex – with two fermionic external lines and one Higgs external line – and hope that Mr Higgs or Ms God will break the electroweak symmetry, which also means that he will break the symmetry between electrons and their neutrinos. . . . [In turn] In the vacuum, the Higgs field may be written as h = v + Δh. Here, v is a purely numerical (c-number-valued) dimensionful constant whose value 246 GeV was known before we knew that the Higgs boson mass is 125 GeV. The value of v is related to the W-boson and Z-boson masses and other things that were measured a long time ago. The term Δh contains the rest of the dynamical Higgs field (which is operator-valued) but its expectation value is already zero. . . . [And,] m_e is just a shortcut for m_e = y_e v, where the Yukawa coupling y_e for the electron and the Higgs vev v = 246 GeV are more fundamental than m_e. If you write the masses in this way, v will simply cancel and you get the same formula for Q where m is replaced by y everywhere. However, this is not quite accurate because the physical masses are equal to yv up to the leading order (tree level diagrams i.e. classical physics) only. There are (quantum) loop corrections and many other corrections. Moreover, the values of y that produce Q=2/3 are the low-energy values of the Yukawa couplings. Even though the Yukawa couplings are more fundamental than the masses themselves, their low-energy values are less fundamental than some other values, their high-energy values.
In other words, both arguments (4) and (5) are arguments that in the ordinary formulation of the Standard Model, the charged lepton mass inputs into Koide's formula are not fundamental and therefore have no business exhibiting profound and mysterious relationships to each other that have any basis in fundamental physics and hence are probably just numerological coincidences.
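For reference, the dimensionless rewriting behind point (2) above is just algebra: dividing numerator and denominator by the electron mass shows that Q depends only on the two mass ratios.

\[
Q \;=\; \frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau}\right)^{2}}
\;=\; \frac{1 + r_\mu + r_\tau}{\left(1+\sqrt{r_\mu}+\sqrt{r_\tau}\right)^{2}},
\qquad r_\mu = \frac{m_\mu}{m_e},\quad r_\tau = \frac{m_\tau}{m_e}.
\]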
I'm not sold on the argument Lubos makes, for a few reasons, that I'll note with Roman numerals to avoid confusion with his reasons:
(I) Koide's formula has come into closer alignment with the experimentally measured charged lepton masses as they have been measured more precisely, while most numerical coincidences (e.g. efforts to describe the electromagnetic coupling constant as a simple integer-valued number) fall apart as the experimental value becomes known with more precision. A five significant digit match to a simple, dimensionless, rational number shouldn't be dismissed lightly.
(II) Lots of well-motivated predictions of constants derived from the Standard Model (e.g. the W and Z masses, the proton and neutron masses) are not known to any more precision than Koide's formula, so judged by its fit to empirical evidence, Koide's formula is holding its own.
(III) Almost everyone who understands particle physics well enough to be a professional in the field intuitively agrees that the many constants of the Standard Model are not simply random, and that they have deeper interrelationships to each other than we have yet managed to explicate in well-formulated laws of physics. Put another way, there is clearly some formula out there that, if discovered, would derive particle masses, particle decay widths, CKM/PMNS matrix phases, coupling constants of the fundamental forces, and the constants governing the running of the fundamental force coupling constants, from a much smaller set of more fundamental constants, and there is no a priori reason that we aren't capable of discovering that relationship.
If you start from the assumption that there is some deeper relationship between these constants, then the question is which of the proposed relationships between them has proven most fruitful so far and has tended to become more, rather than less, accurate as more empirical evidence has become available.
Put another way, if you assume that these constants do have a deeper relationship, then any other empirically observed relationship between them necessarily derives in some way from the deep relationship and hints at its nature. The empirical validity of the dimensionless Koide's formula to great precision is, at the very least, proof of a no-go theorem for any proposed deeper relationship between the charged lepton masses that does not respect that relationship. It fairly tightly constrains the universe of potentially valid deeper theories.
At the very least, Koide's formula poses an unsolved problem in physics akin to the strong CP problem, i.e. "why is there no observable CP violation in the physics of the strong force?"
In the same vein, the phenomenological and predictive success of the modified gravity theory "MOND," as originally formulated by Milgrom, in describing galactic rotation curves with a single numerical constant doesn't necessarily mean that this phenomenon is really caused by the law of gravity being misformulated rather than by dark matter. But it does necessarily imply that any dark matter theory that takes multiple unconstrained numerical constants to produce results that MOND can match with one numerical constant with similar accuracy is missing some very important factors that cause real galaxies to have far more tightly constrained structures than its formulation permits. The fact that a strong phenomenological relationship exists doesn't tell you its cause, but it does generally establish that there is some cause for it.
(IV) Lots of phenomenological relationships in physics that aren't fundamental in the deepest sense, and that can be derived from mere approximations of theoretical physics formulas known to be more accurate, are still remarkably accurate and simple in practice.
For example, the phenomenological fact that planets follow orbits around the sun that are ellipses with the sun at one focus turns out to be extremely accurate, and possible to express with high school algebra and derive with elementary first year calculus, even though it ignores all sorts of more accurate physics, such as the corrections of general relativity to Newtonian gravity for objects that are in motion, and the fact that planetary orbits are actually determined by supremely difficult to calculate many-body problems that include the gravitational effects of every little bit of matter in the solar system and beyond, not just a two body problem and a formula in the form F=GmM/r^2. Before Kepler figured out that the orbits were ellipses, Copernicus came up with a simpler approximation of the orbits as circles around the sun (which are degenerate forms of the equations for ellipses), which, while also wrong, was still an immense leap relative to the prior formulations.
Similarly, the classical ideal gas law, PV=NkT, involves physics that isn't fundamental (it can be derived from first principles from statistical mechanics and a few simplifying assumptions, and statistical mechanics, in turn, relies on classical mechanics that has to be derived at a fundamental level, in a far from obvious way, from quantum mechanics). Yet we still teach high school and lower division physics and chemistry students the ideal gas law because it, and the non-ideal gas variants of it that use empirically determined physical constants to fit real gases, turn out to be useful ways to develop quantitative intuition about how gases behave and to approximate that behavior with accuracy sufficient for a wealth of applications. The ideal gas law, in turn, was derived from even simpler observations about two variable proportionality or inverse proportionality relationships (e.g. V=cT for a gas at constant pressure) that were observed phenomenologically, long before all of the pieces were put together.
Thus, the fact that Koide's formula doesn't naturally and obviously correspond in form to current, physically well-motivated electroweak unification models doesn't necessarily count as a strike against it. It may be that the terms in more complete formulations of the fundamental sources of charged lepton masses either cancel out or have insignificant physical values that are swamped by other terms. For example, I suspect that a more exact formulation of Koide's formula for leptons may require the inclusion of all six lepton masses. But the neutrino masses are so negligible relative to the charged lepton masses that their impact on Koide's formula may be invisible at the current level of precision with which we know the charged lepton masses.
Odds are that at some level of precision, Koide's formula will cease to hold. But, for example, if the amount by which it is off is at an order of magnitude that could be accounted for via the inclusion of neutrino masses and a tweak to the sign of the electron neutrino mass term (a possibility suggested by Brannen), then Koide's formula starts looking like an incomplete approximation of a more exact theory that holds for reasons considerably deeper than coincidence, rather than merely a fluke.
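For readers who want to see the arithmetic, the charged lepton version of the relation can be checked in a few lines of Python; the mass values below are recent experimental central values in MeV, quoted for illustration only (the authoritative figures and their uncertainties are the Particle Data Group's).

import math

# Koide's relation: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2 = 2/3
m_e, m_mu, m_tau = 0.5109989, 105.6583755, 1776.86  # MeV, illustrative central values

K = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(K, 2 / 3)  # agreement to a few parts per million, within the tau mass uncertainty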
(V) A notion of what is fundamental and what is derived, with a set of constants that are all tightly constrained to be related to each other mathematically, is to some extent a matter of perception. The notion that inferred Yukawa coupling constants, or pole masses of particles, must be more fundamental than observed particle rest masses without adjustment for rates of decay is not at all obvious. There is nothing illogical or irrational about describing Yukawa coupling constants and pole masses as derived values, and charged lepton rest masses as fundamental.
My strong suspicion, for example, given the strong patterns that are observed in decay widths, is that the decay width of a particle is a derived constant that is the product in some manner or other of rest masses and some other quantum numbers, rather than having a truly independent value for each particle. Pole mass may be a more useful description of a particle's mass in some equations, just as a wind-chill adjusted temperature may be a more useful description of the ambient temperature in a location for some purposes. But, that doesn't necessarily mean that it is truly more fundamental.
And, reliance upon Standard Model formulations of particle masses as a source of the "true" nature of particle mass is also questionable when one of the deepest problems of the Standard Model is that its formulations can't provide particle masses from first principles for fermions or the Higgs boson (although the photon, W and Z boson rest masses can be derived from it).
(VI) Lubos ignores the relatively productive recent efforts to express other Standard Model particle masses (where the true values are often known to just one or two significant digits) in Koide-like triples or other Koide-like formulations, an apparent three to one relationship between Koide-like formulations for quarks and for leptons (which fits the three to one relationship between quarks and leptons seen in precision electroweak decay products), possible derivations of fermion mass relationships from CKM/PMNS matrix elements and vice versa (often finding links via the square root of fermion masses to be more natural), the phenomenological observation of quark-lepton complementarity in CKM/PMNS matrix elements, and so on. If there were just one Koide triple in the fermion mass matrix, it might just be a fluke. When there are multiple Koide triples in the fermion mass matrix that all seem to take on some kind of integer value to well within the range of empirically measured masses, dismissing the result as a fluke is problematic.
The implied angle of forty-five degrees from Koide's formula, for example, also comes up in quark-lepton complementarity, which relates to CKM/PMNS matrix element relationships.
(VII) Lubos also puts on blinders to the potential relevance of the square root of fermion mass as a potentially fundamental matter having some relationship to emerging evidence in his own string theoretic field of the similarity between gravity (which is a force that acts on mass-energy) and a squared QCD type gauge group, in which color charge is replaced with kinematic terms.
Thursday, January 12, 2012
Dim Matter Strikes Again
More accurate observations of globular clusters have turned up low luminosity stars, with masses of about 0.18 times that of the sun, that account for a large proportion of the globular cluster's previously estimated dark matter, an estimate which was based on brighter observed stars and lensing observations for the entire cluster.
These clusters are considerably richer in dark matter than individual galaxies, and the phenomenological predictions of modified gravity theories derived from an early version called MOND consistently underestimate the amount of dark matter in these clusters. But this new result, finding that there is considerable luminous ordinary matter in these clusters that had not previously been seen by astronomers because their instruments weren't powerful enough to see it, suggests that MOND's shortcoming in its estimates of the magnitude of effects due to something other than Newtonian gravity acting on observable luminous matter may be much smaller than previously believed.
This result should be considered together with observations in late 2010 that revealed that the amount of low luminosity ordinary matter in elliptical galaxies had been grossly underestimated. The more accurate elliptical galaxy census suggested that the true amount of dark matter in the universe, due to that revision in the estimated amount of normal matter in elliptical galaxies alone, was closer to 50% of all ordinary and dark matter combined, rather than the frequently quoted 80% figure.
Other recent theoretical studies have shown that some portion of the effects attributed to dark matter in spinning galaxies is actually attributable to general relativistic corrections to models that estimate the effects of gravity with a Newtonian approximation, although different theorists have reached dramatically different estimates of the magnitude of these effects by using different methods to simplify the description of a spinning galaxy to which the equations of general relativity are applied.
This new result on globular clusters, combined with the prior work on dim matter in elliptical galaxies and general relativistic effects, suggests that the actual percentage of matter in the universe which is dark matter may be considerably less than 50%. Dark matter may actually end up being one third or less of all of the matter in the universe.
If one takes the position that a cosmological constant is a perfectly acceptable and respectable alternative to the hypothesis that 80% of the universe is made out of "dark energy" observed in no other way, and that it represents a property of space-time itself, and that the actual proportion of matter which is dark is much smaller than previous estimates, then dark matter candidates like neutrinos (perhaps in condensate form), which don't require the discovery of new fundamental particles, begin to look more plausible.
Pinning Down Archaic Admixture Population Models
There are many outstanding disputes, critical to understanding the demographic history of Eurasian modern humans in the Upper Paleolithic era, related to the population models that are used to describe how Neanderthal genes could have ended up in almost all modern Eurasians at frequencies on the order of 2%-4% of our autosomal genome (in a sample made up of many thousands or tens of thousands of individuals), despite a complete absence of Neanderthal mtDNA or Y-DNA in any genetically tested modern human, from a large sample of tens or hundreds of thousands of individuals, including hundreds of ancient DNA samples. This population genetic data has been accumulated in a collective scientific enterprise that has deliberately oversampled populations that are likely to be genetically diverse outliers in both data sets, although there are far more outlier populations and ancient DNA populations that are undersampled for autosomal genetics than there are that have been undersampled for mtDNA.
One of the confounds in estimating what kind of process gave rise to the introgression of Neanderthal DNA into modern humans is the question of how much of the Neanderthal DNA originally present in hybrid individuals has been purged over time from modern humans, either due to random genetic drift in admixed modern human populations, or due to selective disadvantage associated with particular Neanderthal genes.
It helps, in comparing possibilities, that we have significant shares of the Neanderthal genome from ancient DNA to compare against modern genomes.
Neanderthal genes that could have introgressed into modern humans can be broken into one of four categories: (1) genes in which the Neanderthal genome and modern human genome are indistinguishable (which is a very substantial share of the total, probably on the order of 95% or more), (2) Neanderthal genes with a positive selective advantage (there is some early indication that this may mostly consist of HLA genes, which are related to the immune system), (3) Neanderthal genes that have a selective disadvantage relative to modern human genes, which statistically should have been removed from the human genome over the relevant time span of at least 30,000 years or so, and quite possibly two to four times as long as that, even if the selective disadvantage is very modest, particularly as disadvantageous genes slowly become separated, through the recombination process over many generations, from nearby genes that may have a selective advantage, and (4) Neanderthal genes that are selectively neutral.
One can determine in modern populations which Neanderthal genes are present at elevated frequencies indicative of selective advantage and which are only present at a baseline level, in order both to estimate the true selectively neutral baseline level of admixture before selection started to act on the genes in modern humans with Neanderthal ancestry, and to estimate the magnitude of the advantage associated with those genes present at elevated frequency. This task is somewhat harder than it seems, because one has to address statistical noise that elevates the frequency of some random genes for reasons unrelated to selective advantage, but it is well within the capabilities of well established statistical methods.
One can also search, by direct comparison, for distinguishably Neanderthal genes that have not ended up in any modern human at all. There are basically three ways that this could happen: (1) the genes were never transferred in an admixture event because there were a finite number of admixture events and only an approximately random half of the Neanderthal genome was transferred in each event, so some genes may never have been transferred in any of the events, (2) the genes were transferred in an admixture event and left the modern human genome via random genetic drift, (3) the genes were transferred in an admixture event but due to selective disadvantage associated with the genes, they were culled from the modern human genome. The percentage of Neanderthal specific genes known to exist which are found in no modern human populations can provide a very accurate estimate of the combined impact of these three factors, although by itself, it doesn't do much to tell you how much of each factor plays a part.
It is mathematically trivial to relate the impact of the first factor to the number of admixture events that took place, and the relationship between the percentage of genes never transferred in admixture events and the number of admixture events is highly non-linear. For one admixture event, the percentage is 50%. For two, it is 25%. In general, the never transmitted proportion of the genome is 1/(2^n), where n is the number of admixture events. In any scenario where there are seven or more admixture events in all of human history, the percentage of Neanderthal specific genes never transmitted in admixture events is below 1%, and at somewhere on the order of twelve to fourteen admixture events ever in all of modern human history, the impact of this factor would be completely undetectable with any level of statistical significance in an autosomal genome data set as large as the one that is currently in existence.
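The non-linearity is easy to see with a short calculation; this is just the 1/(2^n) relationship described above, evaluated at a few illustrative values of n.

# Fraction of the Neanderthal genome never transmitted after n admixture events,
# assuming each event independently passes on a random half of the genome.
for n in (1, 2, 7, 12, 14):
    print(n, f"{0.5 ** n:.4%}")
# 1 -> 50%, 2 -> 25%, 7 -> ~0.78%, 12 -> ~0.02%, 14 -> ~0.006%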
If the effective population size of the modern human populations that admixed with Neanderthals was on the order of four hundred to seven hundred and fifty individuals, the effect of non-transmission of specific genes in any admixture event should be negligible, and even at an effective population size as low as two hundred, the impact of this factor should be a very small proportion of the total number of Neanderthal genes not observed in any modern human population. Yet, most estimates of the effective population size of the founder population of modern human Eurasians are at least in the single digit thousands, and archaic admixture itself, while it would inflate the apparent effective population size of the founder population of modern human Eurasians, at 2.5%-4% of the total would not have an effect so significant that it would bring the effective population size of the founding population of modern human Eurasians down to the low three digits, particularly to the extent that the estimates are corroborated by mtDNA and Y-DNA based estimates that have no archaic component.
This means that essentially all of the "missing" Neanderthal DNA (at least outside the sex chromosomes where there are clearly population structure and demographic history factors that are non-random at play) must statistically derive from either genetic drift or selective disadvantage.
We can then work to estimate both components separately using a variety of population genetic parameters, and work to look at the parameter space of assumptions that can produce outcomes consistent with the percentage of missing Neanderthal DNA that we observe.
Random drift of selectively neutral genes is easy to model with very accurate results using just a handful of parameters, either analytically, or numerically with Monte Carlo methods. Some of the key parameters are generation length, effective modern human population size at the time of admixture, number of admixture events, spacing of admixture events, boom and bust variability in effective modern human population size, and population growth (which can be quite accurately estimated in the long run from a variety of evidence, even if fine grained variability in this rate is hard to determine).
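A minimal Monte Carlo sketch of the drift component, using a toy Wright-Fisher model, is shown below. Every parameter value (starting effective population size, growth rate, admixture fraction, number of generations) is an illustrative assumption chosen only to show the mechanics of the calculation, not an attempt to reproduce any published estimate.

import numpy as np

rng = np.random.default_rng(1)

def loss_probability(n0=500, growth=1.002, n_max=50_000, p0=0.03,
                     generations=1500, trials=2000):
    """Fraction of runs in which a neutral allele starting at frequency p0 is lost by drift."""
    losses = 0
    for _ in range(trials):
        n, p = n0, p0
        for _ in range(generations):
            n = min(int(n * growth) + 1, n_max)   # slow long-run population growth
            copies = rng.binomial(2 * n, p)       # binomial resampling of 2n gene copies
            p = copies / (2 * n)
            if p == 0.0:
                losses += 1
                break
    return losses / trials

print(loss_probability())

Varying n0, growth, and the number or spacing of admixture pulses is exactly the kind of parameter space scan described in the text.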
For populations that experience growth in the long run (as modern humans in Eurasia obviously did), where the number of generations is very large, it turns out that generation length doesn't actually matter very much. When you have a number of generations in excess of one thousand, a population that reaches the many millions sometime in the Upper Paleolithic, and an overall percentage of admixture that is at least on the order of the 2.5%-4% it has reached at long term fixation (which has apparently been reached for all Eurasians, given the supercontinental uniformity in that percentage), the amount of genomic loss that takes place due to random drift becomes insensitive to the number of generations, because random drift is a much more powerful effect, in a non-linear manner, when populations are small. At a leading order estimate, the likelihood of losing a gene entirely from a population in any given span of generations is a non-linear function of the absolute number of individuals in the population who carry that gene. Basically, the percentage likelihood that a gene will leave the population by random drift is roughly proportional to the probability that a random sample from the effective population, equal in size to the absolute number of gene carriers in the population, would contain no carriers at all. Once the absolute number of carriers is several standard deviations of sampling error away from zero for a sample of that size, the probability of losing the gene entirely due to random drift approaches zero.
Complicating this is a factor that also looks like random drift, which is mutation. While not listed as a separate factor, another way that a gene can be removed from the gene pool is through a mutation at that locus. The probability of this happening is a function of the number of generations involved and the effective population size of each generation, divided by the number of carriers of a particular gene, and discounted for the fact that lots of mutations are lethal and never enter the gene pool. This is the method used to make estimates of the age of mtDNA and Y-DNA haplogroups, and it isn't very accurate, but there is a considerable body of empirical evidence that puts order of magnitude bounds on the size of this effect. So, while the error bars on this component of the random loss of selectively neutral genes from the population might have extremes that vary by as much as a factor of two to ten, if we were really being realistic about how precise our methods of mutation dating have proven to be in practice (perhaps more, given that the timing of the admixture event has something on the order of a factor of two uncertainty in it to begin with, that our estimates of generation length in Upper Paleolithic modern humans aren't terribly accurate, and that our effective population chart also has a pretty fuzzy line), if the effect is an order of magnitude lower than other sources of removals of genes from the population's genome, we can safely ignore it, even if the precise magnitude of the effect is not known with a great deal of certainty.
From the other direction, there have been a number of reasonably useful estimates of the proportion of genes in the human genome, and the proportion of genes in a variety of other species, which do, or do not, show indications of having a selective effect at any given time (which basically consists of genes that have not reached fixation in the species and for which there is no good reason to believe that selection produces multiple varieties in stable proportions, as it does for HLA genes). In general, these studies have shown that the proportion of genes that are experiencing active selective pressures at any given time appears to be fairly modest, but not negligible either.
There is no really good way to estimate the relative numbers of selectively advantageous archaic genes to selectively disadvantageous archaic genes. The argument for more good genes than bad is that Neanderthals had more time to adapt to the new environment that modern humans were entering. The argument for more bad genes than good is that Neanderthals went extinct while modern humans didn't, so overall, modern humans had a selective advantage of some form over Neanderthals. But, it isn't unreasonable to infer that the numbers of each should be of a similar order of magnitude. There is also no particularly good reason to think that the proportion of the genome that is selectively neutral at any given point in time has changed very much or was much different for Neanderthals than it is for modern humans. So, an examination of the number of Neanderthal genome genes present at elevated levels, which hence show signs of selective advantage, could cast some light, at least, on the proportion of Neanderthal genes that gave rise to selective disadvantages and were purged from the modern human genome. The early indications from this kind of analysis are that the proportion of Neanderthal genes still in the modern human genome which show signs of having been positively selected for is small relative to the total number of Neanderthal genes in the modern human genome.
Despite the fuzziness of all of this reasoning from a quantitative perspective, the bottom line in all of this analysis is that we would expect a significantly disproportionate share of the missing genes from the Neanderthal genome to have been lost due to selectively neutral random drift rather than natural selection, and that even this crude bound allows us to make fairly specific numerical estimates of the proportion of Neanderthal specific genes that were lost because they were selectively disadvantageous and the proportion of Neanderthal specific genes that were lost due to one of a couple of forms of random genetic drift.
Placing numerical bounds and maximum likelihood estimates on the proportion of Neanderthal specific genes that were lost due to random genetic drift with this kind of analysis, in turn, allows us to significantly narrow the parameter space of population model assumptions that could produce the observed amount of random genetic drift. The observed proportion of random genetic drift in the Neanderthal genome would be particularly relevant in placing bounds on the parameter space for assumptions about effective modern human population size at the time of admixture, the number of admixture events, the spacing of admixture events, and the boom and bust variability in effective modern human population size. And, there are independent ways to provide additional bounds on many of these parameters from other lines of population genetic data, from anthropology, and from the physical anthropology of Neanderthal remains, so the flexibility in one parameter doesn't inject too much flexibility into other parameters.
Also, a reasonably tightly bound overall estimate of the magnitude of random genetic drift, from the proportion of the Neanderthal genome that has been purged from modern humans, provides a robust and fairly direct estimate, from the longest time period for which ancient DNA is available for hominins, of the rate at which selectively neutral genes are purged by genetic drift in modern humans. That estimate is relatively population model independent, and could be used in the analysis of non-Neanderthal admixture population genetics (e.g., in estimates related to Denisovan admixture, putative African archaic admixture, admixtures of modern human populations in the Upper Paleolithic era, and the accuracy of estimates of the probability that a change in the proportion of a particular gene in a population was due to random genetic drift or selection), since the error bars on this direct measure of random genetic drift in autosomal genes over that time period would be much smaller than the error bars around estimates of any of the specific parameters in parameter space that could be used to estimate it from first principles using population models alone. Thus, making this estimate in the Neanderthal case would materially improve the statistical power of all of our long term population genetic estimates, a contribution that may be unique and may not be available with greater precision from any other set of data for the foreseeable future.
Explicitly estimating the impact of selective effects and the loss of genes due to random genetic drift is also likely to establish that the total number of archaic admixture events was larger than an estimate that ignores these effects, because, on balance, these effects tend to reduce the number of Neanderthal genes in the modern human genome. Thus, the process of estimating these numbers is likely to reveal that Neanderthals and modern humans had sex more often than a crude back-of-the-napkin estimate would suggest. And, if the kind of process assumptions that most naturally explain the disconnect between autosomal genetic data and uniparental genetic data (Haldane's rule, which also impacts fertility assumptions, and predominantly modern human mothers for hybrid children born into modern human tribes that averted extinction, which implies that there are large numbers of uncounted cases involving Neanderthal mothers whose lineages were erased from modern human populations in the present) are also incorporated into the analysis, the amount of cross species sexual activity between Neanderthals and modern humans may have been quite a bit higher indeed than the current percentage of our autosomal genome attributable to Neanderthal genes would suggest, probably on the order of a factor of two to five, which would be roughly the difference between a once in a generation event (a crude estimate without these considerations) and something like a once every few years event.
My intuition is that the amount of allele loss due to random genetic drift acting on selectively neutral genes that is actually observed in the Neanderthal case would suggest that the magnitude of the impact of random genetic drift in purging selectively neutral genes from modern human populations is quite a bit smaller than could safely be inferred from a naive estimate based on other existing data and pure population modeling not supported by this kind of empirical calibration. Thus, I suspect that this data will, generally, favor findings that it is more likely that a given change in gene frequency was a selective effect rather than a random one, and that populations not subject to selective pressures are more genetically stable than one might naively expect even with a fairly careful theoretical analysis. |
e2d4d910920ecc00 | "The Solar Wind as a Turbulence Laboratory"
Roberto Bruno and Vincenzo Carbone
1 Introduction
1.1 What does turbulence stand for?
1.2 Dynamics vs. statistics
2 Equations and Phenomenology
2.1 The Navier–Stokes equation and the Reynolds number
2.2 The coupling between a charged fluid and the magnetic field
2.3 Scaling features of the equations
2.4 The non-linear energy cascade
2.5 The inhomogeneous case
2.6 Dynamical system approach to turbulence
2.7 Shell models for turbulence cascade
2.8 The phenomenology of fully developed turbulence: Fluid-like case
2.9 The phenomenology of fully developed turbulence: Magnetically-dominated case
2.10 Some exact relationships
2.11 Yaglom’s law for MHD turbulence
2.12 Density-mediated Elsässer variables and Yaglom’s law
2.13 Yaglom’s law in the shell model for MHD turbulence
3 Early Observations of MHD Turbulence in the Ecliptic
3.1 Turbulence in the ecliptic
3.2 Turbulence studied via Elsässer variables
4 Observations of MHD Turbulence in the Polar Wind
4.1 Evolving turbulence in the polar wind
4.2 Polar turbulence studied via Elsässer variables
5 Numerical Simulations
5.1 Local production of Alfvénic turbulence in the ecliptic
5.2 Local production of Alfvénic turbulence at high latitude
6 Compressive Turbulence
6.1 On the nature of compressive turbulence
6.2 Compressive turbulence in the polar wind
6.3 The effect of compressive phenomena on Alfvénic correlations
7 A Natural Wind Tunnel
7.1 Scaling exponents of structure functions
7.2 Probability distribution functions and self-similarity of fluctuations
7.4 Fragmentation models for the energy transfer rate
7.5 A model for the departure from self-similarity
7.6 Intermittency properties recovered via a shell model
8 Observations of Yaglom’s Law in Solar Wind Turbulence
9.1 Structure functions
9.2 Probability distribution functions
10 Turbulent Structures
10.1 On the statistics of magnetic field directional fluctuations
10.2 Radial evolution of intermittency in the ecliptic
10.3 Radial evolution of intermittency at high latitude
11 Solar Wind Heating by the Turbulent Energy Cascade
11.1 Dissipative/dispersive range in the solar wind turbulence
12 The Origin of the High-Frequency Region
12.1 A dissipation range
12.2 A dispersive range
13 Two Further Questions About Small-Scale Turbulence
13.1 Whistler modes scenario
13.2 Kinetic Alfvén waves scenario
14 Conclusions and Remarks
A Some Characteristic Solar Wind Parameters
B Tools to Analyze MHD Turbulence in Space Plasmas
B.1 Statistical description of MHD turbulence
B.2 Spectra of the invariants in homogeneous turbulence
B.3 Introducing the Elsässer variables
C Wavelets as a Tool to Study Intermittency
D Reference Systems
D.1 Minimum variance reference system
D.2 The mean field reference system
E On-board Plasma and Magnetic Field Instrumentation
E.1 Plasma instrument: The top-hat
E.2 Measuring the velocity distribution function
E.3 Computing the moments of the velocity distribution function
E.4 Field instrument: The flux-gate magnetometer
F Spacecraft and Datasets
2 Equations and Phenomenology
In this section, we present the basic equations that are used to describe charged fluid flows, and the basic phenomenology of low-frequency turbulence. Readers interested in examining closely this subject can refer to the very wide literature on the subject of turbulence in fluid flows, as for example the recent books by, e.g., Pope (2000); McComb (1990); Frisch (1995) or many others, and the less known literature on MHD flows (Biskamp, 1993; Boyd and Sanderson, 2003; Biskamp, 2003). In order to describe a plasma as a continuous medium it will be assumed collisional and, as a consequence, all quantities will be functions of space r and time t. Apart from the required quasi-neutrality, the basic assumption of MHD is that fields fluctuate on the same time and length scale as the plasma variables, say ωτH ≃ 1 and kLH ≃ 1 (k and ω are, respectively, the wave number and the frequency of the fields, while τH and LH are the hydrodynamic time and length scale, respectively). Since the plasma is treated as a single fluid, we have to take the slow rates of the ions. A simple analysis shows also that the electrostatic force and the displacement current can be neglected in the non-relativistic approximation. Then, MHD equations can be derived as shown in the following sections.
2.1 The Navier–Stokes equation and the Reynolds number
The equations which describe the dynamics of real incompressible fluid flows were introduced by Claude-Louis Navier in 1823 and improved by George G. Stokes. They are nothing but the momentum equation based on Newton's second law, which relates the acceleration of a fluid particle to the resulting volume and body forces acting on it. These equations had been introduced by Leonhard Euler; the main contribution by Navier was to add a friction forcing term due to the interactions between fluid layers which move with different speed. This term turns out to be proportional to the viscosity coefficients η and ξ and to the variation of speed. By defining the velocity field u(r,t), the kinetic pressure p and the density ρ, the equations describing a fluid flow are the continuity equation describing the conservation of mass
\frac{\partial \rho}{\partial t} + (u \cdot \nabla)\rho = -\rho\, \nabla \cdot u, \qquad (1)
the equation for the conservation of momentum
\rho \left[ \frac{\partial u}{\partial t} + (u \cdot \nabla)u \right] = -\nabla p + \eta \nabla^2 u + \left( \xi + \frac{\eta}{3} \right) \nabla (\nabla \cdot u), \qquad (2)
and an equation for the conservation of energy
\rho T \left[ \frac{\partial s}{\partial t} + (u \cdot \nabla)s \right] = \nabla \cdot (\chi \nabla T) + \frac{\eta}{2} \left( \frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i} - \frac{2}{3}\delta_{ik}\nabla \cdot u \right)^2 + \xi (\nabla \cdot u)^2, \qquad (3)
where s is the entropy per mass unit, T is the temperature, and χ is the coefficient of thermoconduction. An equation of state closes the system of fluid equations.
The above equations considerably simplify if we consider the incompressible fluid, where ρ = const. so that we obtain the Navier–Stokes (NS) equation
\frac{\partial u}{\partial t} + (u \cdot \nabla)u = -\frac{\nabla p}{\rho} + \nu \nabla^2 u, \qquad (4)
where the coefficient ν = η∕ρ is the kinematic viscosity. The incompressibility of the flow translates in a condition on the velocity field, namely the field is divergence-free, i.e., ∇ ⋅ u = 0. This condition eliminates all high-frequency sound waves and is called the incompressible limit. The non-linear term in equations represents the convective (or substantial) derivative. Of course, we can add on the right hand side of this equation all external forces, which eventually act on the fluid parcel.
We use the velocity scale U and the length scale L to define dimensionless independent variables, namely r = r′L (from which ∇ = ∇′/L) and t = t′(L/U), and dependent variables u = u′U and p = p′U²ρ. Then, using these variables in Equation (4), we obtain
\frac{\partial u'}{\partial t'} + (u' \cdot \nabla')u' = -\nabla' p' + Re^{-1} \nabla'^2 u'. \qquad (5)
The Reynolds number Re = UL/ν is evidently the only parameter of the fluid flow. This defines a Reynolds number similarity for fluid flows, namely fluids with the same value of the Reynolds number behave in the same way. Looking at Equation (5) it can be realized that the Reynolds number represents a measure of the relative strength between the non-linear convective term and the viscous term in Equation (4). The higher Re, the more important the non-linear term is in the dynamics of the flow. Turbulence is a genuine result of the non-linear dynamics of fluid flows.
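To give the number an order-of-magnitude feel, one can evaluate Re = UL/ν for a generic laboratory water flow; the values below are textbook figures chosen purely for illustration and do not come from this review.

# Illustrative Reynolds number Re = U L / nu
U = 1.0       # characteristic flow speed [m/s]
L = 0.1       # characteristic length scale [m]
nu = 1.0e-6   # kinematic viscosity of water [m^2/s]

Re = U * L / nu
print(f"Re = {Re:.0e}")  # ~1e5: the non-linear term dominates the viscous term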
2.2 The coupling between a charged fluid and the magnetic field
Magnetic fields are ubiquitous in the Universe and are dynamically important. At high frequencies, kinetic effects are dominant, but at frequencies lower than the ion cyclotron frequency, the evolution of plasma can be modeled using the MHD approximation. Furthermore, dissipative phenomena can be neglected at large scales although their effects will be felt because of non-locality of non-linear interactions. In the presence of a magnetic field, the Lorentz force j × B, where j is the electric current density, must be added to the fluid equations, namely
\rho \left[ \frac{\partial u}{\partial t} + (u \cdot \nabla)u \right] = -\nabla p + \eta \nabla^2 u + \left( \xi + \frac{\eta}{3} \right) \nabla (\nabla \cdot u) - \frac{1}{4\pi} B \times (\nabla \times B), \qquad (6)
and the Joule heat must be added to the equation for energy
\rho T \left[ \frac{\partial s}{\partial t} + (u \cdot \nabla)s \right] = \sigma_{ik} \frac{\partial u_i}{\partial x_k} + \chi \nabla^2 T + \frac{c^2}{16\pi^2 \sigma} (\nabla \times B)^2, \qquad (7)
where σ is the conductivity of the medium, and we introduced the viscous stress tensor
\sigma_{ik} = \eta \left( \frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i} - \frac{2}{3}\delta_{ik}\nabla \cdot u \right) + \xi\, \delta_{ik} \nabla \cdot u. \qquad (8)
An equation for the magnetic field stems from the Maxwell equations in which the displacement current is neglected under the assumption that the velocity of the fluid under consideration is much smaller than the speed of light. Then, using
∇ × B = μ0j
and the Ohm’s law for a conductor in motion with a speed u in a magnetic field
j = σ (E + u × B ),
we obtain the induction equation which describes the time evolution of the magnetic field
\frac{\partial B}{\partial t} = \nabla \times (u \times B) + \frac{1}{\sigma \mu_0} \nabla^2 B, \qquad (9)
together with the constraint ∇ ⋅ B = 0 (no magnetic monopoles in the classical case).
In the incompressible case, where ∇ ⋅ u = 0, MHD equations can be reduced to
\frac{\partial u}{\partial t} + (u \cdot \nabla)u = -\nabla P_{tot} + \nu \nabla^2 u + (b \cdot \nabla)b, \qquad (10)
\frac{\partial b}{\partial t} + (u \cdot \nabla)b = (b \cdot \nabla)u + \eta \nabla^2 b. \qquad (11)
Here Ptot is the total kinetic pressure Pk = nkT plus magnetic pressure Pm = B²/8π, divided by the constant mass density ρ. Moreover, we introduced the velocity variable b = B/√(4πρ) and the magnetic diffusivity η.
Similar to the usual Reynolds number, a magnetic Reynolds number Rm can be defined, namely
R_m = \frac{c_A L_0}{\eta},
where cA = B0/√(4πρ) is the Alfvén speed related to the large-scale L0 magnetic field B0. This number in most circumstances in astrophysics is very large, but the ratio of the two Reynolds numbers or, in other words, the magnetic Prandtl number Pm = ν/η can differ widely. In absence of dissipative terms, for each volume V MHD equations conserve the total energy E(t)
E(t) = \int_V (v^2 + b^2)\, d^3 r, \qquad (12)
the cross-helicity Hc (t), which represents a measure of the degree of correlations between velocity and magnetic fields
H_c(t) = \int_V v \cdot b\, d^3 r, \qquad (13)
and the magnetic helicity H (t), which represents a measure of the degree of linkage among magnetic flux tubes
H(t) = \int_V a \cdot b\, d^3 r, \qquad (14)
where b = ∇ × a.
The change of variable due to Elsässer (1950), say z± = u ± b′, where we explicitly use the background uniform magnetic field through b′ = b + cA (at variance with the bulk velocity, the largest scale magnetic field cannot be eliminated through a Galilean transformation), leads to the more symmetrical form of the MHD equations in the incompressible case
\frac{\partial z^\pm}{\partial t} \mp (c_A \cdot \nabla) z^\pm + (z^\mp \cdot \nabla) z^\pm = -\nabla P_{tot} + \nu^+ \nabla^2 z^\pm + \nu^- \nabla^2 z^\mp + F^\pm, \qquad (15)
where 2ν± = ν ± η are the dissipative coefficients, and F± are eventual external forcing terms. The relations ∇ · z± = 0 complete the set of equations. On linearizing Equation (15) and neglecting both the viscous and the external forcing terms, we have
\frac{\partial z^\pm}{\partial t} \mp (c_A \cdot \nabla) z^\pm \simeq 0,
which shows that z⁻(x − cA t) describes Alfvénic fluctuations propagating in the direction of B0, and z⁺(x + cA t) describes Alfvénic fluctuations propagating opposite to B0. Note that MHD Equations (15) have the same structure as the Navier–Stokes equation, the main difference stemming from the fact that non-linear coupling happens only between fluctuations propagating in opposite directions. As we will see, this has a deep influence on turbulence described by MHD equations.
It is worthwhile to remark that in classical hydrodynamics, dissipative processes are defined through three coefficients, namely two viscosities and one thermoconduction coefficient. In the hydromagnetic case the number of coefficients increases considerably. Apart from a few additional electrical coefficients, we have a large-scale (background) magnetic field B0. This makes the MHD equations intrinsically anisotropic. Furthermore, the stress tensor (8) is deeply modified by the presence of a magnetic field B0, in that kinetic viscous coefficients must depend on the magnitude and direction of the magnetic field (Braginskii, 1965). This has a strong influence on the determination of the Reynolds number.
2.3 Scaling features of the equations
The scaled Euler equations are the same as Equations (4, 5), but without the term proportional to Re⁻¹. The scaled variables obtained from the Euler equations are, then, the same. Thus, scaled variables exhibit scaling similarity, and the Euler equations are said to be invariant with respect to scale transformations. Said differently, this means that NS Equations (4) show scaling properties (Frisch, 1995), that is, there exists a class of solutions which are invariant under scaling transformations. Introducing a length scale ℓ, it is straightforward to verify that the scaling transformations ℓ → λℓ′ and u → λ^h u′ (λ is a scaling factor and h is a scaling index) leave invariant the inviscid NS equation for any scaling exponent h, providing P → λ^{2h} P′. When the dissipative term is taken into account, a characteristic length scale exists, say the dissipative scale ℓ_D. From a phenomenological point of view, this is the length scale where dissipative effects start to be experienced by the flow. Of course, since ν is in general very low, we expect that ℓ_D is very small. Actually, there exists a simple relationship for the scaling of ℓ_D with the Reynolds number, namely ℓ_D ∼ L Re^{−3/4}. The larger the Reynolds number, the smaller the dissipative length scale.
As is easily verified, ideal MHD equations display similar scaling features. Say the following scaling transformations u → λ^h u′ and B → λ^β B′ (β here is a new scaling index different from h) leave the inviscid MHD equations unchanged, providing P → λ^{2β} P′, T → λ^{2h} T′, and ρ → λ^{2(β−h)} ρ′. This means that velocity and magnetic variables have different scalings, say h ≠ β, only when the scaling for the density is taken into account. In the incompressible case, we cannot distinguish between scaling laws for velocity and magnetic variables.
2.4 The non-linear energy cascade
The basic properties of turbulence, as derived both from the Navier–Stokes equation and from phenomenological considerations, are the legacy of A. N. Kolmogorov (Frisch, 1995). Phenomenology is based on the old picture by Richardson who realized that turbulence is made by a collection of eddies at all scales. Energy, injected at a length scale L, is transferred by non-linear interactions to small scales where it is dissipated at a characteristic scale ℓ_D, the length scale where dissipation takes place. The main idea is that at very large Reynolds numbers, the injection scale L and the dissipative scale ℓ_D are completely separated. In a stationary situation, the energy injection rate must be balanced by the energy dissipation rate and must also be the same as the energy transfer rate ε measured at any scale ℓ within the inertial range ℓ_D ≪ ℓ ≪ L. From a phenomenological point of view, the energy injection rate at the scale L is given by ε_L ∼ U²/τ_L, where τ_L is a characteristic time for the injection energy process, which turns out to be τ_L ∼ L/U. At the same scale L the energy dissipation rate is due to ε_D ∼ U²/τ_D, where τ_D is the characteristic dissipation time which, from Equation (4), can be estimated to be of the order of τ_D ∼ L²/ν. As a result, the ratio between the energy injection rate and dissipation rate is
\frac{\epsilon_L}{\epsilon_D} \sim \frac{\tau_D}{\tau_L} \sim Re,
that is, the energy injection rate at the largest scale L is Re-times the energy dissipation rate. In other words, in the case of large Reynolds numbers, the fluid system is unable to dissipate the whole energy injected at the scale L. The excess energy must be dissipated at small scales where the dissipation process is much more efficient. This is the physical reason for the energy cascade.
Fully developed turbulence involves a hierarchical process, in which many scales of motion are involved. To look at this phenomenon it is often useful to investigate the behavior of the Fourier coefficients of the fields. Assuming periodic boundary conditions the α-th component of velocity field can be Fourier decomposed as
u_\alpha(r,t) = \sum_k u_\alpha(k,t) \exp(ik \cdot r),
where k = 2πn ∕L and n is a vector of integers. When used in the Navier–Stokes equation, it is a simple matter to show that the non-linear term becomes the convolution sum
\frac{\partial u_\alpha(k,t)}{\partial t} = M_{\alpha\beta\gamma}(k) \sum_q u_\gamma(k-q,t)\, u_\beta(q,t), \qquad (16)
where M_{αβγ}(k) = −ik_β (δ_{αγ} − k_α k_γ/k²) (for the moment we disregard the linear dissipative term).
MHD equations can be written in the same way, say by introducing the Fourier decomposition for Elsässer variables
z_\alpha^\pm(r,t) = \sum_k z_\alpha^\pm(k,t) \exp(ik \cdot r),
and using this expression in the MHD equations we obtain an equation which describes the time evolution of each Fourier mode. However, the divergence-less condition means that not all Fourier modes are independent; rather, k · z±(k,t) = 0 means that we can project the Fourier coefficients on two directions which are mutually orthogonal and orthogonal to the direction of k, that is,
z^\pm(k,t) = \sum_{a=1}^{2} z_a^\pm(k,t)\, e^{(a)}(k), \qquad (17)
with the constraint that k · e^{(a)}(k) = 0. In the presence of a background magnetic field we can use the well defined direction B0, so that
e^{(1)}(k) = \frac{ik \times B_0}{|k \times B_0|}; \qquad e^{(2)}(k) = \frac{ik}{|k|} \times e^{(1)}(k).
Note that in the linear approximation where the Elsässer variables represent the usual MHD modes, z±_1(k,t) represent the amplitude of the Alfvén mode while z±_2(k,t) represent the amplitude of the incompressible limit of the magnetosonic mode. From MHD Equations (15) we obtain the following set of equations:
\left[ \frac{\partial}{\partial t} \mp i(k \cdot c_A) \right] z_a^\pm(k,t) = \left( \frac{L}{2\pi} \right)^3 \sum_{p+q=k}^{\delta} \sum_{b,c=1}^{2} A_{abc}(-k,p,q)\, z_b^\pm(p,t)\, z_c^\mp(q,t). \qquad (18)
The coupling coefficients, which satisfy the symmetry condition A_{abc}(k,p,q) = −A_{bac}(p,k,q), are defined as
A_{abc}(-k,p,q) = [(ik)^\star \cdot e^{(c)}(q)]\, [e^{(a)*}(k) \cdot e^{(b)}(p)],
and the sum in Equation (18) is defined as
\sum_{p+q=k}^{\delta} \equiv \left( \frac{2\pi}{L} \right)^3 \sum_p \sum_q \delta_{k,p+q},
where δ_{k,p+q} is the Kronecker symbol. Quadratic non-linearities of the original equations correspond to a convolution term involving wave vectors k, p and q related by the triangular relation p = k − q. Fourier coefficients couple locally to generate an energy transfer from any pair of modes p and q to a mode k = p + q.
The pseudo-energies E±(t) are defined as
E^\pm(t) = \frac{1}{2L^3} \int_{L^3} |z^\pm(r,t)|^2\, d^3 r = \frac{1}{2} \sum_k \sum_{a=1}^{2} |z_a^\pm(k,t)|^2
and, after some algebra, it can be shown that the non-linear term of Equation (18) conserves separately E±(t). This means that both the total energy E(t) = E⁺ + E⁻ and the cross-helicity E_c(t) = E⁺ − E⁻, say the correlation between velocity and magnetic field, are conserved in absence of dissipation and external forcing terms.
In the idealized homogeneous and isotropic situation we can define the pseudo-energy tensor, which using the incompressibility condition can be written as
U_{ab}^\pm(k,t) \equiv \left( \frac{L}{2\pi} \right)^3 \left\langle z_a^\pm(k,t)\, z_b^\pm(k,t) \right\rangle = \left( \delta_{ab} - \frac{k_a k_b}{k^2} \right) q^\pm(k),
brackets being ensemble averages, where q±(k) is an arbitrary odd function of the wave vector k and represents the pseudo-energies spectral density. When integrated over all wave vectors under the assumption of isotropy
\mathrm{Tr} \left[ \int d^3 k\, U_{ab}^\pm(k,t) \right] = 2 \int_0^\infty E^\pm(k,t)\, dk,
where we introduce the spectral pseudo-energy E±(k,t) = 4πk² q±(k,t). This last quantity can be measured, and it is shown that it satisfies the equations
\frac{\partial E^\pm(k,t)}{\partial t} = T^\pm(k,t) - 2\nu k^2 E^\pm(k,t) + F^\pm(k,t). \qquad (19)
We use ν = η in order not to worry about coupling between + and − modes in the dissipative range. Since the non-linear term conserves total pseudo-energies we have
\int_0^\infty dk\, T^\pm(k,t) = 0,
so that, when integrated over all wave vectors, we obtain the energy balance equation for the total pseudo-energies
\frac{dE^\pm(t)}{dt} = \int_0^\infty dk\, F^\pm(k,t) - 2\nu \int_0^\infty dk\, k^2 E^\pm(k,t). \qquad (20)
This last equation simply means that the time variations of pseudo-energies are due to the difference between the injected power and the dissipated power, so that in a stationary state
\int_0^\infty dk\, F^\pm(k,t) = 2\nu \int_0^\infty dk\, k^2 E^\pm(k,t) = \epsilon^\pm.
Looking at Equation (19), we see that the role played by the non-linear term is that of a redistribution of energy among the various wave vectors. This is the physical meaning of the non-linear energy cascade of turbulence.
2.5 The inhomogeneous case
Equations (19) refer to the standard homogeneous and incompressible MHD. Of course, the solar wind is inhomogeneous and compressible and the energy transfer equations can be as complicated as we want by modeling all possible physical effects like, for example, the wind expansion or the inhomogeneous large-scale magnetic field. Of course, simulations of all turbulent scales require a computational effort which is beyond the actual possibilities. A way to overcome this limitation is to introduce some turbulence modeling of the various physical effects. For example, a set of equations for the cross-correlation functions of both Elsässer fluctuations have been developed independently by Marsch and Tu (1989), Zhou and Matthaeus (1990), Oughton and Matthaeus (1992), and Tu and Marsch (1990a), following Marsch and Mangeney (1987) (see review by Tu and Marsch, 1996), and are based on some rather strong assumptions: i) a two-scale separation, and ii) small-scale fluctuations are represented as a kind of stochastic process (Tu and Marsch, 1996). These equations look quite complicated, and just a comparison based on order-of-magnitude estimates can be made between them and solar wind observations (Tu and Marsch, 1996).
A different approach, introduced by Grappin et al. (1993), is based on the so-called “expanding-box model” (Grappin and Velli, 1996; Liewer et al., 2001; Hellinger et al., 2005). The model uses a transformation of variables to the moving solar wind frame that expands together with the size of the parcel of plasma as it propagates outward from the Sun. Although the model requires several simplifying assumptions, like for example lateral expansion only for the wave-packets and constant solar wind speed, as well as a second-order approximation for the coordinate transformation (Liewer et al., 2001) to remain tractable, it provides a qualitatively good description of the solar wind expansion, thus connecting the disparate scales of the plasma in the various parts of the heliosphere.
2.6 Dynamical system approach to turbulence
In the limit of fully developed turbulence, when dissipation goes to zero, an infinite range of scales is excited, that is, energy lies over all available wave vectors. Dissipation takes place at a typical dissipation length scale which depends on the Reynolds number Re through ℓ_D ∼ L Re^{−3/4} (for a Kolmogorov spectrum E(k) ∼ k^{−5/3}). In 3D numerical simulations the minimum number of grid points necessary to obtain information on the fields at these scales is given by N ∼ (L/ℓ_D)³ ∼ Re^{9/4}. This rough estimate shows that a considerable amount of memory is required when we want to perform numerical simulations with high Re. At present, typical values of Reynolds numbers reached in 2D and 3D numerical simulations are of the order of 10⁴ and 10³, respectively. At these values the inertial range spans approximately one decade or a little more.
Given the situation described above, the question of the best description of the dynamics which results from the original equations, using only a small number of degrees of freedom, becomes a very important issue. This can be achieved by introducing turbulence models which are investigated using tools of dynamical system theory (Bohr et al., 1998). Dynamical systems, then, are solutions of minimal sets of ordinary differential equations that can mimic the gross features of the energy cascade in turbulence. These studies are motivated by the famous Lorenz model (Lorenz, 1963) which, containing only three degrees of freedom, simulates the complex chaotic behavior of turbulent atmospheric flows, becoming a paradigm for the study of chaotic systems.
The Lorenz model has been used as a paradigm as far as the transition to turbulence is concerned. Actually, since the solar wind is in a state of fully developed turbulence, the topic of the transition to turbulence is not so close to the main goal of this review. However, given its importance in the theory of dynamical systems, we spend a few sentences about this central topic. Before Lorenz's chaotic model, studies on the birth of turbulence dealt with linear and, very rarely, with weakly non-linear evolution of external disturbances. The first physical model of laminar-turbulent transition is due to Landau and is reported in the fourth volume of the course on Theoretical Physics (Landau and Lifshitz, 1971). According to this model, as the Reynolds number is increased, the transition is due to an infinite series of Hopf bifurcations at fixed values of the Reynolds number. Each subsequent bifurcation adds a new incommensurate frequency to the flow, whose dynamics rapidly become quasi-periodic. Due to the infinite number of degrees of freedom involved, the quasi-periodic dynamics resembles that of a turbulent flow.
The Landau transition scenario is, however, untenable because incommensurate frequencies cannot exist without coupling between them. Ruelle and Takens (1971) proposed a new mathematical model, according to which after a few, usually three, Hopf bifurcations the flow becomes suddenly chaotic. In the phase space this state is characterized by a very intricate attracting subset, a strange attractor. The flow corresponding to this state is highly irregular and strongly dependent on initial conditions. This characteristic feature is now known as the butterfly effect and represents the true definition of deterministic chaos. These authors pointed to the strange time behavior of the Lorenz model as an example of the occurrence of a strange attractor. The model is a paradigm for the occurrence of turbulence in a deterministic system; it reads
\frac{dx}{dt} = \mathrm{Pr}\,(y - x), \qquad \frac{dy}{dt} = Rx - y - xz, \qquad \frac{dz}{dt} = xy - bz, \qquad (21)
where x(t), y(t), and z(t) represent the first three modes of a Fourier expansion of the fluid convective equations in the Boussinesq approximation, Pr is the Prandtl number, b is a geometrical parameter, and R is the ratio between the Rayleigh number and the critical Rayleigh number for convective motion. The time evolution of the variables x(t), y(t), and z(t) is reported in Figure 12. A reproduction of the Lorenz butterfly attractor, namely the projection of the variables on the plane (x,z), is shown in Figure 13. A few years later, Gollub and Swinney (1975) performed very sophisticated experiments, concluding that the transition to turbulence in a flow between co-rotating cylinders is described by the Ruelle and Takens (1971) model rather than by the Landau scenario.
Figure 12: Time evolution of the variables x(t), y(t), and z(t) in the Lorenz model (see Equation (21)). This figure has been obtained by using the parameters Pr = 10, b = 8/3, and R = 28.
Figure 13: The Lorenz butterfly attractor, namely the time behavior of the variable z(t) vs. x(t) as obtained from the Lorenz model (see Equation (21)). This figure has been obtained by using the parameters Pr = 10, b = 8/3, and R = 28.
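A minimal numerical sketch of system (21), with the same parameter values quoted in the figure captions, reproduces the behavior shown in Figures 12 and 13. This is an illustrative fixed-step Runge-Kutta integration, not the code used to produce the figures.

import numpy as np

Pr, b, R = 10.0, 8.0 / 3.0, 28.0   # parameters of Figures 12 and 13

def lorenz(state):
    x, y, z = state
    return np.array([Pr * (y - x), R * x - y - x * z, x * y - b * z])

def rk4_step(state, dt):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps = 0.01, 5000
trajectory = np.empty((n_steps, 3))
state = np.array([1.0, 1.0, 1.0])
for i in range(n_steps):
    state = rk4_step(state, dt)
    trajectory[i] = state

# Plotting trajectory[:, 2] against trajectory[:, 0] gives the butterfly of Figure 13.
print(trajectory[-1])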
After this discovery, the strange attractor model gained a lot of popularity, thus stimulating a large number of further studies on the time evolution of non-linear dynamical systems. An enormous number of papers on chaos rapidly appeared in the literature, in practically all fields of physics, and the transition to chaos became a new topic. Of course, further studies on chaos rapidly lost touch with turbulence studies, and turbulence, as reported by Feynman et al. (1977), still remains "the last great unsolved problem of classical physics." Furthermore, we would like to cite recent theoretical efforts made by Chian and coworkers (Chian et al., 1998, 2003) related to the onset of Alfvénic turbulence. These authors numerically solved the derivative non-linear Schrödinger equation (Mjølhus, 1976; Ghosh and Papadopoulos, 1987), which governs the spatio-temporal dynamics of non-linear Alfvén waves, and found that Alfvénic intermittent turbulence is characterized by strange attractors. Note that the physics involved in the derivative non-linear Schrödinger equation, and in particular the spatio-temporal dynamics of non-linear Alfvén waves, cannot be described by the usual incompressible MHD equations. Rather, dispersive effects are required. At variance with the usual MHD, this can be satisfied by requiring that the effect of ion inertia be taken into account. This results in a generalized Ohm's law that includes a (j × B)-term, which represents the compressible Hall correction to MHD, say the so-called compressible Hall-MHD model.
In this context turbulence can evolve via two distinct routes: Pomeau–Manneville intermittency (Pomeau and Manneville, 1980) and crisis-induced intermittency (Ott and Sommerer, 1994). Both types of chaotic transitions follow episodic switching between different temporal behaviors. In one case (Pomeau–Manneville) the behavior of the magnetic fluctuations evolve from nearly periodic to chaotic while, in the other case the behavior intermittently assumes weakly chaotic or strongly chaotic features.
2.7 Shell models for turbulence cascade
Since numerical simulations, in some cases, cannot be used, simple dynamical systems can be introduced to investigate, for example, statistical properties of turbulent flows which can be compared with observations. These models, which try to mimic the gross features of the time evolution of spectral Navier–Stokes or MHD equations, are often called “shell models” or “discrete cascade models”. Starting from the old papers by Siggia (1977) different shell models have been introduced in the literature for 3D fluid turbulence (Biferale, 2003). MHD shell models have been introduced to describe the MHD turbulent cascade (Plunian et al., 2012), starting from the paper by Gloaguen et al. (1985).
The most used shell model is usually quoted in the literature as the GOY model, and was introduced some time ago by Gledzer (1973) and by Ohkitani and Yamada (1989). Apart from the first MHD shell model (Gloaguen et al., 1985), further models, like those by Frick and Sokoloff (1998) and Giuliani and Carbone (1998), have been introduced and investigated in detail. In particular, the latter ones represent the counterpart of the hydrodynamic GOY model, that is, they coincide with the usual GOY model when the magnetic variables are set to zero.
In the following, we will refer to the MHD shell model as the FSGC model. The shell model can be built up through four different steps:
a) Introduce discrete wave vectors:
As a first step we divide the wave vector space into a discrete number of shells whose radii grow according to a power law k_n = k_0 λⁿ, where λ > 1 is the inter-shell ratio, k_0 is the fundamental wave vector related to the largest available length scale L, and n = 1, 2, …, N.
b) Assign to each shell discrete scalar variables:
Each shell is assigned two or more complex scalar variables u_n(t) and b_n(t), or Elsässer variables Z_n^±(t) = u_n(t) ± b_n(t). These variables describe the chaotic dynamics of modes in the shell of wave vectors between k_n and k_{n+1}. It is worth noting that the discrete variable, mimicking the average behavior of Fourier modes within each shell, represents characteristic fluctuations across eddies at the scale ℓ_n ∼ k_n^{−1}. That is, the fields have the same scalings as field differences, for example Z_n^± ∼ |Z^±(x + ℓ_n) − Z^±(x)| ∼ ℓ_n^h in fully developed turbulence. In this way, the possibility to describe spatial behavior within the model is ruled out. We can only get, from a dynamical shell model, time series for shell variables at a given k_n, and we lose the fact that turbulence is a typical temporal and spatial complex phenomenon.
c) Introduce a dynamical model which describes non-linear evolution:
Looking at Equation (18), a model must have quadratic non-linearities among opposite variables Z_n^±(t) and Z_n^∓(t), and must couple different shells with free coupling coefficients.
d) Fix as much as possible the coupling coefficients:
This last step is not standard. A numerical investigation of the model might require the scanning of the properties of the system when all coefficients are varied. Coupling coefficients can be fixed by imposing the conservation laws of the original equations, namely the total pseudo-energies
$$E^{\pm}(t) = \frac{1}{2}\sum_n \left|Z_n^{\pm}\right|^2,$$
that means the conservation of both the total energy and the cross-helicity:
$$E(t) = \frac{1}{2}\sum_n \left(|u_n|^2 + |b_n|^2\right); \qquad H_c(t) = 2\sum_n \Re e\,(u_n b_n^{*}),$$
where ℜe indicates the real part of the product u_n b_n^*. As we said before, shell models cannot describe the spatial geometry of non-linear interactions in turbulence, so that we lose the possibility of distinguishing between two-dimensional and three-dimensional turbulent behavior. The distinction is, however, of primary importance, for example as far as the dynamo effect is concerned in MHD. However, there is a third invariant which we can impose, namely
$$H(t) = \sum_n (-1)^n \frac{|b_n|^2}{k_n^{\alpha}}, \qquad (22)$$
which can be dimensionally identified as the magnetic helicity when α = 1, so that the shell model so obtained is able to mimic a kind of 3D MHD turbulence (Giuliani and Carbone, 1998).
After some algebra, taking into account both the dissipative and forcing terms, the FSGC model can be written as
$$\frac{dZ_n^{\pm}}{dt} = i k_n \Phi_n^{\pm *} + \frac{\nu \pm \mu}{2} k_n^2 Z_n^{+} + \frac{\nu \mp \mu}{2} k_n^2 Z_n^{-} + F_n^{\pm}, \qquad (23)$$
$$\Phi_n^{\pm} = \frac{2 - a - c}{2}\, Z_{n+2}^{\pm} Z_{n+1}^{\mp} + \frac{a + c}{2}\, Z_{n+1}^{\pm} Z_{n+2}^{\mp} + \frac{c - a}{2\lambda}\, Z_{n-1}^{\pm} Z_{n+1}^{\mp} - \frac{a + c}{2\lambda}\, Z_{n-1}^{\mp} Z_{n+1}^{\pm} - \frac{c - a}{2\lambda^2}\, Z_{n-2}^{\mp} Z_{n-1}^{\pm} - \frac{2 - a - c}{2\lambda^2}\, Z_{n-1}^{\mp} Z_{n-2}^{\pm}, \qquad (24)$$
where λ = 2, a = 1/2, and c = 1/3. In the following, we will consider only the case where the dissipative coefficients are the same, i.e., ν = μ.
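As a concrete illustration of steps (a)–(d), the following is a minimal numerical sketch of the FSGC model defined by Equations (23, 24), written in Python. The parameter values, forcing amplitude and time stepping are illustrative choices, not taken from any particular published run; with ν = μ the dissipative terms reduce to an ordinary viscous damping, written here with an explicit minus sign so that pseudo-energy decays.

```python
import numpy as np

# Illustrative parameters (not from any specific published run)
N    = 19                 # number of shells
lam  = 2.0                # inter-shell ratio lambda
k0   = 1.0
a, c = 0.5, 1.0 / 3.0
nu   = 1e-6               # nu = mu (equal dissipative coefficients)
k    = k0 * lam ** np.arange(1, N + 1)

def sh(Z, m):
    """Shell value Z[m], or 0 outside the resolved range (boundary shells)."""
    return Z[m] if 0 <= m < N else 0.0

def phi(Zs, Zo):
    """Non-linear term Phi_n^± of Eq. (24); Zs = Z^±, Zo = Z^∓."""
    p = np.zeros(N, dtype=complex)
    for n in range(N):
        p[n] = ((2 - a - c) / 2              * sh(Zs, n + 2) * sh(Zo, n + 1)
                + (a + c) / 2                * sh(Zs, n + 1) * sh(Zo, n + 2)
                + (c - a) / (2 * lam)        * sh(Zs, n - 1) * sh(Zo, n + 1)
                - (a + c) / (2 * lam)        * sh(Zo, n - 1) * sh(Zs, n + 1)
                - (c - a) / (2 * lam**2)     * sh(Zo, n - 2) * sh(Zs, n - 1)
                - (2 - a - c) / (2 * lam**2) * sh(Zo, n - 1) * sh(Zs, n - 2))
    return p

def rhs(Zp, Zm, Fp, Fm):
    """Right-hand side of Eq. (23); damping sign chosen so pseudo-energy decays."""
    dZp = 1j * k * np.conj(phi(Zp, Zm)) - nu * k**2 * Zp + Fp
    dZm = 1j * k * np.conj(phi(Zm, Zp)) - nu * k**2 * Zm + Fm
    return dZp, dZm

# Constant forcing on the largest scale only, crude explicit Euler stepping (demo only)
rng = np.random.default_rng(1)
Zp = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Zm = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Fp = np.zeros(N, dtype=complex); Fm = np.zeros(N, dtype=complex)
Fp[0] = Fm[0] = (1 + 1j) * 1e-2
dt = 1e-4
for _ in range(50_000):
    dZp, dZm = rhs(Zp, Zm, Fp, Fm)
    Zp, Zm = Zp + dt * dZp, Zm + dt * dZm
```

Time series of the shell variables Z_n^±(t) produced by such a run are the quantities to which the statistical tools discussed below (structure functions, spectra, pseudo-energy fluxes) are applied.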
2.8 The phenomenology of fully developed turbulence: Fluid-like case
Here we present the phenomenology of fully developed turbulence, as far as the scaling properties are concerned. In this way we are able to recover a universal form for the spectral pseudo-energy in the stationary case. In real space, a common tool to investigate statistical properties of turbulence is represented by the field increments Δz_ℓ^±(r) = [z^±(r + ℓ) − z^±(r)] · e, where e is the longitudinal direction. These stochastic quantities represent fluctuations across eddies at the scale ℓ. The scaling invariance of the MHD equations (cf. Section 2.3), from a phenomenological point of view, implies that we expect solutions where Δz_ℓ^± ∼ ℓ^h. All the statistical properties of the field depend only on the scale ℓ, on the mean pseudo-energy dissipation rates 𝜀^±, and on the viscosity ν. Also, 𝜀^± is supposed to be the common value of the injection, transfer and dissipation rates. Moreover, the dependence on the viscosity only arises at small scales, near the bottom of the inertial range. Under these assumptions the typical pseudo-energy dissipation rate per unit mass scales as 𝜀^± ∼ (Δz_ℓ^±)²/t_ℓ^±. The time t_ℓ^± associated with the scale ℓ is the typical time needed for the energy to be transferred to a smaller scale, say the eddy turnover time t_ℓ^± ∼ ℓ/Δz_ℓ^∓, so that
$$\varepsilon^{\pm} \sim (\Delta z_\ell^{\pm})^2\, \Delta z_\ell^{\mp} / \ell.$$
When we conjecture that both Δz^± fluctuations have the same scaling laws, namely Δz^± ∼ ℓ^h, we recover the Kolmogorov scaling for the field increments
$$\Delta z_\ell^{\pm} \sim (\varepsilon^{\pm})^{1/3}\,\ell^{1/3}. \qquad (25)$$
Usually, we refer to this scaling as the K41 model (Kolmogorov, 1941, 1991; Frisch, 1995). Note that, since from dimensional considerations the scaling of the energy transfer rate should be 𝜀^± ∼ ℓ^{1−3h}, h = 1/3 is the choice that guarantees the absence of scaling for 𝜀^±.
In real space, turbulence properties can be described using either the probability distribution functions (PDFs hereafter) of increments, or the longitudinal structure functions, which represent nothing but the higher-order moments of the field. Disregarding the magnetic field, in purely fully developed fluid turbulence this is defined as S_ℓ^{(p)} = ⟨Δu_ℓ^p⟩. These quantities, in the inertial range, behave as a power law S_ℓ^{(p)} ∼ ℓ^{ξ_p}, so that it is interesting to compute the set of scaling exponents ξ_p. Using, from a phenomenological point of view, the scaling for field increments (see Equation (25)), it is straightforward to compute the scaling laws S_ℓ^{(p)} ∼ ℓ^{p/3}. Then ξ_p = p/3 turns out to be a linear function of the order p.
When we assume the scaling law Δz_ℓ^± ∼ ℓ^h, we can compute the high-order moments of the structure functions for increments of the Elsässer variables, namely ⟨(Δz_ℓ^±)^p⟩ ∼ ℓ^{ξ_p}, thus obtaining a linear scaling ξ_p = p/3, similar to usual fluid flows. For Gaussianly distributed fields, a particular role is played by the second-order moment, because all moments can be computed from S_ℓ^{(2)}. It is straightforward to translate the dimensional analysis results to Fourier spectra. The spectral properties of the field can be recovered from S_ℓ^{(2)}, say in the homogeneous and isotropic case
$$S_\ell^{(2)} = 4\int_0^{\infty} E(k)\left(1 - \frac{\sin k\ell}{k\ell}\right) dk,$$
where k ∼ 1/ℓ is the wave vector, so that in the inertial range where Equation (41) is verified
$$E(k) \sim \varepsilon^{2/3} k^{-5/3}. \qquad (26)$$
The Kolmogorov spectrum (see Equation (26)) is largely observed in all experimental investigations of turbulence, and is considered as the main result of the K41 phenomenology of turbulence (Frisch, 1995). However, spectral analysis does not provide a complete description of the statistical properties of the field, unless this has Gaussian properties. The same considerations can be made for the spectral pseudo-energies E^±(k), which are related to the second-order structure functions ⟨[Δz_ℓ^±]²⟩.
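A quick way to see what Equation (26) implies in practice is to estimate the spectral index of a turbulent time series and compare it with −5/3 (or with the −3/2 expected in the magnetically-dominated case discussed below). The short Python sketch that follows is only illustrative: the signal is synthetic, and the frequency band used for the fit stands in for whatever inertial range a real data set exhibits.

```python
import numpy as np

def spectral_slope(u, dt, f_lo, f_hi):
    """Estimate the power-law index of the power spectrum of a 1-D signal u
    (sampled every dt) by a least-squares fit of log E(f) vs log f over the
    band [f_lo, f_hi], assumed to lie inside the inertial range."""
    u = np.asarray(u) - np.mean(u)
    spec = np.abs(np.fft.rfft(u))**2 * dt / len(u)   # one-sided periodogram
    f = np.fft.rfftfreq(len(u), d=dt)
    band = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log(f[band]), np.log(spec[band]), 1)
    return slope   # ~ -5/3 for a Kolmogorov-like cascade, ~ -3/2 for IK

# Example with a synthetic signal built to have a -5/3 spectrum (placeholder for real data)
rng = np.random.default_rng(0)
f = np.fft.rfftfreq(2**16, d=1.0); f[0] = np.inf       # suppress the mean
amp = f ** (-5.0 / 6.0) * np.exp(2j * np.pi * rng.random(f.size))
u = np.fft.irfft(amp)
print(spectral_slope(u, 1.0, 1e-3, 1e-1))              # close to -5/3
```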
2.9 The phenomenology of fully developed turbulence: Magnetically-dominated case
The phenomenology of the magnetically-dominated case has been investigated by Iroshnikov (1963) and Kraichnan (1965), then developed by Dobrowolny et al. (1980b) to tentatively explain the occurrence of the observed Alfvénic turbulence, and finally by Carbone (1993) and Biskamp (1993) to get scaling laws for structure functions. It is based on the Alfvén effect, that is, the decorrelation of interacting eddies, which can be explained phenomenologically as follows. Since non-linear interactions happen only between oppositely propagating fluctuations, they are slowed down (with respect to the fluid-like case) by the sweeping of the fluctuations across each other. This means that 𝜀^± ∼ (Δz_ℓ^±)²/T_ℓ^±, but the characteristic time T_ℓ^± required to efficiently transfer energy from an eddy to another eddy at smaller scales cannot be the eddy-turnover time; rather, it is increased by a factor t_ℓ^±/t_A (t_A ∼ ℓ/c_A < t_ℓ^± is the Alfvén time), so that T_ℓ^± ∼ (t_ℓ^±)²/t_A. Then, immediately
$$\varepsilon^{\pm} \sim \frac{[\Delta z_\ell^{\pm}]^2\,[\Delta z_\ell^{\mp}]^2}{\ell\, c_A}.$$
This means that both ± modes are transferred at the same rate to small scales, namely 𝜖^+ ∼ 𝜖^− ∼ 𝜖, and this is the conclusion drawn by Dobrowolny et al. (1980b). In reality, this is not fully correct: the Alfvén effect implies that the energy transfer rates have the same scaling laws for the ± modes, but we cannot say anything about the amplitudes of 𝜀^+ and 𝜀^− (Carbone, 1993). Using the usual scaling law for fluctuations, it can be shown that the scaling behavior 𝜀′ → λ^{1−4h} 𝜀 holds. Then, when the energy transfer rate is constant, we find a scaling law different from that of Kolmogorov and, in particular,
$$\Delta z_\ell^{\pm} \sim (\varepsilon c_A)^{1/4}\,\ell^{1/4}. \qquad (27)$$
Using this phenomenology, the high-order moments of fluctuations are given by S_ℓ^{(p)} ∼ ℓ^{p/4}. Even in this case, ξ_p = p/4 turns out to be a linear function of the order p. The pseudo-energy spectrum can be easily found to be
$$E^{\pm}(k) \sim (\varepsilon c_A)^{1/2} k^{-3/2}. \qquad (28)$$
This is the Iroshnikov–Kraichnan spectrum. However, in a situation in which there is a balance between the linear Alfvén time scale, or wave period, and the non-linear time scale needed to transfer energy to smaller scales, the energy cascade is said to be critically balanced (Goldreich and Sridhar, 1995). In these conditions, it can be shown that the power spectrum P(k) would scale as f^{−5/3} when the angle θ_B between the mean field direction and the flow direction is 90°, while the scaling would follow f^{−2} in the case θ_B = 0°, and the spectrum would also have a smaller energy content than in the other case.
2.10 Some exact relationships
So far, we have been discussing the inertial range of turbulence. What this means from a heuristic point of view is somewhat clear, but when we try to identify the inertial range from the spectral properties of turbulence, in general the best we can do is to identify the inertial range with the intermediate range of scales where a Kolmogorov spectrum is observed. The often used identity inertial range ≃ intermediate range is somewhat arbitrary. In this regard, a very important result on turbulence, due to Kolmogorov (1941, 1991), is the so-called "4/5-law" which, being obtained from the Navier–Stokes equation, is "…one of the most important results in fully developed turbulence because it is both exact and nontrivial" (cf. Frisch, 1995). As a matter of fact, Kolmogorov analytically derived the following exact relation for the third-order structure function of velocity fluctuations:
$$\langle (\Delta v_{\parallel}(r,\ell))^3 \rangle = -\frac{4}{5}\,\epsilon\,\ell, \qquad (29)$$
where r is the sampling direction, ℓ is the corresponding scale, and 𝜖 is the mean energy dissipation per unit mass, assumed to be finite and nonvanishing.
This important relation can be obtained in a more general framework from the MHD equations. A Yaglom relation for MHD can be obtained using the analogy of the MHD equations with a transport equation, so that we can obtain a relation similar to the Yaglom equation for the transport of a passive quantity (Monin and Yaglom, 1975). Using the above analogy, the Yaglom relation was extended some time ago to MHD turbulence by Chandrasekhar (1967), and recently it has been revised by Politano et al. (1998) and Politano and Pouquet (1998) in the framework of solar wind turbulence. In the following section we report an alternative and more general derivation of the Yaglom law using structure functions (Sorriso-Valvo et al., 2007; Carbone et al., 2009c).
2.11 Yaglom’s law for MHD turbulence
To obtain a general law we start from the incompressible MHD equations. If we write the MHD equations twice, for two different and independent points x_i and x_i′ = x_i + ℓ_i, by subtraction we obtain an equation for the vector differences Δz_i^± = (z_i^±)′ − z_i^±. Using the hypothesis of independence of the points x_i′ and x_i with respect to derivatives, namely ∂_i(z_j^±)′ = ∂_i′ z_j^± = 0 (where ∂_i′ represents the derivative with respect to x_i′), we get
$$\partial_t \Delta z_i^{\pm} + \Delta z_\alpha^{\mp}\,\partial'_\alpha \Delta z_i^{\pm} + z_\alpha^{\mp}(\partial_\alpha + \partial'_\alpha)\Delta z_i^{\pm} = -(\partial_i + \partial'_i)\Delta P + (\partial'^{\,2}_\alpha + \partial^2_\alpha)\left[\nu^{\pm}\Delta z_i^{+} + \nu^{\mp}\Delta z_i^{-}\right] \qquad (30)$$
(ΔP = P′_tot − P_tot). We look for an equation for the second-order correlation tensor ⟨Δz_i^± Δz_j^±⟩ related to the pseudo-energies. Actually, the more general approach would be to look for a mixed tensor, namely ⟨Δz_i^± Δz_j^∓⟩, taking into account not only both pseudo-energies but also the time evolution of the mixed correlations ⟨z_i^+ z_j^−⟩ and ⟨z_i^− z_j^+⟩. However, using the DIA closure by Kraichnan, it is possible to show that these elements are in general poorly correlated (Veltri, 1980). Since we are interested in the energy cascade, we limit ourselves to the most interesting equation that describes correlations between Alfvénic fluctuations of the same sign. To obtain the equations for the pseudo-energies we multiply Equation (30) by Δz_j^±; then by averaging we get
$$\partial_t \langle \Delta z_i^{\pm} \Delta z_j^{\pm} \rangle + \frac{\partial}{\partial \ell_\alpha} \langle \Delta Z_\alpha^{\mp} (\Delta z_i^{\pm} \Delta z_j^{\pm}) \rangle = -\Lambda_{ij} - \Pi_{ij} + 2\nu \frac{\partial^2}{\partial \ell_\alpha^2} \langle \Delta z_i^{\pm} \Delta z_j^{\pm} \rangle - \frac{4}{3} \frac{\partial}{\partial \ell_\alpha} \left(\epsilon_{ij}^{\pm} \ell_\alpha\right), \qquad (31)$$
where we used the hypothesis of local homogeneity and incompressibility. In Equation (31) we defined the average dissipation tensor
$$\epsilon_{ij}^{\pm} = \nu \langle (\partial_\alpha Z_i^{\pm})(\partial_\alpha Z_j^{\pm}) \rangle. \qquad (32)$$
The first and second terms on the r.h.s. of Equation (31) represent, respectively, a tensor related to large-scale inhomogeneities,
$$\Lambda_{ij} = \langle z_\alpha^{\mp} (\partial'_\alpha + \partial_\alpha)(\Delta z_i^{\pm} \Delta z_j^{\pm}) \rangle, \qquad (33)$$
and the tensor related to the pressure term
$$\Pi_{ij} = \langle \Delta z_j^{\pm} (\partial'_i + \partial_i)\Delta P + \Delta z_i^{\pm} (\partial'_j + \partial_j)\Delta P \rangle. \qquad (34)$$
Furthermore, in order not to worry about couplings between Elsässer variables in the dissipative terms, we make the usual simplifying assumption that the kinematic viscosity is equal to the magnetic diffusivity, that is ν^± = ν^∓ = ν. Equation (31) is an exact equation for anisotropic MHD turbulence that links the second-order complete tensor to the third-order mixed tensor via the average dissipation rate tensor. Using the hypothesis of global homogeneity the term Λ_ij = 0, while assuming local isotropy Π_ij = 0. The equation for the trace of the tensor can be written as
$$\partial_t \langle |\Delta z_i^{\pm}|^2 \rangle + \frac{\partial}{\partial \ell_\alpha} \langle \Delta Z_\alpha^{\mp} |\Delta z_i^{\pm}|^2 \rangle = 2\nu \frac{\partial^2}{\partial \ell_\alpha^2} \langle |\Delta z_i^{\pm}|^2 \rangle - \frac{4}{3} \frac{\partial}{\partial \ell_\alpha} \left(\epsilon_{ii}^{\pm} \ell_\alpha\right), \qquad (35)$$
where the various quantities depend on the vector ℓ_α. Moreover, by considering only the trace we rule out the possibility of investigating anisotropies related to different orientations of vectors within the second-order moment. It is worthwhile to remark here that only the diagonal elements of the dissipation rate tensor, namely 𝜖_{ii}^±, are positive definite while, in general, the off-diagonal elements 𝜖_{ij}^± are not. For a stationary state, Equation (35) can be written as the divergenceless condition of a quantity involving the third-order correlations and the dissipation rates,
$$\frac{\partial}{\partial \ell_\alpha} \left[ \langle \Delta z_\alpha^{\mp} |\Delta z_i^{\pm}|^2 \rangle - 2\nu \frac{\partial}{\partial \ell_\alpha} \langle |\Delta z_i^{\pm}|^2 \rangle + \frac{4}{3} \epsilon_{ii}^{\pm} \ell_\alpha \right] = 0, \qquad (36)$$
from which we can obtain the Yaglom relation by projecting Equation (36) along the longitudinal direction ℓ_α = ℓ e_r. This operation involves the assumption that the flow is locally isotropic, that is, fields depend locally only on the separation ℓ, so that
$$\left( \frac{2}{\ell} + \frac{\partial}{\partial \ell} \right) \left[ \langle \Delta z_\ell^{\mp} |\Delta z_i^{\pm}|^2 \rangle - 2\nu \frac{\partial}{\partial \ell} \langle |\Delta z_i^{\pm}|^2 \rangle + \frac{4}{3} \epsilon_{ii}^{\pm} \ell \right] = 0. \qquad (37)$$
The only solution that is compatible with the absence of singularity in the limit ℓ → 0 is
$$\langle \Delta z_\ell^{\mp} |\Delta z_i^{\pm}|^2 \rangle = 2\nu \frac{\partial}{\partial \ell} \langle |\Delta z_i^{\pm}|^2 \rangle - \frac{4}{3} \epsilon_{ii}^{\pm} \ell, \qquad (38)$$
which reduces to the Yaglom law for MHD turbulence as obtained by Politano and Pouquet (1998) in the inertial range when ν → 0:
$$Y_\ell^{\pm} \equiv \langle \Delta z_\ell^{\mp} |\Delta z_i^{\pm}|^2 \rangle = -\frac{4}{3} \epsilon_{ii}^{\pm} \ell. \qquad (39)$$
Finally, in the fluid-like case where z_i^+ = z_i^- = v_i we obtain the usual Yaglom law for fluid flows,
$$\langle \Delta v_\ell |\Delta v_i|^2 \rangle = -\frac{4}{3} \epsilon \ell, \qquad (40)$$
which in the isotropic case, where ⟨Δv_ℓ³⟩ = 3⟨Δv_ℓ Δv_y²⟩ = 3⟨Δv_ℓ Δv_z²⟩ (Monin and Yaglom, 1975), immediately reduces to the Kolmogorov law
$$\langle \Delta v_\ell^3 \rangle = -\frac{4}{5} \epsilon \ell \qquad (41)$$
(the separation ℓ has been taken along the streamwise x-direction).
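In practice, Equations (39)–(41) are often turned around and used to estimate the mean (pseudo-)energy transfer rate from data, by fitting the linear range of the third-order mixed structure function. The Python fragment below is only a sketch of that procedure: the arrays of Elsässer components and the set of lags taken to represent the inertial range are placeholders for whatever a real data set provides.

```python
import numpy as np

def yaglom_mhd(zp, zm, lags):
    """Mixed third-order structure functions Y_l^± = <dz_l^∓ |dz_i^±|^2> (Eq. 39).
    zp, zm: arrays of shape (Npoints, 3) holding the Elsasser variables z^±,
    with column 0 taken as the longitudinal component; lags: integer separations."""
    Yp, Ym = [], []
    for l in lags:
        dzp = zp[l:] - zp[:-l]
        dzm = zm[l:] - zm[:-l]
        Yp.append(np.mean(dzm[:, 0] * np.sum(dzp**2, axis=1)))
        Ym.append(np.mean(dzp[:, 0] * np.sum(dzm**2, axis=1)))
    return np.array(Yp), np.array(Ym)

def eps_from_yaglom(Y, ell):
    """Fit Y(ell) = -(4/3) * eps * ell over the (assumed) inertial range
    and return the corresponding dissipation-rate estimate eps."""
    slope = np.polyfit(ell, Y, 1)[0]   # dY/d(ell)
    return -0.75 * slope               # eps = -(3/4) dY/d(ell)
```

The negative sign expected for Y_ℓ^± in a direct cascade is exactly the asymmetry remarked on in the following paragraph.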
The relations we obtained can be used, or better, in a certain sense they might be used, as a formal definition of the inertial range. Since they are exact relationships derived from the Navier–Stokes and MHD equations under the usual hypotheses, they represent a kind of "zeroth-order" condition on experimental and theoretical analyses of the inertial range properties of turbulence. It is worthwhile to remark on the two main properties of the Yaglom laws. The first one is the fact that, as clearly appears from the Kolmogorov relation (Kolmogorov, 1941), the third-order moment of the velocity fluctuations is different from zero. This means that some non-Gaussian features must be at work, or, which is the same, some hidden phase correlations. Turbulence is something more complicated than random fluctuations with a certain slope for the spectral density. The second feature is the minus sign which appears in the various relations. This is essential when the sign of the energy cascade must be inferred from the Yaglom relations, the negative asymmetry being a signature of a direct cascade towards smaller scales. Note that Equation (39) has been obtained in the limit of zero viscosity, assuming that the pseudo-energy dissipation rates 𝜖_{ii}^± remain finite in this limit. In usual fluid flows the analogous hypothesis, namely that 𝜖 remains finite in the limit ν → 0, is an experimental fact, confirmed by experiments in different conditions (Frisch, 1995). In MHD turbulent flows this remains a conjecture, confirmed only by high-resolution numerical simulations (Mininni and Pouquet, 2009).
From Equation (36), by defining ΔZ_i^± = Δv_i ± Δb_i we immediately obtain the two equations
$$\frac{\partial}{\partial \ell_\alpha} \left[ \langle \Delta v_\alpha \Delta E \rangle - 2\langle \Delta b_\alpha \Delta C \rangle - 2\nu \frac{\partial}{\partial \ell_\alpha} \langle \Delta E \rangle + \frac{4}{3} \epsilon_E \ell_\alpha \right] = 0, \qquad (42)$$
$$\frac{\partial}{\partial \ell_\alpha} \left[ -\langle \Delta b_\alpha \Delta E \rangle + 2\langle \Delta v_\alpha \Delta C \rangle - 4\nu \frac{\partial}{\partial \ell_\alpha} \langle \Delta C \rangle + \frac{4}{3} \epsilon_C \ell_\alpha \right] = 0, \qquad (43)$$
where we defined the energy fluctuations ΔE = |Δv_i|² + |Δb_i|² and the correlation fluctuations ΔC = Δv_i Δb_i. In the same way, the quantities 𝜖_E = (𝜖_{ii}^+ + 𝜖_{ii}^-)/2 and 𝜖_C = (𝜖_{ii}^+ − 𝜖_{ii}^-)/2 represent the energy and correlation dissipation rates, respectively. By projecting once more on the longitudinal direction, and assuming vanishing viscosity, we obtain the Yaglom law written in terms of velocity and magnetic fluctuations:
$$\langle \Delta v_\ell \Delta E \rangle - 2\langle \Delta b_\ell \Delta C \rangle = -\frac{4}{3} \epsilon_E \ell, \qquad (44)$$
$$-\langle \Delta b_\ell \Delta E \rangle + 2\langle \Delta v_\ell \Delta C \rangle = -\frac{4}{3} \epsilon_C \ell. \qquad (45)$$
2.12 Density-mediated Elsässer variables and Yaglom’s law
Relation (39), which is of general validity within MHD turbulence, requires local characteristics of the turbulent fluid flow which are not always satisfied in the solar wind flow, namely large-scale homogeneity, isotropy, and incompressibility. Density fluctuations in the solar wind have a low amplitude, so that a nearly incompressible MHD framework is usually considered (Montgomery et al., 1987; Matthaeus and Brown, 1988; Zank and Matthaeus, 1993; Matthaeus et al., 1991; Bavassano and Bruno, 1995). However, compressible fluctuations are observed, typically convected structures characterized by anticorrelation between kinetic pressure and magnetic pressure (Tu and Marsch, 1994). Properties and interactions of the basic MHD modes in the compressive case have also been considered (Goldreich and Sridhar, 1995; Cho and Lazarian, 2002).
A first attempt to include density fluctuations in the framework of fluid turbulence was due to Lighthill (1955). He pointed out that, in a compressible energy cascade, the mean energy transfer rate per unit volume 𝜖_V ∼ ρv³/ℓ should be constant in a statistical sense (v being the characteristic velocity fluctuation at the scale ℓ), thus obtaining the scaling relation v ∼ (ℓ/ρ)^{1/3}. Fluctuations of a density-weighted velocity field u ≡ ρ^{1/3} v should thus follow the usual Kolmogorov scaling u³ ∼ ℓ. The same phenomenological arguments can be introduced in MHD turbulence (Carbone et al., 2009a) by considering the pseudo-energy dissipation rates per unit volume 𝜖_V^± = ρ𝜖_{ii}^± and introducing density-weighted Elsässer fields, defined as w^± ≡ ρ^{1/3} z^±. A relation equivalent to the Yaglom-type relation (39),
$$W_\ell^{\pm} \equiv \langle \rho \rangle^{-1} \langle \Delta w_\ell^{\mp} |\Delta w_i^{\pm}|^2 \rangle = C\, \epsilon_{ii}^{\pm} \ell, \qquad (46)$$
(C is some constant assumed to be of the order of unity), should then hold for the density-weighted increments Δw^±. The relation W_ℓ^± reduces to Y_ℓ^± in the case of constant density, allowing for comparison between the Yaglom law for incompressible MHD flows and its compressible counterpart. Despite its simple phenomenological derivation, the introduction of the density fluctuations in the Yaglom-type scaling (46) should describe the turbulent cascade for compressible fluid (or magnetofluid) turbulence. Even if the modified Yaglom law (46) is not an exact relation like (39), being obtained from phenomenological considerations, the corresponding law for the velocity field in a compressible fluid flow has been observed in numerical simulations, where the value of the constant C turns out to be negative and of the order of unity (Padoan et al., 2007; Kowal and Lazarian, 2007).
2.13 Yaglom’s law in the shell model for MHD turbulence
As far as the shell model is concerned, the existence of a cascade towards small scales is expressed by an exact relation, which is equivalent to Equation (40). Using Equation (23), the scale-by-scale pseudo-energy budget is given by
$$\frac{d}{dt}\sum_n |Z_n^{\pm}|^2 = \sum_n k_n\, \mathrm{Im}\left[T_n^{\pm}\right] - 2\sum_n \nu k_n^2 |Z_n^{\pm}|^2 + 2\sum_n \Re e\left[Z_n^{\pm} F_n^{\pm *}\right].$$
The second and third terms on the right hand side represent, respectively, the rate of pseudo-energy dissipation and the rate of pseudo-energy injection. The first term represents the flux of pseudo-energy along the wave vectors, responsible for the redistribution of pseudo-energies on the wave vectors, and is given by
$$T_n^{\pm} = (a + c)\, Z_n^{\pm} Z_{n+1}^{\pm} Z_{n+2}^{\mp} + \frac{2 - a - c}{\lambda}\, Z_{n-1}^{\pm} Z_{n+1}^{\pm} Z_n^{\mp} + (2 - a - c)\, Z_n^{\pm} Z_{n+2}^{\pm} Z_{n+1}^{\mp} + \frac{c - a}{\lambda}\, Z_n^{\pm} Z_{n+1}^{\pm} Z_{n-1}^{\mp}. \qquad (47)$$
Using the same assumptions as before, namely: i) the forcing terms act only on the largest scales, ii) the system can reach a statistically stationary state, and iii) in the limit of fully developed turbulence, ν → 0, the mean pseudo-energy dissipation rates tend to finite positive limits 𝜖±, it can be found that
$$\langle T_n^{\pm} \rangle = -\epsilon^{\pm} k_n^{-1}. \qquad (48)$$
This is an exact relation which is valid in the inertial range of turbulence. Even in this case it can be used as an operative definition of the inertial range in the shell model, that is, the inertial range of the energy cascade in the shell model is defined as the range of scales k_n where the law of Equation (48) is verified.
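As an illustration, Equation (48) can be checked numerically on the output of a shell-model run such as the sketch given after Equation (24). The fragment below is only indicative: it recomputes T_n^± of Equation (47) from instantaneous shell variables; in practice one would time-average its imaginary part (the part entering the pseudo-energy budget) over a statistically stationary interval and verify the k_n^{-1} behavior.

```python
import numpy as np

# These definitions mirror the earlier shell-model sketch (illustrative values)
N, lam, a, c = 19, 2.0, 0.5, 1.0 / 3.0
k = 2.0 ** np.arange(1, N + 1)

def sh(Z, m):
    return Z[m] if 0 <= m < N else 0.0   # zero outside the resolved shells

def flux_T(Zs, Zo):
    """Pseudo-energy transfer term T_n^± of Eq. (47); Zs = Z^±, Zo = Z^∓."""
    T = np.zeros(N, dtype=complex)
    for n in range(N):
        T[n] = ((a + c)             * Zs[n] * sh(Zs, n + 1) * sh(Zo, n + 2)
                + (2 - a - c) / lam * sh(Zs, n - 1) * sh(Zs, n + 1) * Zo[n]
                + (2 - a - c)       * Zs[n] * sh(Zs, n + 2) * sh(Zo, n + 1)
                + (c - a) / lam     * Zs[n] * sh(Zs, n + 1) * sh(Zo, n - 1))
    return T

# Time-averaging Im(T_n^±) over a steady run and plotting it against 1/k_n
# is then an operative way to delimit the inertial range of the model.
```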
In 1916, Albert Einstein put the finishing touches on his Theory of General Relativity, a journey that began in 1905 with his attempts to reconcile Newton’s own theories of gravitation with the laws of electromagnetism. Once complete, Einstein’s theory provided a unified description of gravity as a geometric property of the cosmos, where massive objects alter the curvature of spacetime, affecting everything around them.
What’s more, Einstein’s field equations predicted the existence of black holes, objects so massive that even light cannot escape their surfaces. GR also predicts that black holes will bend light in their vicinity, an effect that can be used by astronomers to observe more distant objects. Relying on this technique, an international team of scientists achieved an unprecedented feat by observing light from an X-ray flare that took place behind a black hole.
Continue reading “A Black Hole Emitted a Flare Away From us, but its Intense Gravity Redirected the Blast Back in our Direction”
Pulsars Confirm One of Einstein’s Best Ideas, That Freefall Really Feels Like You’re Experiencing a Lack of Gravity
Six and a half decades after he passed away, famed theoretical physicist Albert Einstein is still being proven right! In addition to General Relativity (GR) being tested under the most extreme conditions, lesser-known aspects of his theories are still being validated as well. For example, GR predicts that gravity and inertia are often indistinguishable, in what is known as the gravitational Strong Equivalence Principle (SEP).
Thanks to an international team of researchers, it has been proven under the strongest conditions to date. By precisely tracking the motion of a pulsar, the team demonstrated that gravity causes neutron stars and white dwarf stars to fall with equal accelerations. This confirms Einstein’s prediction that freefall accurately simulates zero-gravity conditions in all inertial reference frames.
Continue reading “Pulsars Confirm One of Einstein’s Best Ideas, That Freefall Really Feels Like You’re Experiencing a Lack of Gravity”
First Ever Image of Quantum Entanglement
During the 1930s, venerable theoretical physicist Albert Einstein returned to the field of quantum mechanics, which his theories of relativity helped to create. Hoping to develop a more complete theory of how particles behave, Einstein was instead horrified by the prospect of quantum entanglement – something he described as “spooky action at a distance”.
Despite Einstein’s misgivings, quantum entanglement has gone on to become an accepted part of quantum mechanics. And now, for the first time ever, a team of physicists from the University of Glasgow took an image of a form of quantum entanglement (aka. Bell entanglement) at work. In so doing, they managed to capture the first piece of visual evidence of a phenomenon that baffled even Einstein himself.
Continue reading “First Ever Image of Quantum Entanglement”
You Could Travel Through a Wormhole, but it’s Slower Than Going Through Space
Artist illustration of a spacecraft passing through a wormhole to a distant galaxy. Image credit: NASA.
Special Relativity. It’s been the bane of space explorers, futurists and science fiction authors since Albert Einstein first proposed it in 1905. For those of us who dream of humans one-day becoming an interstellar species, this scientific fact is like a wet blanket. Luckily, there are a few theoretical concepts that have been proposed that indicate that Faster-Than-Light (FTL) travel might still be possible someday.
A popular example is the idea of a wormhole: a speculative structure that links two distant points in space time that would enable interstellar space travel. Recently, a team of Ivy League scientists conducted a study that indicated how “traversable wormholes” could actually be a reality. The bad news is that their results indicate that these wormholes aren’t exactly shortcuts, and could be the cosmic equivalent of “taking the long way”!
Continue reading “You Could Travel Through a Wormhole, but it’s Slower Than Going Through Space”
Who was Max Planck?
Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would even be attached to a series of units, designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.
If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!
Early Life and Education:
Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians Gymnasium school. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.
Illustration of Friedrich Wilhelms University, with the statue of Frederick the Great (ca. 1850). Credit: Wikipedia Commons/A. Carse
He graduated early, at the age of 17, and went on to study theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, and the two became close friends; eventually Planck decided to adopt thermodynamics as his field of research.
In October 1878, he passed his qualifying exams and defended his dissertation in February of 1879 – titled “On the second law of thermodynamics”. In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to be derived: “It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.”
For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – i.e. black body radiation and special states of equilibrium.
With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community did not pay much attention to him, he continued his work on heat theory and came to independently discover the same theory of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist who is credited with the discovery.
Professors Michael Bonitz and Frank Hohmann, holding a facsimile of Planck’s Nobel prize certificate, which was given to the University of Kiel in 2013. Credit and Copyright: CAU/Schimmelpfennig
In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, where he continued his studies in physical chemistry and heat systems. By 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until he retired in January 1926, when he was succeeded by Erwin Schrodinger.
Black Body Radiation:
It was in 1894, when he was under a commission from the electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body's temperature and the frequency of the radiation (i.e., the color of the light).
In time, he resolved this problem by suggesting that electromagnetic energy did not flow in a continuous stream but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.
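To get a feel for the sizes involved in E = hν, here is a small back-of-the-envelope calculation in Python; the choice of green light at 550 nm is just an example value.

```python
h = 6.626e-34          # Planck's constant, in joule-seconds
c = 2.998e8            # speed of light, in metres per second
wavelength = 550e-9    # green light, ~550 nanometres (example value)

nu = c / wavelength    # frequency of the light, in hertz
E = h * nu             # energy of a single quantum (photon), in joules

print(f"frequency ~ {nu:.2e} Hz, photon energy ~ {E:.2e} J")
# roughly 5.5e14 Hz and 3.6e-19 J - a vanishingly small packet of energy,
# which is why light looks perfectly smooth and continuous in everyday life
```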
A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he only came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck's constant would prove to have a revolutionary impact, causing scientists to break with classical physics, and leading to the creation of Planck units (length, time, mass, etc.).
From left to right: W. Nernst, A. Einstein, M. Planck, R.A. Millikan and von Laue at a dinner given by von Laue in Berlin, 1931. Credit: Wikipedia Commons
Quantum Mechanics:
By the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that would prove Planck’s quantum theory to be correct. The first was his theory of light quanta (photons), proposed in his 1905 paper on the photoelectric effect, which contradicted classical physics and the theory of electrodynamics that held that light was a wave that needed a medium to propagate.
The second was Einstein’s study of the anomalous behavior of the specific heats of solids at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein’s special relativity, he initially rejected the idea that light could be made up of discrete quanta (in this case, photons).
However, in 1911, Planck and Walther Nernst (a colleague of Planck’s) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and during the course of the proceedings was able to convince Planck of his theories regarding specific heats. The two became friends and colleagues, and in 1914, Planck created a professorship for Einstein at the University of Berlin.
During the 1920s, a new interpretation of quantum mechanics emerged, which came to be known as the “Copenhagen interpretation“. This theory, largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, stated that quantum mechanics can only predict probabilities; and that, in general, physical systems do not have definite properties prior to being measured.
Photograph of the first Solvay Conference in 1911 at the Hotel Metropole in Brussels, Belgium. Credit: International Solvay Institutes/Benjamin Couprie
This was rejected by Planck, however, who felt that wave mechanics would soon render quantum theory unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the “chaos” of quantum theory. However, time would prove that both interpretations were correct (and mathematically equivalent), giving rise to theories of particle-wave duality.
World War I and World War II:
In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three“, a manifesto which endorsed the war and justified Germany’s participation. However, by 1915, Planck revoked parts of the Manifesto, and by 1916, he became an outspoken opponent of Germany’s annexation of other territories.
After the war, Planck was considered to be the German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.
The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.
Entrance of the administrative headquarters of the Max Planck Society in Munich. Credit: Wikipedia Commons/Maximilian Dörrbecker
Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when threatened. In 1936, he resigned his position as head of the KWS due to his continued support of Jewish colleagues in the Society. In 1938, he resigned as president of the Prussian Academy of Sciences due to the Nazi Party assuming control of it.
Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. In 1944, Planck’s son Erwin was arrested for his involvement in the attempted assassination of Hitler in the July 20th plot, and he was executed by the Gestapo early in 1945. This event caused Planck to descend into a depression from which he did not recover before his death.
Death and Legacy:
Planck died on October 4th, 1947 in Gottingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.
In recognition of his fundamental contribution to a new branch of physics, he was awarded the Nobel Prize in Physics in 1918. He was also elected to Foreign Membership of the Royal Society in 1926, and was awarded the Society’s Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.
The Max Planck Medal, issued by the German Physical Society in recognition of scientific contributions. Credit: dpg-physik.de
He was also greatly respected by his colleagues and contemporaries, and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.
The Max Planck Society was also created in the city of Gottingen in 1948 to honor his life and his achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as being a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.
In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.
Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.
We have written many articles about Max Planck for Universe Today. Here’s What is Planck Time?, Planck’s First Light?, All-Sky Stunner from Planck, What is Schrodinger’s Cat?, What is the Double Slit Experiment?, and here’s a list of stories about the spacecraft that bears his name.
If you’d like more info on Max Planck, check out Max Planck’s biography from Science World and Space and Motion.
We’ve also recorded an entire episode of Astronomy Cast all about Max Planck. Listen here, Episode 218: Max Planck.
What is the Speed of Light?
Since ancient times, philosophers and scholars have sought to understand light. In addition to trying to discern its basic properties (i.e. what is it made of – particle or wave, etc.) they have also sought to make finite measurements of how fast it travels. Since the late-17th century, scientists have been doing just that, and with increasing accuracy.
In so doing, they have gained a better understanding of light’s mechanics and the important role it plays in physics, astronomy and cosmology. Put simply, light moves at incredible speeds and is the fastest moving thing in the Universe. Its speed is considered a constant and an unbreakable barrier, and is used as a means of measuring distance. But just how fast does it travel?
Speed of Light (c):
Light travels at a constant speed of 1,079,252,848.8 (1.07 billion) km per hour. That works out to 299,792,458 m/s, or about 670,616,629 mph (miles per hour). To put that in perspective, if you could travel at the speed of light, you would be able to circumnavigate the globe approximately seven and a half times in one second. Meanwhile, a person flying at an average speed of about 800 km/h (500 mph), would take over 50 hours to circle the planet just once.
Illustration showing the distance light travels between the Earth and the Sun. Credit: LucasVB/Public Domain
To put that into an astronomical perspective, the average distance from the Earth to the Moon is 384,398.25 km (238,854 miles ). So light crosses that distance in about a second. Meanwhile, the average distance from the Sun to the Earth is ~149,597,886 km (92,955,817 miles), which means that light only takes about 8 minutes to make that journey.
Little wonder then why the speed of light is the metric used to determine astronomical distances. When we say a star like Proxima Centauri is 4.25 light years away, we are saying that it would take – traveling at a constant speed of 1.07 billion km per hour (670,616,629 mph) – about 4 years and 3 months to get there. But just how did we arrive at this highly specific measurement for “light-speed”?
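The figures quoted above are easy to reproduce. The following snippet is just a back-of-the-envelope check of the light travel times mentioned in this section, using the average distances given above.

```python
c_km_s = 299_792.458              # speed of light, in km per second
moon_km = 384_398.25              # average Earth-Moon distance, km
sun_km = 149_597_886.0            # average Earth-Sun distance, km

print(moon_km / c_km_s)           # ~1.28 seconds for moonlight to reach us
print(sun_km / c_km_s / 60.0)     # ~8.3 minutes for sunlight to reach us

# A light year is simply the distance light covers in one (Julian) year:
light_year_km = c_km_s * 86_400 * 365.25
print(4.25 * light_year_km)       # distance to Proxima Centauri, ~4.0e13 km
```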
History of Study:
Until the 17th century, scholars were unsure whether light traveled at a finite speed or instantaneously. From the days of the ancient Greeks to medieval Islamic scholars and scientists of the early modern period, the debate went back and forth. It was not until the work of Danish astronomer Ole Rømer (1644-1710) that the first quantitative measurement was made.
In 1676, Rømer observed that the periods of Jupiter’s innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when it was receding from it. From this, he concluded that light travels at a finite speed, and estimated that it takes about 22 minutes to cross the diameter of Earth’s orbit.
Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the Carnegie Institute of Technology on Dec. 28th, 1934, where he expounded on his theory of how matter and energy are the same thing in different forms. Credit: AP Photo
Christiaan Huygens combined this figure with an estimate of the diameter of the Earth’s orbit to obtain a value of about 220,000 km/s. Isaac Newton also spoke about Rømer’s calculations in his seminal work Opticks (1704). Adjusting for the distance between the Earth and the Sun, he calculated that it would take light seven or eight minutes to travel from one to the other. In both cases, they were off by a relatively small margin.
Later measurements made by French physicists Hippolyte Fizeau (1819 – 1896) and Léon Foucault (1819 – 1868) refined these measurements further – resulting in a value of 315,000 km/s (192,625 mi/s). And by the latter half of the 19th century, scientists became aware of the connection between light and electromagnetism.
This was accomplished by physicists measuring electromagnetic and electrostatic charges, who then found that the numerical value was very close to the speed of light (as measured by Fizeau). Based on his own work, which showed that electromagnetic waves propagate in empty space, German physicist Wilhelm Eduard Weber proposed that light was an electromagnetic wave.
The next great breakthrough came during the early 20th century. In his 1905 paper, titled “On the Electrodynamics of Moving Bodies”, Albert Einstein asserted that the speed of light in a vacuum, measured by a non-accelerating observer, is the same in all inertial reference frames and independent of the motion of the source or observer.
A laser shining through a glass of water demonstrates how many changes in speed (in mph) it undergoes as it passes from air, to glass, to water, and back again. Credit: Bob King
Using this and Galileo’s principle of relativity as a basis, Einstein derived the Theory of Special Relativity, in which the speed of light in vacuum (c) was a fundamental constant. Prior to this, the working consensus among scientists held that space was filled with a “luminiferous aether” that was responsible for its propagation – i.e. that light traveling through a moving medium would be dragged along by the medium.
This in turn meant that the measured speed of the light would be a simple sum of its speed through the medium plus the speed of that medium. However, Einstein’s theory effectively made the concept of the stationary aether useless and revolutionized the concepts of space and time.
Not only did it advance the idea that the speed of light is the same in all inertial reference frames, it also introduced the idea that major changes occur when things move close to the speed of light. These include the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer (i.e. time dilation, where time slows as an object approaches the speed of light).
His observations also reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations by doing away with extraneous explanations used by other scientists, and accorded with the directly observed speed of light.
During the second half of the 20th century, increasingly accurate measurements using laser interferometers and cavity resonance techniques would further refine estimates of the speed of light. By 1972, a group at the US National Bureau of Standards in Boulder, Colorado, used the laser interferometer technique to get the currently-recognized value of 299,792,458 m/s.
Role in Modern Astrophysics:
Einstein’s theory that the speed of light in vacuum is independent of the motion of the source and the inertial reference frame of the observer has since been consistently confirmed by many experiments. It also sets an upper limit on the speeds at which all massless particles and waves (which includes light) can travel in a vacuum.
One of the outgrowths of this is that cosmologists now treat space and time as a single, unified structure known as spacetime – in which the speed of light can be used to define values for both (i.e. “lightyears”, “light minutes”, and “light seconds”). The measurement of the speed of light has also become a major factor when determining the rate of cosmic expansion.
Beginning in the 1920s, with the observations of Lemaitre and Hubble, scientists and astronomers became aware that the Universe is expanding from a point of origin. Hubble also observed that the farther away a galaxy is, the faster it appears to be moving. In what is now referred to as the Hubble Parameter, the speed at which the Universe is expanding is calculated to be about 68 km/s per megaparsec.
This phenomenon, which has been theorized to mean that some galaxies could actually be moving faster than the speed of light, may place a limit on what is observable in our Universe. Essentially, galaxies traveling faster than the speed of light would cross a “cosmological event horizon”, where they are no longer visible to us.
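A rough way to see where that horizon-like distance lies is to ask at what distance the recession speed implied by the Hubble Parameter quoted above reaches the speed of light. This is only an order-of-magnitude illustration (the true cosmological event horizon depends on the detailed expansion history), but the arithmetic is simple:

```python
c_km_s = 299_792.458    # speed of light, km/s
H0 = 68.0               # Hubble Parameter quoted above, km/s per megaparsec

# Recession speed grows linearly with distance, v = H0 * d,
# so v = c is reached at roughly:
d_mpc = c_km_s / H0                 # ~4400 megaparsecs
d_bly = d_mpc * 3.26 / 1000.0       # 1 Mpc is about 3.26 million light years
print(f"~{d_mpc:.0f} Mpc, i.e. roughly {d_bly:.1f} billion light years")
```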
Also, by the 1990s, redshift measurements of distant galaxies showed that the expansion of the Universe has been accelerating for the past few billion years. This has led to theories like “Dark Energy“, in which an unseen force is driving the expansion of space itself, rather than objects moving through it (thus not placing constraints on the speed of light or violating relativity).
Along with special and general relativity, the modern value of the speed of light in a vacuum has gone on to inform cosmology, quantum physics, and the Standard Model of particle physics. It remains a constant when talking about the upper limit at which massless particles can travel, and remains an unachievable barrier for particles that have mass.
Perhaps, someday, we will find a way to exceed the speed of light. While we have no practical ideas for how this might happen, the smart money seems to be on technologies that will allow us to circumvent the laws of spacetime, either by creating warp bubbles (aka. the Alcubierre Warp Drive), or tunneling through it (aka. wormholes).
Until that time, we will just have to be satisfied with the Universe we can see, and to stick to exploring the part of it that is reachable using conventional methods.
We have written many articles about the speed of light for Universe Today. Here’s How Fast is the Speed of Light?, How are Galaxies Moving Away Faster than Light?, How Can Space Travel Faster than the Speed of Light?, and Breaking the Speed of Light.
Here’s a cool calculator that lets you convert many different units for the speed of light, and here’s a relativity calculator, in case you wanted to travel nearly the speed of light.
Astronomy Cast also has an episode that addresses questions about the speed of light – Questions Show: Relativity, Relativity, and more Relativity.
A Star Is About To Go 2.5% The Speed Of Light Past A Black Hole
Since it was first discovered in 1974, astronomers have been dying to get a better look at the Supermassive Black Hole (SBH) at the center of our galaxy, known as Sagittarius A*. So far, scientists have only been able to gauge the position and mass of this SBH by measuring the effect it has on the stars that orbit it. But more detailed observations have eluded them, thanks in part to all the gas and dust that obscures it.
Luckily, the European Southern Observatory (ESO) recently began work with the GRAVITY interferometer, the latest component in their Very Large Telescope (VLT). Using this instrument, which combines near-infrared imaging, adaptive-optics, and vastly improved resolution and accuracy, they have managed to capture images of the stars orbiting Sagittarius A*. And what they have observed was quite fascinating.
One of the primary purposes of GRAVITY is to study the gravitational field around Sagittarius A* in order to make precise measurements of the stars that orbit it. In so doing, the GRAVITY team – which consists of astronomers from the ESO, the Max Planck Institute, and multiple European research institutes – will be able to test Einstein’s theory of General Relativity like never before.
Spitzer image of the core of the Milky Way Galaxy. Credit: NASA/JPL-Caltech/S. Stolovy (SSC/Caltech)
In what was the first observation conducted using the new instrument, the GRAVITY team used its powerful interferometric imaging capabilities to study S2, a faint star which orbits Sagittarius A* with a period of only 16 years. This test demonstrated the effectiveness of the GRAVITY instrument – which is 15 times more sensitive than the individual 8.2-metre Unit Telescopes the VLT currently relies on.
This was an historic accomplishment, as a clear view of the center of our galaxy is something that has eluded astronomers in the past. As GRAVITY’s lead scientist, Frank Eisenhauer – from the Max Planck Institute for Extraterrestrial Physics in Garching, Germany – explained to Universe Today via email:
“First, the Galactic Center is hidden behind a huge amount of interstellar dust, and it is practically invisible at optical wavelengths. The stars are only observable in the infrared, so we first had to develop the necessary technology and instruments for that. Second, there are so many stars concentrated in the Galactic Center that a normal telescope is not sharp enough to resolve them. It was only in the late 1990s and in the beginning of this century when we learned to sharpen the images with the help of speckle interferometry and adaptive optics to see the stars and observe their dance around the central black hole.”
But more than that, the observation of S2 was very well timed. In 2018, the star will be at the closest point in its orbit to the Sagittarius A* – just 17 light-hours from it. As you can see from the video below, it is at this point that S2 will be moving much faster than at any other point in its orbit (the orbit of S2 is highlighted in red and the position of the central black hole is marked with a red cross).
When it makes its closest approach, S2 will accelerate to speeds of almost 30 million km per hour, which is 2.5% the speed of light. Another opportunity to view this star reach such high speeds will not come again for another 16 years – in 2034. And having shown just how sensitive the instrument is already, the GRAVITY team expects to be able make very precise measurements of the star’s position.
In fact, they anticipate that the level of accuracy will be comparable to that of measuring the positions of objects on the surface of the Moon, right down to the centimeter-scale. As such, they will be able to determine whether the motion of the star as it orbits the black hole are consistent with Einstein’s theories of general relativity.
“[I]t is not the speed itself to cause the general relativistic effects,” explained Eisenhauer, “but the strong gravitation around the black hole. But the very high orbital speed is a direct consequence and measure of the gravitation, so we refer to it in the press release because the comparison with the speed of light and the ISS illustrates so nicely the extreme conditions.”
Artist’s impression of the influence gravity has on space-time. Credit: space.com
As recent simulations of the expansion of galaxies in the Universe have shown, Einstein’s theories are still holding up after many decades. However, these tests will offer hard evidence, obtained through direct observation. A star traveling at a portion of the speed of light around a supermassive black hole at the center of our galaxy will certainly prove to be a fitting test.
And Eisenhauer and his colleagues expect to see some very interesting things. “We hope to see a ‘kick’ in the orbit,” he said. “The general relativistic effects increase very strongly when you approach the black hole, and when the star swings by, these effects will slightly change the direction of the
While those of us here at Earth will not be able to “star gaze” on this occasion and see S2 whipping past Sagittarius A*, we will still be privy to all the results. And then, we just might see if Einstein really was correct when he proposed what is still the predominant theory of gravitation in physics, over a century later.
Further Reading: eso.org
How Does Light Travel?
Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists ventured back and forth between the notion that light was a particle or a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.
These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can be apparently without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?
Theory of Light to the 19th Century:
During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.
In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.
The first edition of Newton’s Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light (1704). Credit: Public Domain.
Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, the principles of which were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light“. According to Newton, the principles of light could be summed as follows:
• Every source of light emits large numbers of tiny particles known as corpuscles in a medium surrounding the source.
• These corpuscles are perfectly elastic, rigid, and weightless.
This represented a challenge to “wave theory”, which had been advocated by the 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his Traité de la lumière (“Treatise on Light”). In it, he argued a revised version of Descartes' views, in which the speed of light is finite and light is propagated by means of spherical waves emitted along the wave front.
Double-Slit Experiment:
By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.
The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.
According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.
The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead to scientists considering yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
Electromagnetism and Special Relativity:
Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used observations of Jupiter's moon Io to demonstrate that light travels at a finite speed (rather than instantaneously).
Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the meeting of the American Association for the Advancement of Science on Dec. 28th, 1934. Credit: AP Photo
By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. By conducting measurements of different types of radiation (magnetic fields, ultraviolet and infrared radiation), he was able to calculate the speed of light in a vacuum (represented as c).
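For reference, these relationships can be written out explicitly; the compact modern vacuum form below is added here for illustration and is not quoted from the article:

∇·E = 0,  ∇·B = 0,  ∇×E = −∂B/∂t,  ∇×B = μ₀ε₀ ∂E/∂t

c = 1/\sqrt{μ₀ ε₀} ≈ 2.998 × 10^8 m/s

In other words, the two constants describing electric and magnetic interactions fix the speed at which electromagnetic waves, including light, must travel.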
In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.
Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.
For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
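The size of these relativistic effects is governed by the Lorentz factor; the standard expressions below are added here as an illustration and are not quoted from the article:

γ = 1/\sqrt{1 − v²/c²},  Δt_moving = γ · Δt_rest,  L_moving = L_rest / γ

so that as the speed v approaches c, measured time intervals stretch and measured lengths shrink.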
Einstein and the Photon:
In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein based his idea on Planck's earlier work with “black bodies” – materials that absorb electromagnetic energy instead of reflecting it (in contrast to white bodies, which reflect it).
At the time, Einstein’s photoelectric effect was attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had only happened eight years previous (thanks to British physicists led by J.J. Thompson and experiments using cathode ray tubes).
At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light which he named “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
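In modern notation (added here for clarity, not quoted from the article), the energy of one of Einstein's light quanta and the resulting photoelectron energy are written as:

E = hν  (photon energy, with h Planck's constant and ν the frequency of the light)

K_max = hν − φ  (maximum kinetic energy of an ejected electron, with φ the work function of the material)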
Wave-Particle Duality:
Subsequent theories on the behavior of light would further refine this idea, which included French physicist Louis-Victor de Broglie calculating the wavelength at which light functioned. This was followed by Heisenberg's “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum, and vice versa), and Schrödinger's proposal that all particles have a “wave function”.
In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. At some location, the measurement of the wave function will randomly “collapse”, or rather “decohere”, to a sharply peaked function. This was illustrated in Schrödinger's famous paradox involving a closed box, a cat, and a vial of poison (known as the “Schrödinger's Cat” paradox).
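In standard notation (an illustration added here, not drawn from the article), the wave function ψ(x, t) is tied to what is actually observed through the Born rule:

P(x, t) = |ψ(x, t)|²  (probability density for finding the particle at position x at time t)

∫ |ψ(x, t)|² dx = 1  (the particle must be found somewhere, so the total probability is one)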
In this illustration, one photon (purple) carries a million times the energy of another (yellow). Some theorists predict travel delays for higher-energy photons, which interact more strongly with the proposed frothy nature of space-time. Yet Fermi data on two photons from a gamma-ray burst fail to show this effect. The animation below shows the delay scientists had expected to observe. Credit: NASA/Sonoma State University/Aurore Simonnet
Artist’s impression of two photons travelling at different wavelengths, resulting in different- colored light. Credit: NASA/Sonoma State University/Aurore Simonnet
According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for particles with no mass, no solution existed. Further experiments involving the Double-Slit Experiment confirmed the dual nature of photons, where measuring devices were incorporated to observe the photons as they passed through the slits.
When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.
Quantum Field Theory (QFT) was developed in the following decades to resolve much of the ambiguity around wave-particle duality. And in time, this theory was shown to apply to other particles and fundamental forces of interaction (such as the weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that are force carriers and have no mass.
So how does light travel? Basically, traveling at incredible speeds (299 792 458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through mediums (like air and water) as well as space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).
What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrödinger, we have learned much, but still have much to learn.
For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!
We have written many articles about light here at Universe Today. For example, here’s How Fast is the Speed of Light?, How Far is a Light Year?, What is Einstein’s Theory of Relativity?
If you’d like more info on light, check out these articles from The Physics Hypertextbook and NASA’s Mission Science page.
We’ve also recorded an entire episode of Astronomy Cast all about Interstellar Travel. Listen here, Episode 145: Interstellar Travel.
Japanese 3D Galaxy Map Confirms Einstein Was One Smart Dude
On June 30th, 1905, Albert Einstein started a revolution with the publication of his theory of Special Relativity. This theory, among other things, stated that the speed of light in a vacuum is the same for all observers, regardless of the source. In 1915, he followed this up with the publication of his theory of General Relativity, which asserted that gravity has a warping effect on space-time. For over a century, these theories have been an essential tool in astrophysics, explaining the behavior of the Universe on the large scale.
However, since the 1990s, astronomers have been aware of the fact that the Universe is expanding at an accelerated rate. In an effort to explain the mechanics behind this, suggestions have ranged from the possible existence of an invisible energy (i.e. Dark Energy) to the possibility that Einstein’s field equations of General Relativity could be breaking down. But thanks to the recent work of an international research team, it is now known that Einstein had it right all along.
Continue reading “Japanese 3D Galaxy Map Confirms Einstein Was One Smart Dude”
Who Discovered Gravity?
Four fundamental forces govern all interactions within the Universe. They are the weak nuclear force, the strong nuclear force, electromagnetism, and gravity. Of these, gravity is perhaps the most mysterious. While it has been understood for some time how this law of physics operates on the macro-scale – governing our Solar System, galaxies, and superclusters – how it interacts with the three other fundamental forces remains a mystery.
Naturally, human beings have had a basic understanding of this force since time immemorial. And when it comes to our modern understanding of gravity, credit is owed to one man who deciphered its properties and how it governs all things great and small – Sir Isaac Newton. Thanks to this 17th century English physicist and mathematician, our understanding of the Universe and the laws that govern it would forever be changed.
While we are all familiar with the iconic image of a man sitting beneath an apple tree and having one fall on his head, Newton’s theories on gravity also represented a culmination of years worth of research, which in turn was based on centuries of accumulated knowledge. He would present these theories in his magnum opus, Philosophiae Naturalis Principia Mathematica (“Mathematical Principles of Natural Philosophy”), which was first published in 1687.
In this volume, Newton laid out what would come to be known as his Three Laws of Motion, which were derived from Johannes Kepler’s Laws of Planetary Motion and his own mathematical description of gravity. These laws would lay the foundation of classical mechanics, and would remain unchallenged for centuries – until the 20th century and the emergence of Einstein’s Theory of Relativity.
Newton's own copy of his Principia, with hand-written corrections for the second edition. Credit: Trinity Cambridge/Andrew Dunn
Physics by the 17th Century:
The 17th century was a very auspicious time for the sciences, with major breakthroughs occurring in the fields of mathematics, physics, astronomy, biology and chemistry. Some of the greatest developments in the period include the development of the heliocentric model of the Solar System by Nicolaus Copernicus, the pioneering work with telescopes and observational astronomy by Galileo Galilei, and the development of modern optics.
It was also during this period that Johannes Kepler developed his Laws of Planetary Motion. Formulated between 1609 and 1619, these laws described the motion of the then-known planets (Mercury, Venus, Earth, Mars, Jupiter, and Saturn) around the Sun. They stated that:
• Planets move around the Sun in ellipses, with the Sun at one focus
• The line connecting the Sun to a planet sweeps equal areas in equal times.
• The square of the orbital period of a planet is proportional to the cube (3rd power) of its mean distance from the Sun (in other words, of the “semi-major axis” of the ellipse, half the sum of the smallest and greatest distances from the Sun); a quick numerical check of this law is sketched just after this list.
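As an illustration (added here, not part of the original article), here is a minimal Python check of the third law using approximate semi-major axes in astronomical units and orbital periods in years; in these units the proportionality constant is close to 1:

# Kepler's third law: P^2 is proportional to a^3 (P in years, a in AU -> ratio near 1)
planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a_au, period_yr) in planets.items():
    ratio = period_yr**2 / a_au**3   # should be close to 1 for every planet
    print(f"{name}: P^2 / a^3 = {ratio:.3f}")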
These laws resolved the remaining mathematical issues raised by Copernicus’ heliocentric model, thus removing all doubt that it was the correct model of the Universe. Working from these, Sir Isaac Newton began considering gravitation and its effect on the orbits of planets.
Newton’s Three Laws:
In 1678, Newton suffered a complete nervous breakdown due to overwork and a feud with fellow astronomer Robert Hooke. For the next few years, he withdrew from correspondence with other scientists, except where they initiated it, and renewed his interest in mechanics and astronomy. In the winter of 1680-81, the appearance of a comet, about which he corresponded with John Flamsteed (England’s Astronomer Royal) also renewed his interest in astronomy.
After reviewing Kepler’s Laws of Motion, Newton developed a mathematical proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton communicated these results to Edmond Halley (discoverer of “Haley’s Comet”) and to the Royal Society in his De motu corporum in gyrum.
This tract, published in 1684, contained the seed of what Newton would expand to form his magnum opus, the Philosophiae Naturalis Principia Mathematica. This treatise, which was published in July of 1687, contained Newton’s three laws of motion, which stated that:
• An object at rest remains at rest, and an object in motion remains in motion at a constant velocity, unless acted upon by an external force.
• The vector sum of the external forces (F) on an object is equal to the mass (m) of that object multiplied by the acceleration vector (a) of the object. In mathematical form, this is expressed as: F=ma
• For every action, there is an equal and opposite reaction.
Together, these laws described the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. The laws also allowed Newton to calculate the mass of each planet, the flattening of the Earth at the poles, and the bulge at the equator, and how the gravitational pull of the Sun and Moon create the Earth’s tides.
In the same work, Newton presented a calculus-like method of geometrical analysis using ‘first and last ratios’, worked out the speed of sound in air (based on Boyle's Law), accounted for the precession of the equinoxes (which he showed was a result of the Moon's gravitational attraction on the Earth), initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more.
Newton and the “Apple Incident”:
The story of Newton coming up with his theory of universal gravitation as a result of an apple falling on his head has become a staple of popular culture. And while it has often been argued that the story is apocryphal and Newton did not devise his theory at any one moment, Newton himself told the story many times and claimed that the incident had inspired him.
In addition, the writing’s of William Stukeley – an English clergyman, antiquarian and fellow member of the Royal Society – have confirmed the story. But rather than the comical representation of the apple striking Newton on the head, Stukeley described in his Memoirs of Sir Isaac Newton’s Life (1752) a conversation in which Newton described pondering the nature of gravity while watching an apple fall.
“…we went into the garden, & drank thea under the shade of some appletrees; only he, & my self. amidst other discourse, he told me, he was just in the same situation, as when formerly, the notion of gravitation came into his mind. “why should that apple always descend perpendicularly to the ground,” thought he to himself; occasion’d by the fall of an apple…”
John Conduitt, Newton’s assistant at the Royal Mint (who eventually married his niece), also described hearing the story in his own account of Newton’s life. According to Conduitt, the incident took place in 1666 when Newton was traveling to meet his mother in Lincolnshire. While meandering in the garden, he contemplated how gravity’s influence extended far beyond Earth, responsible for the falling of apple as well as the Moon’s orbit.
Similarly, Voltaire wrote in his Essay on Epic Poetry (1727) that Newton had first thought of the system of gravitation while walking in his garden and watching an apple fall from a tree. This is consistent with Newton's notes from the 1660s, which show that he was grappling with the idea of how terrestrial gravity extends, in an inverse-square proportion, to the Moon.
However, it would take him two more decades to fully develop his theories to the point that he was able to offer mathematical proofs, as demonstrated in the Principia. Once that was complete, he deduced that the same force that makes an object fall to the ground was responsible for other orbital motions. Hence, he named it “universal gravitation”.
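As a simple illustration (added here, not from the original article), Newton's law of universal gravitation, F = G·m1·m2 / r², can be evaluated in Python for the Earth and Moon using approximate values:

G = 6.674e-11          # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24     # mass of the Earth, kg
m_moon = 7.348e22      # mass of the Moon, kg
r = 3.844e8            # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {force:.3e} N")   # roughly 2e20 N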
Various trees are claimed to be “the” apple tree which Newton describes. The King’s School, Grantham, claims their school purchased the original tree, uprooted it, and transported it to the headmaster’s garden some years later. However, the National Trust, which holds the Woolsthorpe Manor (where Newton grew up) in trust, claims that the tree still resides in their garden. A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there.
Newton’s work would have a profound effect on the sciences, with its principles remaining canon for the following 200 years. It also informed the concept of universal gravitation, which became the mainstay of modern astronomy, and would not be revised until the 20th century – with the discovery of quantum mechanics and Einstein’s theory of General Relativity.
We have written many interesting articles about gravity here at Universe Today. Here is Who was Sir Isaac Newton?, Who Was Galileo Galilei?, What Is the Force of Gravity?, and What is the Gravitational Constant?
Astronomy Cast has two good episodes on the subject. Here's Episode 37: Gravitational Lensing, and Episode 102: Gravity. |
7da554581baac975 | A note on optimal H^1-error estimates for Crank-Nicolson approximations to the nonlinear Schrödinger equation
by Patrick Henning, et al.
In this paper we consider a mass- and energy-conserving Crank-Nicolson time discretization for a general class of nonlinear Schrödinger equations. This scheme, which enjoys popularity in the physics community due to its conservation properties, has already been the subject of several analytical and numerical studies. However, a proof of optimal L^∞(H^1)-error estimates is still open, both in the semi-discrete Hilbert space setting and in fully-discrete finite element settings. This paper aims at closing this gap in the literature.
1 Introduction
In this paper we consider nonlinear Schrödinger equations (NLS) seeking a complex function such that
in a bounded domain , with a homogeneous Dirichlet boundary condition on and a given initial value. Here, is a known real-valued potential and is a smooth (and possibly nonlinear) function that depends on the unknown density . Of particular interest are cubic nonlinearities of the form , for some . In this case, the equation is called the Gross–Pitaevskii equation. It has applications in optics [1, 13], fluid dynamics [30, 31] and, most importantly, in quantum physics, where it models, for example, the dynamics of Bose-Einstein condensates in a magnetic trapping potential [12, 20, 23]. Another relevant class is that of saturated nonlinearities, such as for some , which appear in the context of nonlinear optical wave propagation in layered metallic structures [11, 17] or the propagation of light beams in plasmas [22].
Nonlinear Schrödinger equations come with important physical invariants, where the mass and the energy are considered as two of the most crucial ones. When solving a NLS numerically it is therefore of great importance to also reproduce this conservation on the discrete level. This aspect was emphasized by various numerical studies [16, 25].
For the subclass of power law nonlinearities of the form for and , a mass and energy conserving relaxation scheme was proposed and analyzed by Besse [7, 8]. Thanks to its properties, the scheme shows a very good performance in realistic physical setups [16]. Despite the large variety of different numerical approaches for solving the time-dependent NLS (cf. [2, 3, 5, 14, 18, 19, 21, 24, 27, 26, 28, 29, 32] and the references therein) the literature knows however only one time discretization that conserves both mass and energy simultaneously for arbitrary (smooth) nonlinearities. This discretization, which was first mathematically studied by Sanz-Serna [24] and which is long-known in the physics community, is a Crank-Nicolson-type approach where the nonlinearity is approximated by a suitable difference quotient involving the primitive integral of . This is also the time discretization that we shall consider in this paper.
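For orientation, a sketch of this type of discretization in generic notation is given here; the symbols below are chosen for illustration and need not match the paper's own notation. With time step τ, approximations u^n ≈ u(t_n), and F a primitive of the nonlinearity f, one step reads

i (u^{n+1} − u^n)/τ = −½ Δ(u^{n+1} + u^n) + ½ V (u^{n+1} + u^n) + \frac{F(|u^{n+1}|²) − F(|u^n|²)}{|u^{n+1}|² − |u^n|²} · \frac{u^{n+1} + u^n}{2},

i.e. the nonlinearity is replaced by a difference quotient of its primitive, which is what yields simultaneous conservation of the discrete mass and energy.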
A combination of the method with a finite difference space discretization was proposed and analyzed by Bao and Cai [4, 6]. Combining the Crank–Nicolson time discretization with a finite element discretization in space, the first a priori error estimates for the arising method were obtained by Sanz-Serna in 1984 [24] for cubic nonlinearities. He considers the case and derives optimal L^2-error estimates under the coupling constraint , where denotes the time step size of the Crank-Nicolson method and the mesh size of the finite element discretization in space. In 1991, Akrivis et al. [2] improved this result by showing optimal convergence rates in in dimension and under the relaxed coupling constraint . Finally, in 2017 [15], the L^2-error estimates could be improved yet again by showing that the coupling constraint can be fully removed. Furthermore, general nonlinearities could be considered and the influence of potentials could be taken into account. However, so far, optimal error estimates in the H^1-norm are still open in the literature.
One reason for this absence of H^1-results could be related to the techniques used for the error analysis in previous works (cf. [2, 14, 18, 19, 24, 32]), which are based on the following steps:
1. Appropriate truncation of the nonlinearity to obtain a problem with bounded growth.
2. Analyzing the scheme with truncation in the FE space and deriving corresponding L^2- and/or H^1-error estimates.
3. Using inverse estimates in the finite element space to show that the truncated approximations are uniformly bounded in by a term of the form , with appropriate powers that depend on the considered space discretization, regularity and space dimension.
4. Concluding that if and are coupled in an appropriate way, then the truncated approximations are all uniformly bounded by a constant and hence coincide with a solution to the scheme without truncation.
This strategy not only has the disadvantage that it produces unnecessary coupling conditions, but also that it becomes impractically technical when considering H^1-error estimates for the Crank–Nicolson FEM. This is because it requires a suitable truncation of the primitive integral of the nonlinearity that is on the one hand consistent with the energy conservation and on the other hand allows for uniform bounds of the approximations in . However, thanks to the new techniques developed in [29] and the CN error analysis suggested in [15] in the context of L^2-error estimates, the truncation step is no longer necessary and the desired bounds can be derived with elliptic regularity theory. With this, it is now possible to obtain estimates in a direct way, not only in the finite element setting, but also in the semi-discrete Hilbert space setting.
In this paper we will therefore build upon the results from [15, 29] to fill this gap in the literature and prove optimal H^1-error estimates for the Crank–Nicolson approach without coupling constraints and for a general class of nonlinearities. The paper is structured as follows. In Section 2 we present the notation and the analytical assumptions on the problem. In Section 3 we present the time-discrete Crank–Nicolson method and recall its well-posedness and optimal L^2-error estimates. Furthermore, we present and prove the new error estimate in H^1. The paper concludes with the fully-discrete setting presented in Section 4, where the time discretization is combined with a finite element discretization in space. We recall what is known about this discretization and finally prove corresponding H^1-error estimates, which is the main result of this paper.
2 Notation and Assumptions
We start by introducing the analytical setting of this work. Throughout the paper we assume that (for ) is a convex bounded domain with polyhedral boundary. On , the Sobolev space of complex-valued, weakly differentiable functions with a zero trace on and square-integrable partial derivatives is as usual denoted by . The potential is assumed to be real and nonnegative. Indirectly, we also assume that it is sufficiently smooth so that it is compatible with the regularity assumptions for listed below (see [15] for a discussion of this aspect). The (possibly nonlinear) function
is assumed to be smooth, fulfills and its growth can be characterized with
where is a function with the following growth properties
Note that in [15] the admissible growth condition in requires , which is however a typo and should be, as above, (cf. [10, Proposition 3.2.5 and Remark 3.2.7] for the original result). Examples for nonlinearities that fulfill these assumptions are mentioned in the introduction. The most common and physically relevant choices covered by our setting are power law nonlinearities for and in and in . Other physically relevant nonlinearities that fulfill the conditions are saturated nonlinearities appearing in the modeling of optical wave propagation such as for .
For the initial value we assume that and, without loss of generality, that it has a normalized mass, i.e. . With this, the considered nonlinear Schrödinger equation (NLS) reads as follows. For a maximum time and an initial value , we seek
such that and
in the sense of distributions. Problem (1) admits at least one solution, that is even unique for repulsive cubic nonlinearities in and (cf. [10] in general and [15, Remark 2.1] for precise references). We assume that the solution admits the following additional regularity, which is
where we note that any solution with such increased regularity must be unique (cf. [15, Lemma 3.1]). In the rest of the paper hence always refers to this uniquely characterized solution.
It is well known that solutions to the NLS (1) preserve the mass, i.e.
and the energy, i.e.
with .
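In the notation commonly used for this problem class (a sketch added here; the paper's own symbols are not reproduced above), these invariants read

M(u(t)) = ∫_Ω |u(x,t)|² dx   and   E(u(t)) = ½ ∫_Ω |∇u(x,t)|² dx + ½ ∫_Ω V(x) |u(x,t)|² dx + ½ ∫_Ω F(|u(x,t)|²) dx,

where F denotes a primitive of the nonlinearity f with F(0) = 0.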
For brevity, we shall denote the -norm of a function by . The -inner product is denoted by . Here, denotes the complex conjugate of .
Throughout the paper we will use the notation , to abbreviate , where is a constant that only depends on , , , , and , but not on the discretization.
3 Time-discrete Crank-Nicolson scheme
In this section we will state the semi-discrete Crank-Nicolson scheme, recall its well-posedness and available stability bounds, and then use these results to prove optimal H^1-error estimates in the Hilbert space setting. For that, let denote the final time of computation, the number of time steps, and the time step size. By we shall mean . The exact solution at time shall be denoted by . We also introduce a shorthand notation for discrete time derivatives, which is and analogously .
3.1 Method formulation and main result
With the notation above, the semi-discrete Crank–Nicolson approximation to is given recursively as the solution (in the sense of distributions) to the equation
where . The initial value is selected as . It is easily seen that the discretization conserves both mass and energy, i.e.
The scheme (3) is well-posed and admits a set of a priori error estimates. The properties are summarized in the following theorem that is proved in [15, Theorem 4.1].
Theorem 3.1.
Under the general assumptions of this paper, there exists a constant and a solution to the semi-discrete Crank-Nicolson scheme (3) that is uniquely characterized by the property that
and the a priori estimate for the L^2-error
where is the (unique) exact solution with the regularity property (2).
Our main result on optimal error estimates in the H^1-norm reads as follows.
Theorem 3.2 (Optimal H^1-error estimates for the semi-discrete method).
Consider the setting of Theorem 3.1. Then the H^1-error converges with optimal order in the time step size, i.e.
The theorem is proved in Section 3.2 below.
3.2 Proof of Theorem 3.2
In this section we will prove Theorem 3.2. Let us introduce some notation that is used throughout the proofs. We recall . Furthermore, we let and . For time derivatives at a fixed time , we also write .
We begin by establishing a differential equation for the time discrete error . This is stated in the following lemma.
Lemma 3.3.
The error fulfills the identity
and for some bounded functions with the properties that
for almost all .
It is easily verified that the exact solution fulfills
By the regularity assumptions we can apply Taylor expansion arguments to to see:
Subtracting (7) from (3) we find that satisfies:
where denotes the error coming from the nonlinear term, defined by
Recalling the definition of we have:
The expression for is thus simplified to
where is a function taking values between and and takes values between and . ∎
The differential equation in Lemma 3.3 is now used to derive a recurrence formula for the -norm of the error. Multiplying (5) by , integrating and taking the real part yields:
The idea is to bound the terms I and II in such a way that Grönwall’s inequality can be used. We proceed to bound term I. Multiplying the error PDE (5) by results in:
and consequently
In order to use Grönwall’s inequality we need to bound and in terms of , and terms of . These bounds are formulated in the two following lemmas.
Lemma 3.4.
Given the optimal L^2-convergence of Theorem 3.1 and the uniform bounds (4), the error coming from the nonlinear term behaves as , i.e. .
We recall that and that is defined by
Since is twice differentiable we have:
where . Multiplying (3) by and taking the imaginary part gives us together with Theorem 3.1 that
and hence . The same argument, with the small difference that is bounded in virtue of the assumptions, can be applied to find
Using this and the standard mean value theorem we find that
Since it now follows that
Lemma 3.5.
Given Theorem 3.1, the gradient of the error coming from the nonlinear term is bounded as
Starting from the equality
we recall that
where . Using formula (14) with and we find:
where the remainder is bounded as . It is straightforward to check that . Taking the gradient of (3), multiplying by and integrating ( is interpreted weakly, is therefore integrated by parts), gives us:
Recalling it becomes clear that
This means that . Thus,
Using this and the mean value theorem we find:
As the exact same steps are valid for , with the difference that is bounded in virtue of the assumptions, the term becomes:
Since is three times differentiable it follows that:
where the remainder is bounded as . Using the mean value theorem we find:
Thus we are able to conclude:
With Lemmas 3.4 and 3.5 we now have the following bound on term I.
Lemma 3.6.
For term I which is given by (3.2), we have the estimate
We can now proceed to bound term II. Here we explicate the Taylor term using (6) to see
We start with estimating IIa and IId, which can be bounded in a similar way.
Step 1, bounding IIa:
By replacing |
501f41b0ff12b8b5 | Quantum Mechanical Model
Quantum Mechanical Model
Bohr's concept of well-defined circular orbits was discarded after the wave character of the electron (de Broglie) and Heisenberg's uncertainty principle were established. Then came the concept of quantum mechanics, or wave mechanics, which describes the behaviour of electrons around the nucleus. Quantum mechanics is the theoretical science that deals with the study of the motion of microscopic objects that have both particle and wave nature.
Quantum mechanics is based on a fundamental equation which is called the Schrödinger equation.
Schrodinger’s equation: For a system (such as an atom or a molecule whose energy does not change with time) the Schrödinger equation is written
Schrodinger’s equation
Here H is a mathematical operator called the Hamiltonian.
Schrödinger gave a recipe for constructing this operator from the expression for the total energy of the system. The total energy of the system takes into account the kinetic energies of all the sub-atomic particles (electrons, nuclei), the attractive potential between the electrons and nuclei, and the repulsive potentials among the electrons and among the nuclei individually. Solution of this equation gives E and ψ.
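For a single particle of mass m moving in a potential V, the time-independent equation referred to above takes the standard textbook form (written out here for illustration):

Ĥψ = Eψ,  with  Ĥ = −(ħ²/2m)∇² + V(x, y, z),

so that solving the equation yields the allowed energies E and the corresponding wave functions ψ.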
Features of Quantum Mechanical Model
• The energy of electrons in an atom is quantized i.e. it can have only certain specific energy values.
• The existence of quantized electronic energy levels is a direct result of the wave properties of the electron.
• Both the exact position and the exact velocity of an electron cannot be determined simultaneously. The path of an electron in an atom, therefore, can never be determined or known accurately.
• IMPORTANT: An atomic orbital is the wave function ψ for an electron in an atom. There are many orbitals in an atom. Electrons occupy orbitals that have definite energies. An orbital cannot hold more than two electrons, and the orbitals are filled in increasing order of energy. All the information about an electron is stored in its orbital wave function.
• The probability of finding an electron at a point within an atom is proportional to ψ²; this is known as the probability density and it is always positive. From the values of ψ² at different points, it is possible to predict the region around the nucleus where the electron is most probably found.
Atomic orbitals can be specified by giving their corresponding energies and angular momenta, which are quantized and expressed in terms of quantum numbers.
• The concept of an orbit was discarded in favour of the orbital, i.e. the most probable region of space around the nucleus for the electron to be present.
|
443d78337ae3689a | Correlated Rotational Alignment Spectroscopy
Raman Lab Course Guidelines
In the 2021 lab course, you are expected to analyze molecular structures based on rotational Raman spectroscopic results. Here is the 2021 lab course guideline: PC_lab_course_CRASY_Schultz Here is a 2018 lab guide for setting up an experiment to measure vibrational Raman spectra: PC_lab_course_Raman_Schultz. You can find the ACS report templates on the ACS Webpage. Please remember to delete the first page of the template and to set the line spacing to double-spaced. Here is a guideline for Writing_in_English_(1-page_for_Koreans), explaining how to write a clear scientific text. Writing in English is a challenge for […]
Visualizing Rotational Wave Packets (superposition of spherical harmonics)
Attached below is a Python script to display spherical harmonics and spherical harmonic superpositions. Spherical harmonics are the angular wavefunctions (“eigenfunctions”) used to describe rotational states of molecules and the angular properties of electrons in atoms (i.e., “atomic orbitals”). A single eigenfunction of the Schrödinger equation is time-independent, but dynamics can be described by a superposition (sum) of eigenfunctions. E.g., the square of a sum of spherical harmonics predicts the angular orientation of a molecule as a function of time. To use the script, you need to have a functional version […]
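The course script itself is not reproduced here; the following is a minimal, independent sketch of the same idea (plotting the angular probability density of a superposition of two spherical harmonics with SciPy and Matplotlib), so the coefficients and plotting choices are illustrative only.

import numpy as np
from scipy.special import sph_harm
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older Matplotlib versions)

# Angular grid (theta = azimuth, phi = polar angle, following SciPy's convention)
theta = np.linspace(0, 2 * np.pi, 181)
phi = np.linspace(0, np.pi, 91)
theta, phi = np.meshgrid(theta, phi)

# Superposition of two rotational eigenfunctions (coefficients chosen arbitrarily)
psi = 0.8 * sph_harm(0, 1, theta, phi) + 0.6 * sph_harm(0, 2, theta, phi)
density = np.abs(psi) ** 2            # angular probability density

# Plot the density as a surface whose radius is proportional to |psi|^2
x = density * np.sin(phi) * np.cos(theta)
y = density * np.sin(phi) * np.sin(theta)
z = density * np.cos(phi)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x, y, z, cmap="viridis")
ax.set_title("Superposition of Y(1,0) and Y(2,0), squared (illustrative)")
plt.show()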
CRASY: Observing Rotation in the Time Domain
Below, I posted a short video explaining how we observe rotation in the time domain. Please note that the video does not explain the quantum mechanical principles required to describe molecular rotation, but gives a pseudo-classical picture. This picture does not explain why molecular rotation is quantized or how we can relate rotational frequencies to molecular structure.
CRASY Data Analysis
Here is a guide for CRASY data analysis, written for a UNIST lab course: PC_lab_course_CRASY_Schultz.pdf. You can find an installation guide for the data analysis software on page 6 and a guide for CRASY data analysis on the following pages. If you want to learn more about the CRASY experiment, navigate to the landing page. Required Software You will require the following software: (1) Python (I suggest to install the “Miniconda” package.) (2) The crasyPlots program. If you have trouble installing or running the software, watch the walk-through video below. Data […]
Scientific writing Style-Guide
The following are some resources that may help you with your scientific writing. Scientific writing can be a challenge even for native writers, so don’t despair. Practice makes perfect and you have to write to learn to write. If you have to write a report, please use a template and submit your report with double line spacing for easier correction. I propose the template from JACS for MS Word or Latex. To master structure and style, please refer to the relevant Style Guides. Here is a short 1-page cheat sheet […]
IR Correlation Table and other Lab Course Material
The introduction slides for the lab course can be found on the UNIST Blackboard system. Vibrational absorption frequencies for common chemical groups are listed in the IR_correlation_table. Relevant vibrational Raman reference spectra [1] are summarized in this document: Literature Raman spectra. You can ask questions to the teaching assistants 인호 and Begum Ozer. [1] SDBSWeb : (National Institute of Advanced Industrial Science and Technology, Aug. 9, 2016)
Affordable translation stage for spectroscopy
The translation stage ‘Standa 960-0060’ sells for $399 and offers a full-step resolution of 1.25 micrometer (200 steps/turn, 250 micrometer per turn). With factor 8 or 16 microstepping, the resolution should be sufficient for interferometric experiments with visible light (156 or 78 nm resolution). The stepper can be easily controlled with an Arduino-based USB controller for a cost of less than $40, but a little soldering is required. |
19cecf674af30c24 | Quantum Physics For Dummies, Revised Edition
In quantum physics, you can decouple systems of particles that you can distinguish — that is, systems of identifiably different particles — into linearly independent equations. To illustrate this, suppose you have a system of many different types of cars floating around in space. You can distinguish all those cars because they’re all different — they have different masses, for one thing.
Now say that each car interacts with its own potential — that is, the potential that any one car sees doesn’t depend on any other car. That means that the potential for all cars is just the sum of the individual potentials each car sees, which looks like this, assuming you have N cars:
Being able to cut the potential energy up into a sum of independent terms like this makes life a lot easier. Here’s what the Hamiltonian looks like:
Notice how much simpler this equation is than this Hamiltonian for the hydrogen atom:
Note that you can separate the previous equation for the potential of all cars into N different equations:
And the total energy is just the sum of the energies of the individual cars:
And the wave function is just the product of the individual wave functions:
except it stands for a product of terms, not a sum, and ni refers to all the quantum numbers of the ith particle.
As you can see, when the particles you’re working with are distinguishable and subject to independent potentials, the problem of handling many of them becomes simpler. You can break the system up into N independent one-particle systems. The total energy is just the sum of the individual energies of each particle. The Schrödinger equation breaks down into N different equations. And the wave function ends up just being the product of the wave functions of the N different particles.
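In standard notation (a sketch added here; the book displays these as equations that are not reproduced above), the separation described in this passage is:

H = Σ_{i=1}^{N} H_i,  with  H_i = −(ħ²/2m_i)∇_i² + V_i(r_i)

E = Σ_{i=1}^{N} E_i

ψ(r_1, ..., r_N) = Π_{i=1}^{N} ψ_{n_i}(r_i)

where each single-particle Schrödinger equation H_i ψ_{n_i} = E_i ψ_{n_i} can be solved on its own.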
Take a look at an example. Say you have four particles, each with a different mass, in a square well. You want to find the energy and the wave function of this system. Here's what the potential of the square well looks like for each of the four noninteracting particles:
Here’s what the Schrödinger equation looks like:
You can separate the preceding equation into four one-particle equations:
The energy levels are
And because the total energy is the sum of the individual energies,
the energy in general is
So here’s the energy of the ground state — where all particles are in their ground states, n1 = n2 = n3 = n4 = 1:
For a one-dimensional system with a particle in a square well, the wave function is
The wave function for the four-particle system is just the product of the individual wave functions, so it looks like this:
For example, for the ground state, n1 = n2 = n3 = n4 = 1, you have
So as you can see, systems of N independent, distinguishable particles are often susceptible to solution — all you have to do is to break them up into N independent equations.
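As a numerical illustration (added here; the particle masses and well width are made-up values, not taken from the book), the ground-state energy of four noninteracting particles in a one-dimensional infinite square well is just the sum of four single-particle energies E_i = n_i²·π²·ħ²/(2·m_i·L²) with every n_i = 1:

import math

hbar = 1.054571817e-34        # reduced Planck constant, J*s
L = 1.0e-9                    # well width, m (hypothetical)
masses = [1.0e-30, 2.0e-30, 3.0e-30, 4.0e-30]   # four different masses, kg (hypothetical)

def level_energy(n, m):
    """Single-particle energy for quantum number n in an infinite square well."""
    return (n**2 * math.pi**2 * hbar**2) / (2 * m * L**2)

# Ground state: every particle sits in its own n = 1 level
ground_state = sum(level_energy(1, m) for m in masses)
print(f"Ground-state energy: {ground_state:.3e} J")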
|
cb290788a68f80e9 | SEMITIP V6, Technical Manual
Technical instructions are provided below for SEMITIP version 6, in the sections: Additional information on the theory behind the SEMITIP package can be found in Refs. [1]-[6] and in the documents:
The SEMITIP version 6 software package contains a set of programs for computing the electrostatic potential and resulting tunnel current between a metallic tip and a semiconductor sample. A brief introduction to the package is provided at the main SEMITIP V6 Web Page, including a discussion of various dimensionalities, geometries, and types of tunnel current computation that can be handled. A different program is employed for each dimensionality, geometry, or type of current computation. Links to those programs, including documentation for each, are included on the main SEMITIP V6 Web Page. The purpose of this Technical Manual is to describe various methods employed in the algorithms, coding, or usage of the SEMITIP package that are common to a range of the programs.
As mentioned above, the SEMITIP package permits computations for many different dimensionalities, geometries, and types of tunnel current computations. The goal of the package is to both provide users with the capability to easily perform computations using existing programs suitable for their problem, and to enable them to develop new routines or programs for handling situations not covered by presently existing programs. The first goal could in principle be handled by a single master program that handled every possible type of situation. That code would, however, be very unwieldy since by its nature it would accommodate many different situations. For development of new code by users, generally there is some specific situation (i.e. dimensionality, geometry, type of current computation, etc.) that is being considered. Then, it is very much easier for a user to modify existing code that deals with that specific situation rather than dealing with code that covers every possible situation. This is the reason that separate programs have been developed to deal with the various dimensionalities, geometries, and types of tunnel current computation that are covered within the SEMITIP package.
Input Parameters
All of the programs in the SEMITIP package take their input from a file named FORT.9 (i.e. with a different FORT.9 file for each program). Comments within each of those files specify the meaning of each parameter, with additional information for certain parameters provided below. The line numbers in the description below are applicable to the Uni2 program for parameters having to do with computation of potential, to the UniInt2 program for parameters having to do with computation of current by integration of the Schrödinger equation, to the UniIntSC2 program for parameters having to do with self-consistency when computing the current by integration of the Schrödinger equation (only relevant for situations of inversion or accumulation), and to the UniPlane3 program for parameters having to do with computing the current using a plane wave expansion. However, the relevant line numbers for the FORT.9 files of other programs can be easily deduced from those listed below since the parameters within every input file are clearly labeled within the file itself.
Parameters for computation of potential: (line numbers according to FORT.9 file of Uni2)
Lines 7-18 refer to parameters defining the semiconductor. For an inhomogeneous semiconductor containing various regions of different doping or various surface areas with different surface charge densities, see the section on Inhomogeneous Geometry. Lines 21-28 refer to parameters that define the distribution in energy of surface states on the semiconductor. By default two types of distributions can be input, and the charge densities from these are summed together in the program for computing the total charge density (physically, many surfaces might have intrinsic and extrinsic surface states, the former from the states of the intrinsic surface atoms and the latter from defects on the surface, and hence the use of two distributions). The distributions themselves are defined in a user-defined function SIG which is part of the main program (for further discussion see the section on User-defined Functions). The parameters below correspond to the default form of the SIG function. By default two types of distributions are possible: a uniform distribution of states and a distribution containing two Gaussians. In the first case the parameters are just the density of states and the charge neutrality level [i.e. the level that separates states of acceptor character (negatively charged when occupied by an electron and neutral when empty) from those with donor character (neutral when occupied by an electron and positive when empty)]. To use this type of uniform distribution, the value of the FWHM parameter should be zero (and the centroid energy is not used by the program, although it still should be present in the FORT.9 parameter list). For the Gaussian type distribution, nonzero values for the FWHM and centroid energies are supplied. The former is the full-width-at-half-maximum for each Gaussian and the latter is the position of the peak of each Gaussian relative to the charge neutrality level (there's one Gaussian below that level and one above it). One additional issue regarding surface states is whether or not the distributions change (e.g. go to zero) for energies within the bulk valence or conduction bands. This constraint can be applied by a line of code within the SIG routine, and for the default versions of SIG there is no change of the distributions when the energies fall within the bulk bands.
Lines 30-36 refer to parameters that determine details of the finite-difference solution for the potential. This solution is performed first on a specified number of grid points. Then, the number of grid points is doubled (and their spacing halved), and the computation performed on that doubled grid (using as an initial condition the solution from the previous grid). This procedure of doubling the number of grid points in each coordinate direction is referred to as "scaling" of the solution, and the scaling procedure is repeated a specified number of times. The finite-difference procedure itself is an iterative one, and within each scaling step it is performed until the change in Pot0 (the potential at the point on the semiconductor surface opposite the tip apex) is less than a specified convergence parameter value for both the present iteration and the previous one, or the number of iterations exceeds a specified maximum number. Concerning the initial spacing between grid points, this is straightforward for the points in the vacuum, but in the radial direction or in the direction into the semiconductor some estimation of the initial spacing must be made. This estimation is made based on values of semiconductor doping and tip radius as follows: First a 1D depletion length of the semiconductor is computed, sqrt(2 epsilon ΔV / e² N), where epsilon is the dielectric constant and ΔV corresponds to the tip-sample voltage or to 1 V, whichever is greater. The grid size is initially assigned the value of the tip radius. If the radius of a protrusion on the end of the tip is nonzero, then the grid size is assigned to that value or to its previous value, whichever is less. Then, the grid size is assigned to the above 1D estimate of depletion length divided by the number of grid points, or to its previous value, whichever is less. Finally, this value for the initial grid size is multiplied by the factor on line 33 (e.g. to reduce its value further, and hence achieve a finer grid). The grid size thus determined is used for both the radial direction and the z-direction into the semiconductor, except if the multiplier parameter on line 33 has a value <= 0. In that case, the following two additional lines provide the values of the grid size in the radial direction and the z-direction (or, for a 1D computation, just one additional line is used for providing the grid spacing in the z-direction). The resulting grid sizes are used to compute the spatially varying spacing between grid points using the algorithm described in the section on Spatial Coordinates.
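A minimal sketch of this estimation logic in Python is given below; it paraphrases the prose above, the variable names and unit conventions are mine, and the actual SEMITIP code is Fortran, so details may differ.

import math

def initial_grid_size(epsilon, doping, bias, tip_radius, protrusion_radius,
                      n_points, multiplier, e=1.602e-19):
    """Estimate the initial grid spacing following the procedure described above."""
    delta_v = max(abs(bias), 1.0)                    # tip-sample voltage or 1 V, whichever is greater
    depletion = math.sqrt(2 * epsilon * delta_v / (e * doping))   # 1D depletion-length estimate (SI form, bias in volts)

    size = tip_radius                                # start from the tip radius
    if protrusion_radius > 0:
        size = min(size, protrusion_radius)          # protrusion radius, if smaller
    size = min(size, depletion / n_points)           # depletion length / number of grid points, if smaller
    return size * multiplier                         # optional refinement factor (line 33 of FORT.9)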
Parameters for computation of tunnel current by integration of Schrödinger's equation: (line numbers according to FORT.9 file of UniInt2)
Computation of Bulk Charge Densities
Bulk charge densities are computed in the effective-mass approximation, using equations as specified in Ref. 1. These charge densities are evaluated repeatedly within the programs - at each grid point, each iteration of the solution, and each 1D search step to solve the nonlinear aspect of Poisson's equation. To reduce the computation time needed for these evaluations, a table of charge density values is computed at the start of each program (or multiple tables are computed, for an inhomogeneous semiconductor). The maximum possible number of elements in this table is specified by NEDIM in the PARAMETER statement within each program, and the actual number of elements used is specified in the FORT.9 input file (line 37). The program uses a particular algorithm to decide on the energy bounds of this table, and if a charge density value is needed that falls outside of these bounds, then that charge density is evaluated from the defining equations.
Computation of Surface Charge Densities
The presence of electronic surface states gives rise to surface charge densities. There are in general two broad categories of surface states that can exist on the semiconductor, one being intrinsic states from the dangling bonds that exist within each unit cell of the structure, and the second being extrinsic states that are sparsely distributed over the surface and arise from defects such as adsorbates or surface steps. For surfaces such as GaAs(110) that do not have any intrinsic states within the band gap, tip-induced band bending plays a particularly large role. Such surfaces can still have extrinsic surface states, which will affect this band bending. (Also, even for GaAs(110), intrinsic surface states located within the conduction band play an important role for situations of electron accumulation, as discussed in Ref. [5]). By default, the SEMITIP programs allow for input of two types of surface charge densities. Both of these densities are treated identically within the program, with the reason for allowing two inputs being to treat situations such as GaAs(110) for which both intrinsic and extrinsic states can play a significant role in determining the band bending.
Surface charge densities are defined by user-specified distribution(s), defined in the routine SIG. In the default form of those routines, a distribution that is uniform in energy, or one consisting of two Gaussian functions, are provided, but any other function can also be specified. (In principle, a function that varies very quickly with energy may be difficult for the program to handle, but no such limitation has yet been encountered). In addition to the energy distribution, the charge neutrality level for the distribution must also be specified, which is the energy above which the states are negative when filled and neutral when empty (i.e. acceptor like, as for conduction band states) and below which they are neutral when filled and positive when empty (i.e. donor like, as for valence band states). In analogy to the procedure for bulk charge densities, a table of surface charge densities is constructed at the start of each program. For the surface charge densities in particular this table assumes a zero temperature form for their occupation. Computations using occupations computed at nonzero temperature are possible (using the switch on line 29 of the FORT.9 input file), but in that case the table is not used and the occupations are always computed from the defining equations, thus requiring considerably more computation time. It is rare that the explicit inclusion of the temperature dependence of the occupation significantly affects the results, i.e. to an extent beyond what could be accomplished by a slight shift in one of the surface state parameters (such as the charge neutrality level), so that in most cases this temperature dependence need not be included in the computation.
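A sketch of what such a two-Gaussian density of surface states might look like is given here in Python; this is only an illustration, not the actual Fortran SIG routine, and the parameter names and normalization are assumptions.

import math

def surface_dos(energy, density, neutrality_level, fwhm, centroid):
    """Two Gaussians centered symmetrically about the charge neutrality level.

    energy            - energy at which the density of states is evaluated (eV)
    density           - overall density of states (states/cm^2/eV)
    neutrality_level  - charge neutrality level (eV)
    fwhm              - full width at half maximum of each Gaussian (eV); 0 selects a uniform distribution
    centroid          - offset of each Gaussian peak from the neutrality level (eV)
    """
    if fwhm == 0:
        return density                               # uniform distribution of states
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # convert FWHM to a Gaussian standard deviation
    donor_peak = neutrality_level - centroid         # donor-like states below the neutrality level
    acceptor_peak = neutrality_level + centroid      # acceptor-like states above it
    gauss = lambda e0: math.exp(-0.5 * ((energy - e0) / sigma) ** 2)
    return density * (gauss(donor_peak) + gauss(acceptor_peak))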
It is important to realize that SEMITIP, in its present form, does not allow for any computation of tunnel current associated with the defined distributions of surface states. Rather, the defined charge density of the surface states is used only in computing the electrostatic potential, whereas the tunnel current is based entirely on bulk bands (possibly including localized regions of the potential as occur e.g. for a buried quantum dot). In some future version of SEMITIP it is possible that currents due to surface states could be included; the surface states in that case would be defined by a surface band structure, and the charge density for that band of states would be computed by an integral over the band structure (rather than from a user-defined density of states, as presently used).
A computation of electrostatic potential is said to be self-consistent if the solution for the potential contains full and proper dependence on the electronic states of the problem, with the electronic states themselves depending on the potential. The default mode of charge density computation within SEMITIP is a semi-classical one, in which defined bands of bulk or surface states are shifted in accordance with the electrostatic potential and are occupied according to a fixed (known) Fermi energy. For a semiconductor in depletion, this type of computation is automatically (trivially) self-consistent, since the only relevant charge density of the problem is that due to the ionized dopants and this charge density is accurately treated in the semi-classical approximation. Incorporating user-defined distributions of surface states into the solution for the potential is also self-consistent in the same trivial sense (assuming that those surface states are not modified by the application of the potential).
Nontrivial situations of self-consistency occur for accumulation or inversion in the semiconductor. These situations are not properly handled by a semi-classical treatment (at least not when only one or a few quantum states are occupied), but the UniIntSC1 and UniIntSC2 programs are specially designed to treat these cases. The quantum states are computed by the intcurr routine, i.e. the same routine that is used for computing tunneling currents, but here used for constructing the charge densities needed for the self-consistent computation. The computation of the 3D charge densities is performed as a series of 1D computations at different r values, something that is appropriate only for a potential that varies slowly in the radial direction.
For self-consistent computations with an even more localized potential, e.g. as might occur at a quantum dot or near a point charge on a surface, the computation of the charge densities would have to employ the plane wave expansion method, as in UniPlane3. The result would be restricted to a limited region about the point on the surface opposite the tip apex, i.e. a periodic repetition of this region. In any case, a program for handling such computations has not been developed to date.
Specification of Fixed Charge on Surface or in Bulk
A problem of considerable interest is the computation of the potential (and possibly the current) in the presence of a fixed charge, e.g. a point charge, on or near a surface. SEMITIP was not originally designed to handle fixed charges; rather, the parameters within the FORT.9 file refer everywhere to charge densities that vary in accordance with the Fermi energy. Fixed charges, whose magnitude does not vary with the Fermi energy, can still be handled in the program, but they are handled in a slightly "kludgy" manner, by making user-defined modifications to the RHOSURF or RHOBULK routines; see example 6 of Uni2 or example 2 of Uni3. In these examples, the location and magnitude of the fixed charge is explicitly defined within the RHOSURF or RHOBULK routine, as in the sketch below. An alternative method, of course, is for the user to modify the input stream of the FORT.9 file to include such parameters, which would then be passed to the RHOBULK or RHOSURF routines through a user-defined COMMON block.
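A minimal sketch of that approach follows (illustrative only: the variable names, the COMMON block, and the patch test are hypothetical; the actual argument lists and units are those of the RHOSURF routine in the SEMITIP source and the cited examples):

    ! Sketch of adding a fixed charge within a user-modified surface
    ! charge routine (hypothetical names; the real RHOSURF has its own
    ! argument list and units).  r and phi locate the evaluation point on
    ! the surface; qfixed is the total fixed charge, spread over a patch
    ! of area afix centred at (rfix, phifix).  ef is carried only to
    ! mirror the Fermi-energy argument of the usual routine.
    function rho_surf_sketch(r, phi, ef) result(sigma)
      implicit none
      real(8), intent(in) :: r, phi, ef
      real(8) :: sigma
      real(8) :: qfixed, rfix, phifix, afix
      common /fixq/ qfixed, rfix, phifix, afix   ! filled by the main program
      sigma = 0.d0      ! the usual Fermi-level-dependent density would go here
      if (abs(r - rfix) < sqrt(afix)/2.d0 .and. &
          abs(r*(phi - phifix)) < sqrt(afix)/2.d0) then
         sigma = sigma + qfixed/afix             ! fixed areal charge density
      end if
    end function rho_surf_sketch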
It is important to note that a number of limitations exist for computations involving fixed charges:
1. First, the reader is reminded that the default manner in which SEMITIP handles all charge densities is a semi-classical one. This method is only valid for a slowly varying potential. Some situations that require explicit quantum computations are also handled within SEMITIP, such as semiconductor accumulation or inversion, but the package does not automatically handle all such situations. For a point charge on or near a surface, the electrostatic potential when the semiconductor is in depletion can be handled by the program (subject to some additional limitations described below), but situations of accumulation around the point charge (that is, occupation of localized states arising from the point charge as well as screening by extended states around the point charge) are not properly handled by the present package since it does not handle self-consistent computations with a plane wave basis. (Even if it did handle such computations, the limited spatial extent of any plane wave type computation would also be a significant restriction on the results).
2. A second limitation in defining fixed charge on the surface or in the bulk has to do with the grid used in the finite-difference computation. As illustrated in example 6 of Uni2 and example 2 of Uni3, the spatial extent of the charge is defined by statements within the RHOSURF or RHOBULK routines. However, if that spatial extent does not precisely overlap with the areas or volumes subtended by the grid points of the computation, then the actual charge that is computed will differ from the intended charge. Hence, a sufficiently fine grid and/or an extended charge must be used so that this discrepancy is not too large. For a charge located some distance from the tip position (central axis), there is no problem in modeling that charge as being extended over some region (since the effect of the charge is essentially the same as if it were localized at a single point). But, as the charge approaches the central axis, the area or region over which it is defined must become smaller and smaller, with the grid size for the computation necessarily being correspondingly small. The user must ensure that these sizes for the spatial extent of a fixed charge and for the grid are appropriate to the particular situation being considered.
Solution of Poisson's Equation
Poisson's equation, including the boundary condition at the semiconductor surface and possible nonlinearity arising from bulk or surface charge densities, is solved according to the method described in Ref. [1] (also see footnote 36 of Ref. [2] and point 1 of the Internal Parameters section below).
Spatial Coordinates and Grids
The solution for the potential is accomplished using cylindrical coordinates in the semiconductor and generalized prolate spheroidal coordinates in the vacuum. The latter are chosen to precisely match the shape of the probe tip (not including any protrusion on the end of the tip), as described in Ref. [4] and in Coordinate System with accompanying Diagram. The spacing of the grid points in these coordinates is not uniform, but increases as the radial distance r away from the central axis increases or as the distance z into the semiconductor increases. For computation of the tunnel current, this type of grid with quite large step sizes at large r or z is not suitable, and hence new grids with more nearly uniform spacing are formed. This regridding is done by the routine POTEXPAND for a tunnel current computation by integration of the Schrödinger equation, or by POTPERIOD3 for a computation using the plane wave expansion method.
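Conceptually, that step amounts to re-sampling the potential onto a finer, more nearly uniform grid before the current integration. A minimal one-dimensional sketch of the idea (illustrative only, with hypothetical names, and far simpler than the actual POTEXPAND or POTPERIOD3 routines) is:

    ! 1D sketch of re-sampling a potential known on a non-uniform grid
    ! z(1:n), pot(1:n) onto a uniform grid of nu points, using linear
    ! interpolation.  The grid z is assumed to be monotonically increasing.
    subroutine regrid_uniform(n, z, pot, nu, zu, potu)
      implicit none
      integer, intent(in)  :: n, nu
      real(8), intent(in)  :: z(n), pot(n)
      real(8), intent(out) :: zu(nu), potu(nu)
      integer :: i, j
      real(8) :: f
      do i = 1, nu
         zu(i) = z(1) + (z(n) - z(1)) * (i - 1) / dble(nu - 1)
         j = 1
         do while (j < n - 1 .and. z(j+1) < zu(i))
            j = j + 1
         end do
         f = (zu(i) - z(j)) / (z(j+1) - z(j))
         potu(i) = (1.d0 - f)*pot(j) + f*pot(j+1)
      end do
    end subroutine regrid_uniform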
Computation of Tunnel Current
SEMITIP presently does not allow computation of tunnel currents from surface states. For the case of bulk states, two methods are available for computation of the tunnel current. In both, the Bardeen method together with the Tersoff-Hamann approximation for a sharp tip is used to write the current as a summation over the quantum states of the semiconductor (and the metallic probe tip). To obtain those states, the first method employs a "central axis approximation" in which numerical integration of the Schrödinger equation is performed only along the central axis of the problem. This method would be an exact solution for a planar geometry; it is an approximation for a nonplanar geometry, but it still works quite well even for probe tips as sharp as 1 nm radius of curvature (see Ref. [6]). However, if regions of localized potential exist in the semiconductor, such as those due to a quantum dot, then that method is not applicable. Hence, the second available method is an expansion using plane waves (matched to decaying exponentials in the vacuum), from which the solution of the Schrödinger equation is found by the usual eigenvalue method. Due to the significant computational demands of this plane wave method, it can be applied only over a limited region of space centered around the point on the semiconductor surface opposite the tip apex.
Inhomogeneous Geometry
A new feature in version 6 of SEMITIP is the ability to handle inhomogeneous semiconductors, with the inhomogeneity existing in the bulk or on the surface. Actually, even in the prior versions of SEMITIP it was possible to handle certain special types of inhomogeneity (such as the presence of fixed charges) by explicit modification of the RHOBULK or RHOSURF routines, as described above under Specification of Fixed Charge on Surface or in Bulk. But within version 6 a general method for handling inhomogeneity in many of the parameters describing the bulk or surface has been implemented. Such inhomogeneity does not represent any problem within a finite-difference type of computation that SEMITIP employs for the electrostatic potential, although whether or not a computation of tunnel current can truly handle an inhomogeneity depends very much on the range of the varying potential, as further discussed under Computation of Tunnel Current.
Inhomogeneities within the bulk of the semiconductor are referred to as different "regions" of the material, and inhomogeneities on the surface are referred to as different "areas". The user defines the spatial boundaries of the different regions or areas by using user-defined functions RHOBULK or RHOSURF, respectively, as described in the following section. Parameters specifying the properties of the different regions or areas (e.g. their doping or their surface state density) can, in most cases, be input in the normal fashion from the FORT.9 input file. The Mult1, Mult2, and Mult3 programs give examples of these procedures. Again, for specification of fixed charges within the bulk or on the surface the situation is different, as described under Specification of Fixed Charge on Surface or in Bulk.
In the present form of SEMITIP V6, computations of tunnel current for an inhomogeneous geometry are made using only a constant effective mass in the semiconductor. That mass is the one from the first defined region of the semiconductor. Thus, even if different masses for the various regions are defined within the FORT.9 input file, only the effective mass from region 1 is used in the computation of the current. (This limitation could be removed with some modest extension of the intcurr routine, but that extension has not yet been implemented).
When multiple regions are defined in the bulk material, the energies of their respective valence band maxima can differ. Many of the quantities input to or output from SEMITIP (e.g. Fermi energy, charge neutrality level, etc.) use the valence band maximum as the zero of energy. In the situation with multiple regions, it is the valence band maximum of the region located at the origin of the coordinate system that is used as the zero of energy for all quantities, i.e. even those quantities associated with other regions.
User-defined Functions
A variety of user-defined functions are included in the same file as the main calling program. These functions allow the definition of quantities that cannot, in general, be specified simply by numbers in the FORT.9 input file. For example, a protrusion of arbitrary shape can be appended onto the end of the hyperbolic tip, or surface states with an arbitrary distribution in energy can be defined and their effect included in the computations of the potential. All user-defined functions are already included in the general-purpose programs of the SEMITIP package, with default values for the functions as specified below.
The vast majority of SEMITIP is written using standard FORTRAN features, and it thus should be compatible with any FORTRAN platform. The programs are, however, designed for use under GNU FORTRAN, and a few compiler-specific FORTRAN features may be employed.
COMMON blocks
COMMON blocks serve an important role in the SEMITIP programs by enabling the passing of parameters between the main programs and the user-defined functions, and in some cases to various supporting routines as well. The declarations and roles of the various COMMON blocks are specified below. In situations where a block is used only between the main program and a user-defined function (i.e. it does not appear in any supporting routine), modification by the user to accommodate some new situation is possible.
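As a generic illustration (this is not one of the actual SEMITIP blocks), a user-defined COMMON block can carry parameters from the main program into a user-defined function without changing the function's argument list:

    ! Illustrative only: a user-defined COMMON block carrying two
    ! parameters from the main program to a user-defined function.
    program main_sketch
      implicit none
      real(8) :: par1, par2, prot_sketch
      common /userpar/ par1, par2
      par1 = 1.d0          ! e.g. values read from an input file
      par2 = 0.5d0
      write(*,*) prot_sketch(2.d0)
    end program main_sketch

    function prot_sketch(r) result(val)
      implicit none
      real(8), intent(in) :: r
      real(8) :: val, par1, par2
      common /userpar/ par1, par2
      val = par1 * exp(-r/par2)   ! uses the parameters set in the main program
    end function prot_sketch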
Internal Parameters
Nearly all of the parameters that control the functioning of the SEMITIP programs can be accessed through the FORT.9 input file, the user-defined functions, or the PARAMETER and COMMON statements described above. There are, however, a few additional parameters that are internal to certain routines, modification of which requires editing the source code and then re-compiling and linking. These internal parameters include:
1. A one-dimensional search is used as part of the finite-difference method for solving the nonlinear Poisson equation, as described in Ref. [1]. This search is performed to an accuracy in energy of max(PotTip,1)/10^6, where max(PotTip,1) is the potential on the probe tip or 1 eV, whichever is greater. This precision parameter occurs several times within the semitip1, semitip2, and semitip3 routines. It is difficult to imagine a situation where this parameter would need to be changed, since a coarser precision would not save a significant amount of computation time and it is unlikely that a higher precision would ever be needed (except for situations with very low temperature and a small energy scale, in which case the precision of the entire computation would likely also require modification).
2. The IBC parameter specifying the boundary condition for the potential computation is set to 0 (Dirichlet boundary condition) at the top of the semitip1, semitip2, and semitip3 routines. See Boundary Conditions for more discussion of this parameter. As discussed there, a value of 1 for von Neumann boundary conditions can be used, but it is useful only for particular cases and the value of 0 is most suitable for general purposes.
3. For both bulk and surface charge densities, tables of those values are constructed in the main program, as discussed in the sections on Computation of Bulk Charge Densities and Computation of Surface Charge Densities. The energy bounds of those tables are determined by a particular algorithm within the main program. For some special situations, charge densities might be required that are outside this range, although that is not a problem since in those cases the densities are evaluated from the defining equations. Nevertheless, if it is desired for some reason to change the bounds of the charge density tables, then that must be done by modifying the source code in the main program.
4. For self-consistent computations, there is a parameter called INITCD within the UniIntSC1 and UniIntSC2 programs that controls the initialization of the array used for constructing the charge densities. By default this parameter is set to 1, which corresponds to the array being initialized to semi-classical values. An alternative is INITCD=0, which initializes the array to zero, but that has not proved useful in any computations to date.
5. For computation of tunnel current, a two-band model for the band structure is not used by default. There are situations, however, where use of this model is appropriate, as discussed in Kane's Two-band Model. That document also explains how to modify the intcurr routine to enable a two-band type of computation.
Changes relative to SEMITIP versions 1-5
Due to the considerable expansion in the capability of SEMITIP version 6 compared to versions 1-5, a number of changes to the naming conventions for programs and variables were needed. These changes included:
Additionally, in version 6, considerable repackaging of the routines occurred so as to enable the common usage of particular routines across many of the specific calling programs. The new program structure is described in the sections above.
For users upgrading from versions 4 or 5 to version 6, a few small changes in the input files (FORT.9) should be noted. In version 6, all input related to the computation and plotting of the potential is given before that for the computation of current. Hence, e.g. whereas in versions 4 or 5, specification of parameters for the contour plots occurred at the very end of the input file, those entries now occur at the end of the entries for the potential computation, and they are followed by the input parameters (if any) for the computation of the tunnel current.
One technical change occurred in version 6, namely, the introduction of a new method for computing derivatives for the variable-spacing grids that are employed in the semitip.f finite-difference routine. This method is further described in Computation of Derivatives for Variable Grid Spacing.
1. R. M. Feenstra, Electrostatic Potential for a Hyperbolic Probe Tip near a Semiconductor, J. Vac. Sci. Technol. B 21, 2080 (2003).
2. R. M. Feenstra, S. Gaan, G. Meyer, and K. H. Rieder, Low-temperature tunneling spectroscopy of Ge(111)c(2x8) surfaces, Phys. Rev. B 71, 125316 (2005).
3. R. M. Feenstra, Y. Dong, M. P. Semtsiv, and W. T. Masselink, Influence of Tip-induced Band Bending on Tunneling Spectra of Semiconductor Surfaces, Nanotechnology 18, 044015 (2007).
4. Y. Dong, R. M. Feenstra, M. P. Semtsiv, and W. T. Masselink, Band Offsets of InGaP/GaAs Heterojunctions by Scanning Tunneling Spectroscopy, J. Appl. Phys. 103, 073704 (2008).
5. N. Ishida, K. Sueoka, and R. M. Feenstra, Influence of surface states on tunneling spectra of n-type GaAs(110) surfaces, Phys. Rev. B 80, 075320 (2009).
6. S. Gaan, G. He, R. M. Feenstra, J. Walker, and E. Towe, Size, shape, composition, and electronic properties of InAs/GaAs quantum dots by scanning tunneling microscopy and spectroscopy, J. Appl. Phys. 108, 114315 (2010). |
570171d11d68f053 | In 2013, David Lopez and I discovered some weird MRI signals. When I saw the time series data, I knew immediately that those signals were generated by something we couldn't explain with conventional MRI physics.
Around 6 years earlier, I had started thinking about quantum brain processes. For me, it was clear from the beginning that a quantum brain is not restricted to microscopic processes. Instead, I believe the quantum brain works far beyond Planck's constant.
Therefore, in 2013 I was thinking about alternative quantum models based around the Fokker-Planck equation. I knew the Fokker-Planck equation well because I had used it in my PhD thesis to quantify cerebral blood perfusion. And of course, the Fokker-Planck equation can be transformed into a Schrödinger equation.
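For readers who have not seen that connection, here is the standard one-dimensional mapping, sketched for an overdamped process with drift derived from a potential Φ(x) and constant diffusion coefficient D (conventions vary; this is only a reminder of the general textbook result, not the specific model from my thesis):

\[
\frac{\partial P}{\partial t} = \frac{\partial}{\partial x}\left[\Phi'(x)\,P\right] + D\,\frac{\partial^2 P}{\partial x^2},
\qquad
P(x,t) = e^{-\Phi(x)/2D}\,\psi(x,t)
\;\Longrightarrow\;
\frac{\partial \psi}{\partial t} = D\,\frac{\partial^2 \psi}{\partial x^2} - V_s(x)\,\psi,
\quad
V_s = \frac{\Phi'^2}{4D} - \frac{\Phi''}{2},
\]

which has exactly the form of a Schrödinger equation in imaginary time.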
Having quantum in mind, David and I collected some more data to put a cohesive story together. However, we realized rather quickly that hardly anyone other than esotericists believes in macroscopic quantum brain mechanics.
We were confronted with an enormous respect for quantum mechanics. It seems that quantum mechanics has become untouchable and somehow "holy". Of course, the success of quantum mechanics can't be overlooked, but it was developed for non-relativistic closed systems. Why should quantum mechanics not work in open systems?
Quantum theories for biology have to step away from that thinking; biology is not a closed system. It also isn't classical. Why isn't it classical? Because if it was, we could easily explain what life or consciousness is.
Here, we want to promote the idea that life is quantum beyond Planck's constant. We will include in our discussion something that has not had much space in science yet: consciousness. Why would anybody leave it out of a unified theory? If physics intends to unify its laws, consciousness must be in there. And just because we can't explain it doesn't make it unscientific; it is the other way around. To leave it out shows that we are missing something essential.
In the following, I will present my collection of thoughts around quantum biology, quantum brain and mind etc. |
569354ec2a4bde1b | Scholarly article on topic 'Focus on Cold and Ultracold Molecules'
Focus on Cold and Ultracold Molecules Academic research paper on "Nano-technology"
New Journal of Physics
The open-access journal for physics
Challenges of laser-cooling molecular ions
Jason H V Nguyen 1,4, C Ricardo Viteri 2,4,5, Edward G Hohenstein 3, C David Sherrill 3, Kenneth R Brown 2,6 and Brian Odom 1,6
1 Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
2 Schools of Chemistry and Biochemistry; Computational Science and Engineering; and Physics, Georgia Institute of Technology, Atlanta, GA 30332, USA
3 Center for Computational Molecular Science and Technology,
School of Chemistry and Biochemistry, and School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA E-mail: and
New Journal of Physics 13 (2011) 063023 (28pp) Received 21 March 2011 Published 13 June 2011 Online at
Abstract. The direct laser cooling of neutral diatomic molecules in molecular beams suggests that trapped molecular ions can also be laser cooled. The long storage time and spatial localization of trapped molecular ions provides an opportunity for multi-step cooling strategies, but also requires careful consideration of rare molecular transitions. We briefly summarize the requirements that a diatomic molecule must meet for laser cooling, and we identify a few potential molecular ion candidates. We then carry out a detailed computational study of the candidates BH+ and AlH+, including improved ab initio calculations of the electronic state potential energy surfaces and transition rates for rare dissociation events. On the basis of an analysis of the population dynamics, we determine which transitions must be addressed for laser cooling, and compare experimental schemes using continuous-wave and pulsed lasers.
Online supplementary data available from mmedia
4 These authors contributed equally.
5 Present address: Entanglement Technologies, Inc., 42 Adrian Ct., Burlingame, CA 94010, USA.
6 Authors to whom any correspondence should be addressed.
Contents
1. Introduction
2. General experimental considerations
2.1. Diagonal Franck-Condon factors
2.2. Elimination of rotational branching
2.3. Other decay paths
3. Example laser cooling cycle for BH+ and AlH+
3.1. Methods
3.2. Cooling scheme
3.3. Simulation results
4. Prospects for cooling
5. Conclusions
Acknowledgments
Appendix. Potential candidates
References
1. Introduction
The long-held notion that laser-cooling molecules is infeasible has recently been overturned by the transverse laser cooling of SrF [1, 2]. The direct Doppler cooling was possible due to the nearly diagonal Franck-Condon factors (FCFs) of the A-X transition and strong optical forces resulting from the short excited-state radiative lifetime. Diagonal FCFs minimize the number of vibrational states populated by spontaneous emission during the cooling time. Similar to previous proposals [3,4], the number of relevant rotational states involved in the cooling cycle is minimized by judicious choice of the initial angular momentum state [1].
In ion traps, laser-cooled atomic ions can be used to sympathetically cool any other co-trapped atomic or molecular ion species [5-7], and this technique is being used to study gas-phase atomic and molecular ions at very low temperatures in laboratories around the world. For example, sympathetically cooled ions have been used for mass spectrometry [8, 9], precision spectroscopy [10-15] and reaction measurements [16-22]. Because the impact parameter in ion-ion collisions is much larger than the molecular length scale, the internal states of sympathetically cooled molecular ions are undisturbed. A low-entropy internal state can be prepared either by producing the molecular ion through state-selective photoionization [23] or by taking advantage of the long lifetime of the sympathetically cooled molecular ions to optically pump the internal degrees of freedom [24-26]. Now that internal state control of sympathetically cooled molecular ions has been demonstrated, it is interesting to consider the possibility of eliminating the atomic coolant ions (and the accompanying laser equipment) and directly Doppler cooling certain molecular ions. Since molecular ions remain trapped for exceedingly long times independent of internal state, ion traps relax requirements on molecular transition moments and repump rates, thus offering a unique environment for molecular laser cooling. Direct cooling would also be desirable when the coolant ion may create unwanted complications (e.g. quantum information processing using trapped polar molecular ions coupled to external circuits [27]) or when the light driving the atomic cycling transition results in
a change in the molecules' internal state (e.g. resonant absorption or photodissociation). Achieving the necessary cycling of a strong transition would also allow straightforward direct fluorescence imaging of trapped molecular ions at the single-ion level.
On the basis of spectroscopic data available in the literature (see the appendix), our molecular ion survey found that BH+ [28-33] and AlH+ [29], [34-37] are among the most promising candidates. In this paper, we review all the identified challenges in maintaining a closed excitation scheme for the direct laser cooling of BH+ and AlH+ stored in ion traps. Unlike in the case of SrF, where slow vibrational decay relative to the interaction time allows a straightforward probabilistic approach to predicting repump requirements [1], designing a laser cooling experiment for trapped ions requires the careful modeling of vibrational decays within the ground state, since they result in the diffusion of parity and rotational quantum numbers. Additionally, for BH+ and AlH+ as with all molecular Doppler cooling experiments, rare decay and photofragmentation processes need to be considered, since they can occur on timescales comparable to Doppler cooling and much shorter than the ion trapping lifetime.
This paper is organized as follows. In section 2, we discuss the molecular properties required for Doppler cooling. In section 3, we consider in detail two cooling candidates, BH+ and AlH+, present new quantum chemical calculations for these species (section 3.1.1), discuss the calculation of the Einstein coefficients (section 3.1.2), and present the rate-equation model used to determine the number of cooling photons scattered (section 3.1.3). In section 3.2, we discuss the cooling scheme, including the effect of spin-rotation and spin-orbit splitting of the ²Σ⁺ and ²Π states, respectively. The results of our simulation are given in section 3.3, and we present our calculations for rare events, which terminate the cycling transition, in section 3.3.3. In section 4, we discuss the technological requirements for carrying out the proposed experiments, and we propose the use of femtosecond lasers to provide multiple repump wavelengths from a single source. Finally, section 5 summarizes the concepts, simulations and prospects of the Doppler cooling of molecular ions. In the appendix, we present a few additional classes of molecules that need to be studied in more detail to decide whether they are candidates for direct laser cooling experiments.
2. General experimental considerations
Doppler laser cooling requires the scattering of many photons, with each emitted photon on average carrying away an amount of energy proportional to the laser detuning from resonance [38]. To cool a two-level particle with a mass of ~10 amu and a visible transition linewidth of ~1 MHz from room temperature to millikelvin temperatures requires the scattering of 10⁴-10⁶ photons. Two-level systems are easily obtained in atoms, e.g. ²P-²S transitions in alkali metals (for neutrals) and alkaline earth metals (for ions). In both cases, the ground states have a closed shell with one valence electron, and the state of the atom is well described by the orbital and spin angular momentum of the valence electron [39].
Electronic two-level systems are impossible to obtain in molecules since the vibrational and rotational degrees of freedom introduce multiple decay paths not present in atoms. Spontaneous decay into non-cycling bound or dissociative states generally terminates the cycling transition
long before the Doppler cooling limit is reached; however, a carefully chosen molecule can significantly reduce the probability of such decays. The ideal molecule would possess an excited and electronic ground state with equivalent potential surfaces separated by an energy offset in the optical region, resulting in perfectly diagonal FCFs. The most intuitive of the types of
transitions that might lead to diagonal FCFs [40] are transitions that excite a single electron from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). This type of transition will have negligible effect on the bond length when both the HOMO and LUMO are either non-bonding or anti-bonding orbitals [41]. Alternatively, one may choose a molecule with an unpaired electron and a corresponding hole in one of the highest bonding orbitals, in which the optical transition moves the hole to another bonding orbital.
In addition to considering the FCFs, one must also take into account the effect of rotational branching due to spontaneous emission, unexpected decay paths due to the breakdown of the Born-Oppenheimer approximation and dipole-forbidden transitions.
2.1. Diagonal Franck-Condon factors
When a decay from the excited state to the ground state occurs, the branching ratio into final vibrational states is determined from the Franck-Condon overlap, which measures the similarity between the excited and ground vibrational wave functions [42]. Molecules that have similar excited-state and ground-state potential energy curves (PECs) reduce the total number of repumping lasers required, so it is useful to determine the repumping requirements as the two PECs are made more dissimilar.
High-accuracy quantum-chemical calculations of the excited PEC are not typically available; however, the Morse potential, with only three parameters, is a reasonable approximation for many bound electronic states. This approximation is particularly true near the bottom of a potential well, which is energetically distant from the region of non-adiabatic couplings that alter the dissociative asymptotes of the potentials. Assuming excited vibrational levels have infinite lifetimes, the number of transitions that are required to approximately close a cycling transition depends strongly on all of the three parameters. Figure 1 shows the effect of changing the equilibrium bond distance and vibrational frequency for fixed anharmonicity. The number of vibrational levels that must be addressed to have less than one part-per-million population loss is very sensitive to differences in the ground- and excited-state bond lengths. In the ideal case of perfectly diagonal FCFs, a single laser is required; a difference of only 5% in the bond lengths requires four to seven lasers.
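For reference, the parameterization assumed in this discussion is the standard Morse form (the relations below are textbook results, quoted here only as a reminder, with μ the reduced mass):

\[
V(r) = D_e\left(1 - e^{-a(r-r_e)}\right)^2,
\qquad
\omega_e = \frac{a}{2\pi c}\sqrt{\frac{2D_e}{\mu}},
\qquad
\omega_e x_e = \frac{\hbar a^2}{4\pi c\,\mu},
\]

so that the three parameters (r_e, ω_e, ω_e x_e) varied or held fixed in figure 1 fully specify the curve.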
2.2. Elimination of rotational branching
In the dipole approximation, each emitted photon can change the total angular momentum of the molecule by 0 or ±1ℏ. The emission results in the population of additional states through Stokes and anti-Stokes processes. Each additionally populated angular momentum state will need to be addressed in order to achieve a high scattering rate and efficient laser cooling.
The number of angular momentum states populated depends on which Hund's case, (a) or (b), is important [42]. For the purpose of our discussion here, we will initially ignore spin (equivalent to Hund's case (b)) and later look at complications that occur in Hund's case (a). We will also ignore nuclear spin and the resulting additional hyperfine states and transitions (the resultant hyperfine splitting is sufficiently small that, in practice, these additional states can easily be addressed by the use of an electro-optical modulator (EOM) [1, 2]).
Λ-doubling breaks rotational symmetry about the bond axis, resulting in electronic states of well-defined symmetry relative to the plane defined by the axis of nuclear rotation and the bond axis. A key feature of the SrF experiment [1, 2] is pumping from the ground state v' = 0, N' = 1, K' = 1 with negative parity to the excited state v = 0, N = 0, K = 1 with positive parity, where N is the rotational angular momentum of the molecule and K is the total angular momentum without spin. As a result, decay from the excited state is limited to those v allowed by FCFs and only via the Q branch. The desired transitions and the expected spontaneous emission channels are shown in figure 2.
Figure 1. The effect of excited-state vibrational constant and equilibrium bond length on the number of optically pumped vibrational states required in order to have less than one part-per-million population loss after spontaneous emission. Calculations are based on a probabilistic analysis of FCFs of a hypothetical A-X system. White space represents parameter regions where at least seven vibrational bands must be addressed. The ground- and excited-state PECs are represented by Morse functions with a fixed anharmonicity ωexe of 50 cm⁻¹. The ground state, X, has an equilibrium distance Re of 1.32 Å and a vibrational constant ωe of 2750 cm⁻¹. Laser cooling with one laser is possible only in a narrow region (in black) where the A-state spectroscopic constants are similar enough to Re and ωe.
This method requires the control of the initial angular momentum population, as not all the molecules in the experiment start out with the desired N or J. However, the preparation of a specific initial state of the molecular ion can be achieved by laser cooling the internal states [24-26], internal state cooling by interaction with cold neutral atoms [43], using a cryogenic ion trap [44] to limit the number of energetically accessible states and the rate at which they mix or using state-selective photoionization of a neutral molecule [23].
Additional rotational states can be populated by blackbody redistribution between different rotational states or by the decay of vibrationally excited states through spontaneous emission. This process does not occur for homonuclear diatomic ions. For heteronuclear diatomic ions, blackbody redistribution results in an increase in the number of required repumps if the redistribution occurs on cooling timescales. For BH+ and AlH+ we find that rotational redistribution requires tens of seconds, which is longer than the cooling time, and it is included in our simulation (see section 3.3). Vibrational radiative relaxation can limit the number of vibrational bands that must be addressed, at the cost of repumping from a larger set of rotational states due to the diffusion in J during the vibrational cascade.
Figure 2. Energy level schematic of the proposed ²Π-²Σ⁺ transition. Red dashed levels are odd parity and blue solid levels are even parity. K refers to the total angular momentum without spin, and v and v' label the vibrational levels of the excited and ground states. Starting in the K = 1 state and exciting the Q-branch (solid arrows) results in a closed transition modulo the vibrational states (wiggly arrows).
If vibrational relaxation is sufficiently fast, a probabilistic FCF approach to determine repumping requirements is no longer valid, since the FCF approach assumes vibrational levels are infinitely long-lived. For example, if we cycle on the v' = 0 → v = 0 transition and repump on the v' = 1 → v = 0 transition, the ion will occasionally decay via the v = 0 → v' = 2 → v' = 1 and the v' = 1 → v' = 0 channels, resulting in parity flips and the occupation of higher rotational states.
A simple figure of merit (FOM) for the probabilistic FCF approach is calculated by comparing the total time required to Doppler cool the molecule with the vibrational decay rate, since all levels are approximately equally populated at the steady state. The FOM is given by A₀₀/(N A₁'₀'), in which A₀₀ is the excited-state decay rate, N is the total number of photons scattered to Doppler cool and A₁'₀' is the vibrational decay rate. Using an estimate of N ≈ 10⁵, we calculate an FOM of 1, 20 and 250 for BH+, AlH+ and SrF, respectively. We expect that the effect of vibrational decay is most significant for BH+, followed by AlH+, which have much smaller FOMs than does SrF. Our rate-equation simulations (see section 3.3) confirm this prediction, with vibrational decay in BH+ and AlH+ becoming important at short and long times, respectively, and vibrational decay in SrF unimportant on cooling timescales.
Figure 3. Energy level schematic of the proposed ²Π-²Σ⁺ transition showing transitions that result in either dissociation or occupation of a rotational state outside of the closed transition.
2.3. Other decay paths
The discussion so far has relied on the Born-Oppenheimer approximation and only considered dipole-allowed electronic transitions. These approximations reliably describe the molecules of interest for timescales up to a few microseconds. However, long ion-trap lifetimes require the consideration of slow processes driven by violations of the model.
Ideally, the A state is stable against dissociation. However, constant excitation of the molecule allows for rare events to occur, such as dissociation via the repulsive part of the ground state potential. This dissociation can be caused by spin-orbit coupling or L-uncoupling, which violates the Born-Oppenheimer approximation by mixing the ground and excited electronic state potentials [45]. In the case of L-uncoupling, it is possible to pick excited states such that symmetry effects forbid dissociation from occurring. Spin-orbit coupling also exists, but the calculated dissociation rate ranges from kHz to Hz and, to our knowledge, has never been measured. (In most molecular beam experiments these rare events are unobserved, since the dissociation timescales are long compared with laser-interaction timescales.) It is also possible that the cooling or repumping lasers can cause photodissociation by coupling the A state to a higher-lying dissociative state.
Additionally, transitions that are not electric-dipole allowed can occur due to mixing of parity states by electric fields. For a compensated ion trap, the stray DC electric field at the trap axis can be effectively canceled [46]. Population of opposite-parity states is inevitable in the long-time limit, due to magnetic dipole transitions (those of ΔJ = 0, ±1 terminate on states of the wrong parity). These transitions occur at a rate of 1 in 10⁵ or less, also requiring attention when forming a highly closed cooling cycle [3].
Figure 3 graphically summarizes all the mechanisms by which the diatomic molecular ion can leave the cycling transition, including optical photon emission to an excited vibrational state, followed by the emission of an infrared photon that transfers the population to a lower vibrational state. As mentioned in the previous subsection, while this process leads to diffusion in the rotational states and introduces states of additional parity, it may ease the experimental requirements by reducing the number of vibrational bands that must be addressed.
Figure 4. Born-Oppenheimer potential energy functions of the first two doublet electronic states of BH+ at the FCI(3e-)/aug-cc-pV5Z level. Black dots are the actual FCI calculation results, and the solid lines are splines with analytical functions, as described in the LEVEL 8.0 and BCONT 2.2 manuals [64, 65]. The exact numerical solution of the radial Schrödinger equation yields the plotted wave functions, which are used to calculate the various decay and emission rates described in the text.
3. Example laser cooling cycle for BH+ and AlH+
Both BH+ and AlH+ are well studied: emission from hollow cathode discharges [30, 47, 48] and chemiluminescent ion-molecule reactions [35, 49] have yielded detailed rotational constants for the first two vibrational levels of the X²Σ⁺ ground state, along with low-resolution spectra of a few diagonal vibrational bands. For the BH+ cation, 15 rovibrational energy levels of the ground state (v = 0-4 and N = 0-2) have been assigned by the extrapolation of photo-selected high-Rydberg series of neutral BH [32]. During those experiments, a very strong optical excitation of the ion core was observed when the energy of the scanning laser matched the A-X transition of the cation [50]. Photon absorption rates observed during this isolated core excitation were consistent with Einstein transition probability coefficients calculated theoretically by Klein et al [29]. Another favorable property of this molecular ion is that the first quartet states are energetically above the A-X system [31, 51],
eliminating the possibility of intervening electronic states of different spins to which the upper state could radiate and terminate the cycling transition. With all this information, it is possible to consider a laser cooling experiment for trapped BH+ ions. Similarly, we expect AlH+ to share most of these favorable characteristics.
3.1. Methods
3.1.1. Ab initio calculations for BH+ and AlH+. To study the processes outlined in section 2, potential energy functions and dipole moments for the X²Σ⁺, A²Π, B²Σ⁺ and a⁴Π states of BH+ and AlH+ are required. Additionally, the transition dipole moments and spin-orbit coupling matrix elements between these states are needed. For BH+, it is possible to solve the electronic Schrödinger equation exactly (within a basis) through full configuration interaction (FCI). By constraining the 1s orbital of B to be doubly occupied, a large aug-cc-pV5Z basis [52] can be used to obtain the potential energy functions. The spectroscopic constants obtained at this level of theory [FCI(3e-)/aug-cc-pV5Z] accurately reproduce the experimentally determined values (shown in table 1), and were computed by fitting a fourth-order polynomial to five energy points centered at Re and evenly spaced by 0.005 Å. For AlH+, FCI was not applicable because correlating the important 2s and 2p electrons of Al leads to too many determinants in the expansion of the wave function. Coupled-cluster methods and their equation-of-motion variants provide a practical alternative [53, 54]. In particular, CC3 and EOM-CC3 [55] were applied with an aug-cc-pCVQZ basis set [52]. Again, the 1s orbital was constrained to be doubly occupied. The FCI and EOM-CC3 computations were performed with PSI 3.4 [56].
Dipole moments, transition dipole moments and spin-orbit coupling matrix elements were obtained from multireference CI wavefunctions (MRCI). The MRCI wavefunctions add single and double excitations to a state-averaged complete active space self-consistent field (SA-CASSCF) reference [57, 58]. The SA-CASSCF orbitals were optimized with an active space consisting of three electrons in the 2σ 1πx 1πy 3σ 4σ 5σ and 4σ 2πx 2πy 5σ 6σ 7σ orbitals for BH+ and AlH+, respectively. In the MRCI wavefunctions, single and double excitations from the 1s B orbital and from the 2s and 2p Al orbitals were allowed. Again, the aug-cc-pV5Z and aug-cc-pCVQZ basis sets were used for BH+ and AlH+, respectively. The MRCI computations were performed with MOLPRO [59].
All of the PECs, dipole moments, transition dipole moments and spin-orbit matrix elements (doublet-doublet, quartet-doublet and quartet-quartet) generated by the calculations described in this section have been included as a supplementary file (available from
3.1.2. The Einstein A and B coefficients. The Einstein A coefficients connecting two states are calculated using
\[
A_{u,l} = \frac{\omega_{u,l}^{3}}{3\pi\epsilon_0\hbar c^{3}}\, g_{u,l}\, \frac{S_{u,l}}{2J_u+1}\, D_{u,l}^{2}, \qquad (1)
\]
where ω_{u,l}/2π is the transition frequency, c is the speed of light, ε₀ is the permittivity of free space, g_{u,l} is a degeneracy factor, J_u is the total angular momentum of the upper state, S_{u,l} is the Hönl-London factor, and D_{u,l} is the transition dipole-moment matrix element.
Table 1. Spectroscopic constants for the X²Σ⁺ and A²Π states of BH+ and AlH+. Previous theoretical values and the most precise experimental values, where available, are also shown for comparison. Re is the equilibrium bond length; Be, De and αe parameterize the rotational energy level spacings [42]; ωe and ωexe are the harmonic and anharmonic frequencies of the potential at equilibrium; and D0 is the dissociation energy. The units are cm⁻¹ unless otherwise specified.
State Method Reference Re (Å) Be De αe ωe ωexe D0 (eV)
X²Σ⁺ MRDCI [28] 1.2059 12.552 0.542 2515.2 85.86 1.86
MC-SCF [29] 1.21 12.48 0.475 2492 64 1.78
MC-SCF + CI [31] 1.208 12.53 0.393 2594.8 74.94 1.92
MC-SCF [60] 1.2039 12.57 0.00135 0.426 2422.7 74.61
CCSDT(FC)-CBS / mix [61] 1.2047 12.58 2524.7 64.4 1.99
MRDCI [62] 1.204 12.59 2548 74.8 1.97
MRCISD/aug-cc-pV5Z 1.20498 12.574 0.00125 0.47 2518.4 64.7
FCI(3e~)/aug-cc-pV5Z 1.20484 12.578 0.00125 0.47 2519.4 64.6 1.99
Experiment [30] 1.20292 12.6177 0.001225 0.4928 2526.8a 61.98a 1.95 ± 0.09b
A²Π MRDCI [28] 1.2158 11.649 0.501 2228.3 66.88 3.33
MC-SCF [29] 1.253 11.62 0.467 2212 52 3.2
MC-SCF + CI [31] 1.247 11.76 0.41 2351.8 71.38 3.19
MRDCI [62] 1.247 11.75 2257 52 3.3
MRCISD/aug-cc-pV5Z 1.24770 11.728 0.00128 0.46 2245.0 53.4
FCI(3e-)/aug-cc-pV5Z 1.24648 11.751 0.00128 0.46 2251.6 53.6 3.35
Experiment [30] 1.24397 11.7987 0.4543
B²Σ⁺ MRDCI [28] 1.9036 5.037 0.075 1263 33 1.28
SDT-CI [28] 1.9031 5.04 0.074 1258 31 1.28
MC-SCF [29] 1.912 4.99 0.097 1235 32 1.26
MC-SCF + CI [31] 1.91 5.01 0.073 1206 19 1.24
MRDCI [62] 1.889 5.12 1285 32 1.31
MRCISD/aug-cc-pV5Z 1.90180 5.048 0.000322 0.074 1264.4 30.5
FCI(3e~)/aug-cc-pV5Z 1.90116 5.051 0.000323 0.075 1264.3 29.9 1.35
Table 1. Continued.
State Method Reference Re (Å) Be De αe ωe ωexe D0 (eV)
X²Σ⁺ MRDCI [34] 1.6098 6.698 0.318 1684 81 0.666
MC-SCF [29] 1.608 6.71 0.317 1680 71 0.74
MCQDPT [37] 1.6 6.78 0.3945 1600 82 0.92
MRCISD/aug-cc-pCVQZ 1.60802 6.711 0.000422 0.30 1692.4 66.2
CC3/aug-cc-pCVQZ 1.60543 6.732 0.000434 0.31 1677.0 69.1 0.73
Experiment [36] 1.605 6.736 0.0004469 0.382 1654 74
A²Π MRDCI [34] 1.6047 6.741 0.248 1727 54 1.75
MC-SCF [29] 1.609 6.7 0.251 1683 42 1.78
MCQDPT^c [37] 1.6 6.7779 0.2668/0.2489 1703/1743 47/45 1.86/1.82
MRCISD/aug-cc-pCVQZ 1.60375 6.746 0.000412 0.25 1726.3 33.2
EOM-CC3/aug-cc-pCVQZ 1.59126 6.853 0.000416 0.23 1759.3 42.8 1.90
Experiment [36] 1.595 6.817 0.0004152 0.243 1747 43
B²Σ⁺ MRDCI [34] 2.0582 4.097 0.046 1322 19 1.4
MC-SCF [29] 2.063 4.08 0.043 1326 21 1.4
MRCISD/aug-cc-pCVQZ 2.05914 4.092 0.000155 0.039 1330.5 21
EOM-CC3/aug-cc-pCVQZ 2.03038 4.209 0.000158 0.039 1375.9 23 1.43
^a [32]; an ωeye of ~ −2 cm⁻¹ has been determined experimentally. ^b [63].
^c Values are for the ²Π₁/₂/²Π₃/₂ states, respectively.
The B coefficient is calculated from the A coefficient using
\[
B_{u,l} = \frac{\pi^{2} c^{3}}{\hbar\,\omega_{u,l}^{3}}\, A_{u,l}. \qquad (2)
\]
3.1.3. The rate-equation model. A rate-equation approach was used to model the population dynamics of BH+ and AlH+, allowing the determination of the average number of Doppler-cooling photons scattered before a molecule is pumped into a dark state (ignoring predissociation and photodissociation described in section 3.3.3). A similar approach has been used in internal-state cooling [24-26]. In particular, we solved
\[
\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}t} = M\,\mathbf{P}, \qquad (3)
\]
where P is a vector consisting of the N rovibrational levels we include in our model, arranged in order of increasing energy, and M is an N × N matrix consisting of the Einstein A and B coefficients which couple the different rovibrational states. Explicitly, the population of a given state follows
\[
\frac{\mathrm{d}P_i}{\mathrm{d}t} = -\sum_{j=1}^{i-1} A_{ij} P_i \;-\; \sum_{j=1}^{i-1} B_{ij}\,\rho(\omega_{ij})\, P_i \;-\; \sum_{j=i+1}^{N} B_{ij}\,\rho(\omega_{ij})\, P_i \;+\; \sum_{j=i+1}^{N} A_{ji} P_j \;+\; \sum_{j=1}^{i-1} B_{ji}\,\rho(\omega_{ij})\, P_j \;+\; \sum_{j=i+1}^{N} B_{ji}\,\rho(\omega_{ij})\, P_j. \qquad (4)
\]
Here, the ith and jth states are connected by the Einstein coefficients denoted by A_ij, B_ij and B_ji, which correspond to spontaneous emission, stimulated emission and absorption, respectively, and ρ(ω_ij) is the spectral energy density at the transition frequency ω_ij. The Einstein coefficients were calculated using the PECs and dipole moments discussed in section 3.1.1.
The average number of Doppler-cooling photons scattered was calculated by numerically solving equation (3) and multiplying the population in the excited state of the cycling transition by its spontaneous emission rate. Counting the photons emitted into the ground state from the excited state, regardless of whether the excited state was populated by a cycling-laser photon or a repumping-laser photon, is equivalent to counting the number of cooling-cycle absorption events. A simple estimate of the number of scattering events required for cooling can be obtained by considering the average energy lost per scattering event. For a laser detuning Γ/2 from resonance, where Γ is the natural linewidth, the average energy lost per scatter is ΔE = ℏΓ/2. The linewidths of BH+ and AlH+ are Γ = 2π × 0.7 MHz and Γ = 2π × 2.6 MHz, respectively. Starting from T = 300 K, cooling to mK requires N ≈ 10⁶ scattering events for both BH+ and AlH+ for our laser detuning. We may also take advantage of the Doppler width at higher temperatures by detuning the laser further from resonance to increase the energy lost per scattering event, reducing the laser detuning as the ions are cooled. Using this method, we estimate that a lower limit of N ≈ 10⁴ scattering events are required to cool to mK.
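As an illustration of how equation (3) can be propagated and the cycling photons counted, the following is a minimal forward-Euler sketch (hypothetical names; this is not the code used for the simulations reported here, which would also need to handle the very different timescales involved, e.g. with adaptive stepping):

    ! Minimal sketch of integrating dP/dt = M P and counting photons
    ! emitted on the cycling transition.  n is the number of levels,
    ! iexc the index of the excited cycling level, and a_cyc the
    ! spontaneous emission rate of that level on the cycling transition.
    ! The matrix m would be assembled elsewhere from Einstein coefficients.
    subroutine integrate_rates(n, m, p, iexc, a_cyc, dt, nstep, nphot)
      implicit none
      integer, intent(in)    :: n, iexc, nstep
      real(8), intent(in)    :: m(n,n), a_cyc, dt
      real(8), intent(inout) :: p(n)
      real(8), intent(out)   :: nphot
      real(8) :: dp(n)
      integer :: istep, i, j
      nphot = 0.d0
      do istep = 1, nstep
         do i = 1, n
            dp(i) = 0.d0
            do j = 1, n
               dp(i) = dp(i) + m(i,j)*p(j)
            end do
         end do
         p = p + dt*dp                       ! forward-Euler step
         nphot = nphot + a_cyc*p(iexc)*dt    ! photons scattered on the cycle
      end do
    end subroutine integrate_rates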
Finally, it should be noted that a rate-equation approach does not account for coherence effects that can result in dark states. For example, in atomic systems with a low-lying D state, two lasers are typically used for Doppler cooling to address an S → P (cooling) transition and a D → P (repumping) transition. When both lasers are detuned from resonance by similar amounts, dark resonances occur and cycling terminates [66]. The cooling and repumping transitions in our molecular system also exhibit a similar Λ-system (see figures 5 and 6). Experimentally, these dark resonances can be avoided either by explicitly varying the laser detuning or by shifting the energy levels by different amounts with an externally applied time-varying field [66, 67].
Figure 5. Energy level schematic diagram showing cooling and repumping transitions for the case of negligible vibrational and rotational relaxation within the excited- and ground-state manifolds (not to scale). Ground and excited states are labeled as |g^{p'}_{v',J'}> and |e^{p}_{v,J}>, respectively.
3.2. Cooling scheme
The general cooling scheme, ignoring spin, was discussed in section 2 and figure 2. The inclusion of spin results in spin-rotation splitting for the ²Σ⁺ states and spin-orbit splitting of the ²Π states [42]. We label the X²Σ⁺ and A²Π₁/₂ states as |g^{p'}_{v',J'}> and |e^{p}_{v,J}>, respectively. The Zeeman substructure is explicitly ignored in our model; however, the resulting reduction in the overall scattering rate due to differing multiplicities of the ground and excited states is included as a degeneracy factor in the Einstein A and B coefficients. In practice, an externally applied magnetic field of a few Gauss is sufficient to lift the degeneracy, and the laser polarization can be chosen to ensure that all Zeeman sublevels are addressed [66, 67]. Additionally, both AlH+ and BH+ have hyperfine structure, which is ignored in our model.
We assume that population begins in the |g^-_{0,1/2}> state and a laser drives transitions to the |e^+_{0,1/2}> state. From the excited state, population may decay into either the |g^-_{v',1/2}> state or the |g^-_{v',3/2}> state, with decreasing probability for increasing v'. The cooling transition, corresponding to v' = 0, requires driving both the |g^-_{0,1/2}> ↔ |e^+_{0,1/2}> and |g^-_{0,3/2}> ↔ |e^+_{0,1/2}> transitions, as shown in figure 5 with labels C1 and C2. The ground states belong to the same rotational level (K' = 1), so this cooling scheme is equivalent to the one discussed in section 2.2.
Figure 6. Energy level schematic diagram showing cooling and repumping transitions for the case of non-negligible vibrational and rotational transitions within the excited- and ground-state manifolds (not to scale).
If vibrational levels with v' ≥ 1 are long-lived, then decay between vibrational levels, which results in parity flips, can be ignored. In this case, we drive |g^-_{v',1/2}> ↔ |e^+_{v'-1,1/2}> and |g^-_{v',3/2}> ↔ |e^+_{v'-1,1/2}> transitions to pump population back into the cycling transition, as shown by the R1, R2 and unlabeled arrows in figure 5.
However, when decay between vibrational levels is fast, the resulting parity flips may populate the |g^+_{v',J'}> states, so lasers connecting these states to an excited state must also be included to avoid population buildup in the even-parity states. Multi-stage vibrational decay also results in angular-momentum diffusion. For instance, decay from the |e^+_{1/2}> state populates the |g^-_{3/2}> state, which decays into the |g^+_{5/2}> state. Population is pumped out of states with J' > 3/2 by driving |g^±_{J'}> ↔ |e^∓_{J'-1}>. This cooling scheme is illustrated in figure 6.
In figures 7 and 9, we show the rotationless lifetimes for BH+ and AlH+, respectively. Since both AlH+ and BH+ have relatively high decay rates (up to 100 s-1) between vibrational levels, we find that they require cycling and repumping on both parities. In the following section, we examine in detail each molecule separately, and determine their repumping requirements. Predissociation and photodissociation rates are considered in the last subsection.
3.3. Simulation results
Figure 7. Energy level schematic diagram showing rotationless lifetimes of different BH+ levels (not to scale). The lifetimes for A → X transitions are denoted by τ and the lifetimes for transitions within the ground state are denoted by τ'. All lifetimes are calculated using the ab initio PECs and dipole moments discussed in section 3.1.
3.3.1. BH+. In figure 8, the total number of scattered photons for different laser configurations is plotted. Population is initially in the |g^-_{0,1/2}> state. The simulation includes dipole-allowed transitions between electronic states, and vibrational and rotational transitions within the X²Σ⁺
state. Blackbody radiation is also included in the model, for a temperature of 300 K. In case (a-i), two wavelengths are used to drive the odd-parity cycling transitions: C1 and C2 in figures 5 and 6. Population is quickly pumped into the v' = 1 manifold, and a total of N ≈ 70 photons are scattered. In case (a-ii), an additional two wavelengths are introduced to drive the even-parity cycling transitions: C3 and C4 (see figure 6). Again, population is quickly pumped into the v' = 1 manifold; however, at longer times (t ≳ τ'_{10}), population leaks back into the even-parity ground states and is driven by the even-parity cycling transitions, resulting in a total of N ≈ 380 scattered photons. The delay between successive photon count plateaus is due to population building up in the v' = 1 manifold.
Case (b-i) includes the transitions from case (a-i), plus an additional two wavelengths to drive the odd-parity repumping transitions: R1 and R2 in figures 5 and 6. N ≈ 8000 photons are scattered, with the dominant dark-state leak being the even-parity ground states. Case (b-ii) extends case (a-ii) with an additional four wavelengths to drive even- and odd-parity repumping transitions: R1-R4. A total of N ≈ 21 000 photons are scattered, and the count plateaus just above the swept-detuning threshold.
As predicted in section 3.2, vibrational decay plays a significant role in depopulating the higher-lying vibrational ground states, resulting in parity flips and rotational diffusion. In case (b-ii) the parity-flip issue is addressed, but population builds up in the |g±,5/2⟩ states. Case (c-ii) includes the same wavelengths as case (b-ii) plus two additional wavelengths to drive the odd- and even-parity P-branch repumps: PR1 and PR2 (see figure 6). The photon count increases to N ≈ 600 000, which is well above the swept-detuning threshold required for significant cooling from room temperature.
Figure 8. Plot of the integrated cycling-transition photon count for the BH+ simulation using different laser configurations, driving the subsets of transitions shown in figure 6. The hatched region corresponds to the upper and lower limits for Doppler cooling to mK temperatures, corresponding to fixed and swept detuning, respectively.
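As a rough cross-check on the hatched threshold region, the number of photon recoils needed to remove a room-temperature thermal velocity can be estimated from the recoil of a single cycling photon. The sketch below assumes a ¹¹BH⁺ mass of about 12 u and the ≈379 nm cycling wavelength quoted in section 3.3.3; it ignores trap confinement and the fixed- versus swept-detuning distinction, so it reproduces only the order of magnitude.

```python
import math

amu, kB, h = 1.660539e-27, 1.380649e-23, 6.62607015e-34  # SI constants

m   = 12.0 * amu    # approximate 11BH+ mass
T   = 300.0         # initial temperature (K)
lam = 379e-9        # cycling-transition wavelength (m)

v_th     = math.sqrt(3.0 * kB * T / m)  # rms thermal speed (~790 m/s)
v_recoil = h / (m * lam)                # single-photon recoil velocity (~0.09 m/s)
print("photons needed ~", round(v_th / v_recoil))  # of order 10^4
```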
3.3.2. AlH+. The simulation of AlH+ is plotted in figure 10. Again, we assume that population starts in the |g−,1/2⟩ state. The simulation includes transitions between electronic states and within the ground electronic state, as well as blackbody radiation at 300 K. In case (a), four wavelengths are used to drive only the cycling transitions: C1-C4. A total of N ≈ 200 photons are scattered.
In case (b), an additional four wavelengths are used to drive the repumping transitions: R1-R4. There is a significant increase in counts (N ≈ 15 000) and a noticeable deviation from the BH+ results. At t ≈ 10⁻³ s, there is a reduction in rate resulting from population building up in the v′ = 2 level. For BH+ the rate of populating v′ = 2 is 55 times lower than the decay rate from v′ = 2 to v′ = 1 (τ₀₂/τ₂₁ = 55). For AlH+ the rate of populating v′ = 2 is 500 times higher than the decay rate from v′ = 2 to v′ = 1 (τ₀₂/τ₂₁ = 1/500), resulting in population trapping.
In case (c), an additional four wavelengths are applied to drive the repumping transitions: R5-R8. This results in a total of N ≈ 210 000 photon counts, which is well above the swept-detuning cooling threshold. Interestingly, the second vibrational repump is sufficient for the cooling of AlH+, whereas BH+ instead requires optical pumping of the |g±,5/2⟩ states.
In case (d), ΔJ = −1 repumping transitions are included: PR1 and PR2. The resultant photon count is N ≈ 700 000, similar to the number of counts obtained in case (c-ii) of BH+. The results for AlH+ are thus similar to those for BH+ in that both require repumping of both parities, due to vibrational decay; however, AlH+ differs in that it requires an additional vibrational repump while not requiring additional rotational repumping.
Figure 9. Energy level schematic showing rotationless lifetimes of different AlH+ levels (not to scale). The lifetimes for A → X transitions are denoted by τ, and the lifetimes for transitions within the ground state are denoted by τ′. All lifetimes are calculated using the ab initio PECs and dipole moments discussed in section 3.1.
" 10"6 10"5 10"4 10"3 10'2 10"1 10° io1
time (s)
Figure 10. Plot of the integrated photon count for the AlH+ simulation using different laser configurations, driving the subsets of transitions shown in figure 6. The hatched region corresponds to the upper and lower limits for Doppler cooling to mK temperatures, corresponding to fixed and swept detuning, respectively.
3.3.3. Calculation of predissociation and photofragmentation rates. The cooling scheme requires a continuous repopulation of the vibrational ground level of the A²Π excited electronic state. One possible mechanism that stops the cooling cycle is isoenergetic predissociation of the molecular ion by coupling of the bound excited state to the dissociative continuum of the X²Σ⁺ repulsive wall. Bound and continuum wavefunctions can interact via spin-orbit and rotational-orbit couplings. The expression for such predissociation rates, in units of s⁻¹, is obtained from Fermi's golden rule [64]:

k_s(v, J) = (2π/ħ) |⟨ψ_{E,J′}(R)| M_s(R) |ψ_{v,J}(R)⟩|²,   (5)
where M_s(R) is a term in the molecular Hamiltonian that is neglected in the Born-Oppenheimer approximation and describes the coupling between the initial and final electronic states. The initial-state wavefunction |ψ_{v,J}(R)⟩ is space-normalized and the continuum wavefunction |ψ_{E,J′}(R)⟩ is energy-normalized [45]. The calculation uses a one-dimensional (1D) density of states in the bond coordinate to determine the energy normalization, ρ(E) ∝ √(μ/(E − D)), where μ is the reduced mass and D is the dissociation energy of the final electronic state [64].
The predissociation rate depends strongly on the slope of the repulsive part of the dissociative potential. In order to estimate the uncertainty in the predissociation calculations, we first calculate the overlap integral of the wavefunctions and multiply by a bond-length-independent spin-orbit coupling of 13.9 cm⁻¹ [30]. For BH+, by varying the details of the bound-state wavefunction, we find that the predissociation rate varies by more than four orders of magnitude, from 10² to 10⁻² s⁻¹. Representing the A²Π(v = 0) wavefunction by a symmetric Gaussian from the solution of the harmonic oscillator yields the most disastrous scenario, 4.4 × 10² s⁻¹. Using the solution of a Morse potential gives 6.2 × 10⁻² s⁻¹. Combining the continuum and bound wavefunctions computed using BCONT 2.2 and LEVEL 8.0, respectively, and inputting the FCI(3e⁻)/aug-cc-pV5Z A²Π and X²Σ⁺ PECs, the predissociation rate is 1.6 × 10⁻¹ s⁻¹ (potentials and wavefunctions shown in figure 4). Including the spin-orbit coupling function calculated in section 3.1.1, as prescribed in equation (5), the predissociation rate is 0.03 ± 0.02 s⁻¹. For AlH+ (using EOM-CC3 potential energy functions and MRCI spin-orbit coupling matrix elements), the predissociation rate is 0.2 ± 0.1 s⁻¹. The quoted uncertainties come from interpolation noise in both the repulsive part of the X²Σ⁺ state and the piecewise polynomial used to fit the R-parametrized spin-orbit couplings. This interpolation uncertainty is negligible (of order 10 parts per million) for the calculation of the optical decay rates reported elsewhere in this paper.
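To make the quoted spread concrete, the sketch below evaluates the golden-rule rate of equation (5) for the R-independent spin-orbit coupling of 13.9 cm⁻¹, treating the squared bound-continuum overlap as a free parameter. The overlap values are placeholders chosen only to span the sensitivity discussed above; they are not outputs of the LEVEL 8.0/BCONT 2.2 calculations.

```python
import math

c_cm = 2.99792458e10  # speed of light (cm/s)

def predissociation_rate(H_so_cm, overlap_sq_per_cm):
    """Equation (5) with an R-independent coupling: k = (2*pi/hbar)|H_so|^2 |<E|v>|^2.
    With H_so in cm^-1 and the energy-normalized overlap squared in 1/cm^-1,
    the conversion of (2*pi/hbar)*(h*c*value) to s^-1 is 4*pi^2*c (c in cm/s)."""
    return 4.0 * math.pi**2 * c_cm * H_so_cm**2 * overlap_sq_per_cm

H_so = 13.9  # cm^-1, bond-length-independent value from [30]
for ov2 in (2e-12, 2e-14, 3e-16):   # placeholder overlaps (1/cm^-1)
    print("overlap^2 = %.0e  ->  k = %.1e s^-1" % (ov2, predissociation_rate(H_so, ov2)))
```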
Another problematic event is excitation from the A²Π state to either bound or dissociative B²Σ⁺ levels by absorption of a cooling or repump photon. For BH+, PECs from two different ab initio levels of theory place the dissociation limit, V_lim, either below or above the energy accessible by the 379 and 417 nm photons absorbed from the A state (see table 2 and figure 11). Although both results agree with predictions based on experimental observations (within a 2000 cm⁻¹ uncertainty), these calculations yield two different photon absorption scenarios. The first (MC-SCF) would photodissociate the molecule, and the second (FCI) would steal population from the cooling cycle into a bound, vibrationally excited level of the B²Σ⁺ state. For AlH+, MC-SCF and EOM-CC3 predict V_lim to be below the total energy accessed by any of the three required cooling and repump wavelengths (360, 381 and 376 nm).
Table 2. Energies and rate coefficients relevant to possible photofragmentation events. ħω(T), A(T) and I_s(T) correspond to the photon energy, the Einstein A coefficient and the saturation intensity for each transition T, as labeled in figures 5 and 6. The BH+ cooling schemes do not involve the transition R5. M⁺(³P)−M⁺(¹S) is the energy difference between the lowest singlet and triplet states of B⁺ or Al⁺; it should correspond to the energy difference between the dissociative asymptote of the X²Σ⁺ ground state and the common dissociative asymptote, V_lim, of the A²Π and B²Σ⁺ excited states. E_Total is the energy of the A²Π (v = 0, J = 1/2) state plus the photon energy of the transition T. Photofragmentation does not occur in the FCI(3e⁻)/aug-cc-pV5Z calculation for BH+ (see text), but the X²Σ⁺ D_e does not preclude either possibility, given current experimental uncertainties.
                                 BH+                                        AlH+
                                 MC-SCF^a     FCI          Observed^b       MC-SCF^a     EOM-CC3/aug-cc-pCVQZ   Observed^c
X²Σ⁺(v = 0, K = 1) [cm⁻¹]        1223         1268         1272             810          848
X²Σ⁺(v = 2, K = 1) [cm⁻¹]                                                   3669         3702
ħω(C1) [cm⁻¹]                    27 594       26 547       26 356           28 521       27 747                 27 673
ħω(R1) [cm⁻¹]                    25 239       24 163       23 960           27 001       26 225
ħω(R5) [cm⁻¹]                                                               27 255       26 566
A(C1) [s⁻¹]                      2.9 × 10⁶    2.8 × 10⁶                     1.1 × 10⁷    1.1 × 10⁷
A(R1) [s⁻¹]                      3.8 × 10⁴    3.9 × 10⁴                     1.3 × 10⁵    2.9 × 10⁵
A(R5) [s⁻¹]                                                                 4.8 × 10⁵    8.7 × 10⁵
I_s(C1) [photons/(cm² s)]        2.3 × 10¹⁵   2.0 × 10¹⁵                    9.6 × 10¹⁵   8.8 × 10¹⁵
I_s(R1) [photons/(cm² s)]        2.6 × 10¹³   2.4 × 10¹³                    1.0 × 10¹⁴   2.1 × 10¹⁴
I_s(R5) [photons/(cm² s)]                                                   3.7 × 10¹⁴   6.4 × 10¹⁴
X²Σ⁺ D_e [cm⁻¹]                  15 645       17 261       17 000 ± 2000    5998         6694                   ≈ 6300
M⁺(³P)−M⁺(¹S) [cm⁻¹]             38 068       37 546       37 342           37 003       37 206                 37 454
A and B V_lim [cm⁻¹]             53 713       54 807       54 450 ± 2000    43 000       43 900                 ~43 750
E_Total(C1) [cm⁻¹]               56 411       54 362       53 984           57 852       56 342
E_Total(R1) [cm⁻¹]               54 056       51 978       51 588           56 332       54 820
E_Total(R5) [cm⁻¹]                                                          58 179       56 834
σ_abs(C1) [cm²]                  1.1 × 10⁻²⁰                                7.2 × 10⁻²³  3.5 × 10⁻²²
σ_abs(R1) [cm²]                  7.5 × 10⁻²¹                                1.4 × 10⁻²²  7.1 × 10⁻²²
σ_abs(R5) [cm²]                                                             5.4 × 10⁻²²  2.4 × 10⁻²¹
k_diss(C1) [s⁻¹]                 2.6 × 10⁻⁵                                 6.9 × 10⁻⁷   3.1 × 10⁻⁶
k_diss(R1) [s⁻¹]                 1.9 × 10⁻⁷                                 1.4 × 10⁻⁸   1.5 × 10⁻⁷
k_diss(R5) [s⁻¹]                                                            2.0 × 10⁻⁷   1.6 × 10⁻⁶
^a [29].   ^b [30, 32, 63].   ^c [47, 63].
Figure 11. Sensitivity of problematic two-photon channels to precise details of the molecular structure. Potential energy functions of the relevant electronic states of BH+ are plotted at (a) the MC-SCF level from [29] and (b) the FCI(3e⁻)/aug-cc-pV5Z level. Black dots are the actual ab initio results and the solid lines are splines with analytical functions described in the BCONT 2.2 and LEVEL 8.0 manuals [64, 65]. The wavefunctions shown, which are used to calculate photofragmentation or photoabsorption rates, are positioned at the eigenvalues that solve the radial Schrödinger equation. The length of the arrows represents the actual experimental energy of the C1 and R1 photons.
The photodissociation cross-section (in cm² per molecule) is [68]

σ(ν̃; v, J) = (2π²ν̃ / 3ε₀hc) (S_{J′} / (2J + 1)) |⟨ψ_{E,J′}(R)| M_s(R) |ψ_{v,J}(R)⟩|².   (6)
Here S_{J′} is the usual Hönl-London rotational intensity factor (S_{J′}/(2J + 1) is set to 1 in this section), and M_s(R) is the transition moment function, including the ratio of initial- to final-state electronic degeneracy factors. Allowed values of J′ are given by the usual rotational selection rule. Similarly to equation (5), this expression assumes that the continuum radial wavefunction amplitude is energy-normalized.
Assuming saturation intensities, calculated from the Einstein A coefficients of the main cycling and repump transitions, we obtain the photon flux I_s(T) for each transition, listed in table 2. The photodissociation rates are obtained by multiplying these photon fluxes by the cross-sections calculated from equation (6). Continuum and bound wavefunctions were calculated using BCONT 2.2 and LEVEL 8.0, respectively, using the EOM-CC3 PECs and the MRCI transition moment functions. For comparison, we also perform simulations using the MC-SCF potential energies and transition dipole moments from [29]. We determine that photodissociation occurs on a timescale of tens of hours for BH+ and several days for AlH+.
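Each k_diss entry in table 2 is simply the product of a saturation photon flux and the corresponding cross-section. The sketch below repeats that arithmetic for three entries; the flux/cross-section pairings are a reading of the table columns (the MC-SCF column for BH+ and the two AlH+ columns) and should be treated as assumptions, although they reproduce the quoted rates to within rounding.

```python
# k_diss = photon flux [photons/(cm^2 s)] x cross-section [cm^2].
# Flux/cross-section pairs read from table 2 (column pairing assumed):
cases = {
    "BH+  C1 (MC-SCF)": (2.3e15, 1.1e-20),   # table: 2.6e-5 s^-1
    "BH+  R1 (MC-SCF)": (2.6e13, 7.5e-21),   # table: 1.9e-7 s^-1
    "AlH+ C1"         : (8.8e15, 3.5e-22),   # table: 3.1e-6 s^-1
}
for label, (flux, sigma) in cases.items():
    print("%s: k_diss ~ %.1e s^-1" % (label, flux * sigma))
```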
For BH+, the FCI PECs predict that the cycling and repump wavelengths result in excitation from the A²Π state into a bound level of the B²Σ⁺ state. We believe that the probability of resonant absorption to the B²Σ⁺ state is small, due to the narrow laser linewidths relative to the vibrational energy spacing and due to the poor Franck-Condon overlap. The most intense B-A band reaching this energy region of the B²Σ⁺ state is the (11, 0) band, with an Einstein A coefficient of 1.6 × 10³ s⁻¹. Population of this state results in vibrational diffusion, which is not repumped in our cycling scheme.
Given the 2000 cm⁻¹ experimental uncertainty in the dissociation energy, we do not have enough information to determine whether the photoabsorption (predicted by FCI) or the photodissociation (predicted by MC-SCF) process will present a problem for Doppler cooling.
4. Prospects for cooling
Our simulations show that we require a total of eight wavelengths to Doppler cool BH+ and AlH+ (see figures 8 and 10); however, cooling does not necessarily require eight unique lasers. For both ions, two of the cycling transitions, C1 and C2, can be addressed by a single laser, since these two wavelengths differ only by the spin-rotation splitting (Δω_sr ≈ 540 MHz for BH+ [30] and Δω_sr ≈ 1.7 GHz for AlH+ [47]). This splitting can be bridged with a single laser source by using either an acousto-optic modulator (AOM) or an electro-optic modulator (EOM). The other two cycling transitions, C3 and C4, each require their own source since their splitting is large (Δω ≈ 1 THz for BH+ and Δω ≈ 2 THz for AlH+). The cycling transitions require CW lasers that are narrow relative to the electronic linewidth.
BH+ requires driving the transitions R1-4, corresponding to repumping v′ = 1. These transitions also span 1 THz, so they would require a total of three CW lasers, since the spin-rotation splitting between R2 and R3 can be bridged by an AOM. However, repumping on all four transitions can be achieved with a single femtosecond pulsed laser, with a typical bandwidth of 100 cm⁻¹ (3 THz). Using femtosecond repumps generally requires optical pulse shaping, in order to avoid R-branch (rotational heating) transitions. Pulse-shaped pumping has been used to achieve vibrational cooling of Cs₂ molecules with a resolution of 1 cm⁻¹ (30 GHz) [69], which is sufficient for rotationally resolved repumping of hydrides. A femtosecond laser can also be used to drive the P-branch repumping transitions PR1 and PR2; however, these transitions would require a second laser since the R1-4 transitions differ from PR1 and PR2 by more than 100 cm⁻¹ (3 THz). It should be noted that repumping of PR1 and PR2 may not be necessary, since the total photon count exceeds the number required for Doppler cooling in the swept-detuning case (see figure 8).
AlH+ requires driving transitions R1-4, corresponding to repumping v′ = 1, and R5-8, corresponding to repumping v′ = 2. The R1-4 repumps span 2 THz, which is within the bandwidth of a femtosecond laser. Transitions R5-8 also span 2 THz; however, they do not overlap with transitions R1-4 (Δω ≈ 9 THz), so they would require a second femtosecond laser.
If a broadband source for repumping is used in conjunction with CW lasers for cooling, the laser requirements for BH+ and AlH+ are reduced to five and six lasers driving 10 and 14 different transitions, respectively. For BH+, repumping transitions R1-R4 require one femtosecond laser, and repumping transitions PR1-2 require a second femtosecond laser. For AlH+, repumping transitions R1-R4 require one femtosecond laser, and repumping transitions R5-R8 require a second femtosecond laser. The PR1-2 wavelengths would require a third femtosecond laser, but they are not strictly necessary (see figure 10).
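The laser-count bookkeeping above amounts to grouping transition frequencies according to whether their splittings can be bridged by AOM/EOM sidebands or fall within a single femtosecond-laser bandwidth. A minimal sketch of that grouping is given below; the numerical offsets are placeholders consistent with the splittings quoted in the text (spin-rotation splittings of order 1 GHz, rotational/parity splittings of order 1 THz), not measured line positions.

```python
def count_sources(offsets_ghz, reach_ghz):
    """Greedy count of distinct sources, assuming one source can address all
    lines within reach_ghz of the lowest-frequency line in its group."""
    sources, anchor = 0, None
    for f in sorted(offsets_ghz):
        if anchor is None or f - anchor > reach_ghz:
            sources, anchor = sources + 1, f
    return sources

# Placeholder BH+ cycling-transition offsets (GHz): C1, C2 split by ~0.54 GHz,
# C3 and C4 each roughly a THz away from the others.
cycling = [0.0, 0.54, 1000.0, 2000.0]
print("CW sources for cycling:", count_sources(cycling, reach_ghz=5.0))   # 3

# Placeholder R1-R4 repump offsets (GHz), spanning ~1 THz, within a 3 THz fs bandwidth.
repumps = [0.0, 0.6, 1000.0, 1000.6]
print("fs sources for R1-R4  :", count_sources(repumps, reach_ghz=3000.0))  # 1
```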
The comb-like nature of the frequency components of a femtosecond laser can be problematic if a transition that we wish to drive falls between two comb lines (typically separated by 80 MHz). Various modulation techniques, either internal or external to the femtosecond laser, can be used to address this complication.
5. Conclusions
The key element for direct laser cooling of molecules [1-4] and molecular ions is diagonal FCFs. From our survey of a number of molecules (see the appendix), we determined BH+ and AlH+ to be favorable candidates, since a probabilistic FCF analysis (excluding vibrational decay) suggested that three and two vibrational bands, respectively, needed to be addressed to scatter 10⁴ photons. A detailed analysis of state population dynamics (including vibrational decay) has shown that BH+ requires addressing two vibrational bands while AlH+ requires addressing three. For both molecular ions we find that vibrational decays result in diffusion of both parity and angular momentum; thus, implementation of this cooling scheme requires addressing the molecules with a large number of repump wavelengths.
To minimize the total number of lasers required, a femtosecond laser can be used, since multiple transitions fall within the laser bandwidth. However, for many of these transitions, the frequency of excitation needs to first be experimentally determined with higher accuracy. The spectroscopy can be performed and the direct cooling technique optimized by initially trapping the molecular ions with laser-cooled atomic ions and monitoring the sympathetic heating and cooling of the atomic ion by the molecular ion [70]. Ultimately, dissociation of the molecular ion represents a fundamental limit for the continuous Doppler cooling of the species presented here.
The attempted laser cooling of BH+ and AlH+ will result in either the first directly laser-cooled molecular ions or an accurate measurement of rare predissociation events. Either way, it will improve our knowledge of the spectroscopic splittings of BH+ and AlH+ by an order of magnitude. Together with experimental dissociation rates, measurements of these splittings yield the most stringent test of advanced electronic structure theories, since the five-valence-electron open-shell diatomic BH+ is among the simplest stable molecules and serves as a prototype for the development and testing of such theories [28, 29, 31, 51], [61-63], [71-73].
More generally, any molecular ion can be sympathetically cooled, and the long trapping lifetime allows slow processes to be investigated. Thus, sympathetically cooled molecular ions could serve as a valuable tool in determining the rates of higher-order processes such as predissociation and photofragmentation, which are of concern for direct Doppler cooling applications.
Acknowledgments

We thank Eric Hudson for bringing to our attention the potential problem of predissociation and for helping us with initial calculations of predissociation rates. KRB and CRV were supported by Georgia Tech and the NSF (CHE-1037992). JHVN and BO were supported by the David and Lucile Packard Foundation (grant no. 2009-34713) and the AFOSR (grant no. FA 9550-10-1-0221).
Appendix. Potential candidates
Doppler cooling candidates are divided into two main categories. The first category corresponds to transitions in which the electron is excited from the HOMO to the LUMO, and the second category corresponds to transitions in which a hole moves from one orbital to another. Each category is subdivided according to the grouping on the periodic table. Groups that we discuss are chosen, based on chemical intuition, to form open-shells with vertical excitations that do not perturb the chemical bond. Probabilistic predictions in this appendix are based on FCFs obtained from Morse potentials defined by spectroscopic constants available in the literature. Detailed population-dynamics studies are needed for a more rigorous selection.
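Because the screening below rests on FCFs computed from Morse potentials, a minimal numerical sketch of that procedure is included here: two one-dimensional Morse Hamiltonians are diagonalized on a grid and FCFs are formed from overlaps of the resulting vibrational eigenfunctions. The Morse parameters and reduced mass are arbitrary illustrative values in ħ = 1 units, not the literature spectroscopic constants used for the candidates discussed below.

```python
import numpy as np

def morse(r, De, a, re):
    """Morse potential with depth De, range parameter a and equilibrium distance re."""
    return De * (1.0 - np.exp(-a * (r - re)))**2

def vib_states(V, r, mu, n_states):
    """Lowest vibrational eigenfunctions via a finite-difference Hamiltonian (hbar = 1)."""
    dr = r[1] - r[0]
    n = len(r)
    lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dr**2
    H = -lap / (2.0 * mu) + np.diag(V)
    _, vecs = np.linalg.eigh(H)
    return vecs[:, :n_states] / np.sqrt(dr)   # unit-normalized on the grid

r  = np.linspace(0.5, 8.0, 1200)
mu = 2000.0                                   # placeholder reduced mass
Vx = morse(r, De=0.15, a=1.0, re=2.3)         # illustrative ground (X) state
Va = morse(r, De=0.12, a=1.1, re=2.4) + 0.10  # illustrative excited (A) state

psix = vib_states(Vx, r, mu, 1)               # X-state v = 0
psia = vib_states(Va, r, mu, 4)               # lowest A-state vibrational levels
dr = r[1] - r[0]
fcf = [(np.sum(psia[:, v] * psix[:, 0]) * dr)**2 for v in range(4)]
print("FCFs from X(v=0) to A(v=0..3):", np.round(fcf, 3))
```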
A.1. Diatomic molecular ions with transitions that excite a single electron from the HOMO to the LUMO
A.1.1. Group 3 hydrides and halides. ScH+ has an interesting ²Δ-²Π system with very similar spectroscopic constants for both electronic states; however, the transition is deep in the IR (λ = 6 μm) [74]. The ground state of YH+ is ²Σ⁺ and the first excited state is ²Δ. The spectroscopic constants rₑ and ωₑ are similar, but the transitions fall in the IR at ≈ 3.3 μm [75]. Finally, LaH+ has a ²Σ⁺-²Δ system with similar spectroscopic constants, and the transition is at ≈ 5.1 μm [76].
A.1.2. Group 13 hydrides and halides. The open-shell ions formed from Group 13 elements and halogens are stable, with ²Σ⁺ ground states. Unfortunately, the ²Π potential curve is dissociative. This is the case for AlCl+, AlF+ [77, 78] and GaCl+. Although GaF+ is believed to have strong ionic character [79], it also has a dissociative first excited state [80]. The hydrides BH+ and AlH+ seem to be suitable for direct laser cooling and are the subject of the present paper. Experimental spectroscopic data for GaH+ are not yet available [81].
A.1.3. Bialkalis, alkali monohalides, alkali monohydrides and molecular ions formed by alkalis and Group 13. Li₂⁺ and LiNa⁺ have ²Σ⁺ ground electronic states with only one valence electron, and they have no valence correlation energy [82]. Their ground-state dissociation energies are relatively high (≈10 000 cm⁻¹) and they do not seem to have dissociative states that couple non-adiabatically to the system. The lack of spectroscopic data on these systems stops us from making further predictions, but it would not be surprising if a few of these molecular ions have diagonal FCFs. These systems cannot be ruled out as potential candidates. The same seems to be true for LiAl+, with a bound ground state and a dissociation energy of roughly 1 eV [82]. LiB+ has a weaker bond, and its A- and X-state spectroscopic constants are different [83].
With respect to the alkali monohydrides, although they are technically open-shell species with one unpaired electron spin, they have very weak bonds. This is expected, as the same unpaired electron must also provide the density that participates in the bond (e.g. KH+ [84] and NaH+ [85]).
A.1.4. Group 15 halides and hydrides. PH+ and PF+ would require repumping of six and ten vibrational bands to scatter 2 × 10⁴ and 9 × 10⁴ photons, respectively. They do not have nuclear spin and their ground states are ²Π. The first excited state of PH+ is ²Δ and the vertical transition from the ground state is at 381 nm. For PF+, the A²Σ-X²Π optical transition is in the UV at 282 nm [86].
A.2. Diatomic molecular ions with an unpaired electron in which an optical transition moves a hole to a bonding orbital
A.2.1. Group 17 hydrides. Using Morse potentials derived from spectroscopic constants compiled by Radzig and Smirnov [86], we find the A²Σ⁺-X²Π transitions in molecules such as HF+, HCl+ and HBr+ to have very un-diagonal FCFs. However, a recent theoretical proposal to measure molecular parity violation [40] points out that HI+ could have highly diagonal FCFs. This is attributed to the center of mass being very close to the iodine atom, so that the molecular orbitals have mostly atomic character. There is not enough spectroscopic information on the excited states of HI+ to design a laser cooling experiment [87].
A.2.2. Group 15 diatomic ions. We use FCFs from the spectroscopic constants compiled by Radzig and Smirnov [86] to test N₂⁺ and P₂⁺. The former could scatter 11 × 10⁴ photons if irradiated by five lasers. These transitions fall in the near IR, with the most energetic transition at 1091 nm. This homonuclear molecular ion has recently been loaded and sympathetically cooled in its rovibrational ground state [23]. Although P₂⁺ has a ²Πᵤ ground state with rₑ and ωₑ constants similar to those of its first ²Σ⁺ excited state, the FCFs show that one would have to address more than eight transitions with λ > 4.6 μm to scatter over 10⁴ photons.
A.2.3. Group 5 and 6 halides. These molecules have crowded excited-state manifolds, and the first electronic states have high multiplicity. VF+ has an A⁴Δ-X⁴Π system with very similar rₑ and ωₑ constants and a vertical transition at ≈ 7 μm. If it is possible to cope with the higher multiplicity (2S + 1 = 5), CrF+ might work, as it has an A⁵Π-X⁵Σ⁺ system with an IR transition (λ = 2 μm) [88].
[1] Shuman E S, Barry J F, Glenn D R and DeMille D 2009 Radiative force from optical cycling on a diatomic
molecule Phys. Rev. Lett. 103 223001
[2] Shuman E S, Barry J F and DeMille D 2010 Laser cooling of a diatomic molecule Nature 467 820-3
[3] Di Rosa M D 2004 Laser-cooling molecules—Concept, candidates and supporting hyperfine-resolved
measurements of rotational lines in the A-X(0,0) band of CaH Eur. Phys. J. D 31 395-402
[4] Stuhl B K, Sawyer B C, Wang D and Ye J 2008 Magneto-optical trap for polar molecules Phys. Rev. Lett.
101 243002
[5] Drewsen M, Jensen I, Lindballe J, Nissen N, Martinussen R, Mortensen A, Staanum P and Voigt D 2003 Ion
Coulomb crystals: a tool for studying ion processes Int. J. Mass Spectrom. 229 83-91
[6] Ryjkov V L, Zhao X and Schuessler H A 2006 Sympathetic cooling of fullerene ions by laser-cooled Mg+
ions in a linear rf trap Phys. Rev. A 74 023401
[7] Willitsch S, Bell M T, Gingell A D and Softley T P 2008 Chemical applications of laser- and sympathetically-
cooled ions in ion traps Phys. Chem. Chem. Phys. 10 7200-10
[8] Baba T and Waki I 1996 Cooling and mass-analysis of molecules using laser-cooled atoms Japan. J. Appl.
Phys. 35 L1134-7
[9] Drewsen M, Mortensen A, Martinussen R, Staanum P and Sorensen J L 2004 Nondestructive identification
of cold and extremely localized single molecular ions Phys. Rev. Lett. 93 243201
[10] Koelemeij J C J, Roth B, Wicht A, Ernsting I and Schiller S 2007 Vibrational spectroscopy of HD+ with 2-ppb
accuracy Phys. Rev. Lett. 98 173002
[11] Rosenband T et al 2007 Observation of the ¹S₀ → ³P₀ clock transition in ²⁷Al⁺ Phys. Rev. Lett. 98 220801
[12] Wolf A L, van den Berg S A, Gohle C, Salumbides E J, Ubachs W and Eikema K S E 2008 Frequency
metrology on the 4s251/2-4p2P1/2 transition in 40Ca+ for a comparison with quasar data Phys. Rev. A 78 032511
[13] Herrmann M, Batteiger V, Knunz S, Saathoff G and Udem Th Hansch T W 2009 Frequency metrology
on single trapped ions in the weak binding limit: the 3s1/2-3p3/2 transition in 24Mg+ Phys. Rev. Lett. 102 013006
[14] Batteiger V, Knunz S, Herrmann M, Saathoff G, Schussler H A, Bernhardt B, Wilken T, Holzwarth R, Hansch
T W and Udem Th 2009 Precision spectroscopy of the 3s-3p fine-structure doublet in Mg+ Phys. Rev. A 80 022503
[15] Wolf A L, van den Berg S A, Ubachs W and Eikema K S E 2009 Direct frequency comb spectroscopy of
trapped ions Phys. Rev. Lett. 102 223901
[16] Hojbjerre K, Offenberg D, Bisgaard C Z, Stapelfeldt H, Staanum P F, Mortensen A and Drewsen
M 2008 Consecutive photodissociation of a single complex molecular ion Phys. Rev. A 77 030702
[17] Offenberg D, Wellers C, Zhang C B, Roth B and Schiller S 2009 Measurement of small photodestruction
rates of cold, charged biomolecules in an ion trap J. Phys. B: At. Mol. Opt. Phys. 42 035101
[18] Roth B, Blythe P, Wenz H, Daerr H and Schiller S 2006 Ion-neutral chemical reactions between ultracold
localized ions and neutral molecules with single-particle resolution Phys. Rev. A 73 042712
[19] Okada K, Wada M, Boesten L, Nakamura T, Katayama I and Ohtani S 2003 Acceleration of the chemical
reaction of trapped Ca+ ions with H2O molecules by laser excitation J. Phys. B: At. Mol. Opt. Phys. 36 33-46
[20] Staanum P F, Hojbjerre K, Wester R and Drewsen M 2008 Probing isotope effects in chemical reactions using
single ions Phys. Rev. Lett. 100 243003
[21] Willitsch S, Bell M T, Gingell A D, Procter S R and Softley T P 2008 Cold reactive collisions between
laser-cooled ions and velocity-selected neutral molecules Phys. Rev. Lett. 100 043203
[22] Gingell A D, Bell M T, Oldham J M, Softley T P and Harvey J N 2010 Cold chemistry with electronically
excited Ca+ Coulomb crystals J. Chem. Phys. 133 194302-13
[23] Tong X, Winney A H and Willitsch S 2010 Sympathetic cooling of molecular ions in selected rotational and
vibrational states produced by threshold photoionization Phys. Rev. Lett. 105 143001
[24] Vogelius I S, Madsen L B and Drewsen M 2004 Rotational cooling of heteronuclear molecular ions with 1D,
2D,3 D and 2 n electronic ground states Phys. Rev. A 70 053412
[25] Staanum P F, Hojbjerre K, Skyt P S, Hansen A K and Drewsen M 2010 Rotational laser cooling of
vibrationally and translationally cold molecular ions Nat. Phys. 6 271-4
[26] Schneider T, Roth B, Duncker H, Ernsting I and Schiller S 2010 All-optical preparation of molecular ions in
the rovibrational ground state Nat. Phys. 6 275-8
[27] Schuster D I, Lev Bishop S, Chuang IL, DeMille D and Schoelkopf R J 2011 Cavity QED in a molecular ion
trap Phys. Rev. A 83 012311
[28] Guest M F and Hirst D M 1981 The potential-energy curves of BH+ Chem. Phys. Lett. 80 131-4
[29] Klein R, Rosmus P and Werner H J 1982 Ab initio calculations of low-lying states of the BH+ and AlH+ ions
J. Chem. Phys. 77 3559-70
[30] Ramsay D A and Sarre P J 1982 High-resolution study of the A²Π-X²Σ⁺ band system of BH+ J. Chem. Soc.,
Faraday Trans. 2 78 1331-8
[31] Kusunoki I 1984 Ab initio calculations of the doublet and quartet states of BH+ Chem. Phys. Lett. 105 175-9
[32] Viteri C R, Gilkison A T, Rixon S J and Grant E R 2006 Rovibrational characterization of X²Σ⁺ ¹¹BH⁺ by the
extrapolation of photoselected high Rydberg series in 11BH J. Chem. Phys. 124 144312
[33] Shi D-H, Liu H, Zhang J-P, Sun J-F, Liu Y-F and Zhu Z-L 2010 Spectroscopic investigations on BH+ (X²Σ⁺)
ion using MRCI method and correlation-consistent sextuple basis set augmented with diffuse functions Int. J. Quantum Chem. 111 2171-9
[34] Guest M F and Hirst DM 1981 Potential-energy curves for the ground and excited-states of AlH+ Chem.
Phys. Lett. 84 167-71
[35] Muller B and Ottinger C 1986 Chemiluminescent reactions of second-row atomic ions. I. Al+ + H2 ^
AlH+ (A2n, B2E+) + H J. Chem. Phys. 85 232-42
[36] Müller B and Ottinger C 1988 The spectroscopic constants of the A²Π and X²Σ⁺ states of AlH+
Z Naturforsch. A 43 1007-8
[37] Li G X, Gao T and Zhang Y G 2008 The splitting of low-lying or low excited states for hydride molecules
(cations) of the third period under spin-orbit coupling Chin. Phys. B 17 2040-7
[38] Wineland D J and Itano W M 1979 Laser cooling of atoms Phys. Rev. A 20 1521-40
[39] Metcalf H J and van der Straten P 1999 Laser Cooling and Trapping (New York: Springer)
[40] Isaev T A, Hoekstra S and Berger R 2010 Laser-cooled RaF as a promising candidate to measure molecular
parity violation Phys. Rev. A 82 052521
[41] Miller J 2010 Optical cycling paves the way for laser-cooled molecules Phys. Today 63 9-10
[42] Herzberg G 1950 Molecular spectra and molecular structure. I. Spectra of diatomic molecules 2nd edn
Molecular Spectra and Molecular Structure (New York: Van Nostrand-Reinhold)
[43] Hudson E R 2009 Method for producing ultracold molecular ions Phys. Rev. A 79 032716
[44] Labaziewicz J, Ge Y, Antohi P, Leibrandt D R, Brown K R and Chuang IL 2008 Suppression of heating rates
in cryogenic surface-electrode ion traps Phys. Rev. Lett. 100 013001
[45] Field R W and Lefebvre-Brion H 2004 The Spectra and Dynamics of Diatomic Molecules (San Diego: Elsevier Academic)
[46] Berkeland D J, Miller J D, Bergquist J C, Itano W M and Wineland D J 1998 Minimization of ion micromotion
in Paul trap J. Appl. Phys. 83 5025-33
[47] Almy G M and Watson M C 1934 The band spectrum of ionized aluminum hydride Phys. Rev. 45 0871-6
[48] Almy G M and Horsfall R B 1937 The spectra of neutral and ionized boron hydride Phys. Rev. 51 491-500
[49] Ottinger C and Reichmuth J 1981 Chemiluminescent ion-molecule reactions B+ + H2 J. Chem. Phys.
74 928-33
[50] Viteri C R, Gilkison A T, Rixon S J and Grant E R 2007 Isolated core excitation of11BH: photoabsorption in
competition with Rydberg predissociation Phys. Rev. A 75 013410
[51] Hirata S, Nooijen M and Bartlett R J 2000 High-order determinantal equation-of-motion coupled-cluster
calculations for ionized and electron-attached states Chem. Phys. Lett. 328 459-68
[52] Kendall R A, Dunning T H and Harrison R J 1992 Electron affinities of the first-row atoms revisited.
Systematic basis sets and wave functions J. Chem. Phys. 96 6796
[53] Purvis G D and Bartlett R J 1982 A full coupled-cluster singles and doubles model: the inclusion of
disconnected triples J. Chem. Phys. 76 1910
[54] Stanton J F and Bartlett R J 1993 The equation of motion coupled-cluster method. A systematic biorthogonal
approach to molecular excitation energies, transition probabilities, and excited state properties J. Chem. Phys. 98 7029
[55] Koch H, Christiansen O, J0rgensen P and Olsen J 1995 Excitation energies of BH, CH2, and Ne in full
configuration interaction and the hierarchy CCS, CC2, CCSD, and CC3 of coupled cluster models Chem. Phys. Lett. 244 75
[56] Crawford T D et al 2007 PSI3: an open-source ab initio electronic structure package J. Comput. Chem. 28 1610
[57] Ruedenberg K, Cheung L M and Elbert S T 1979 MCSCF optimization through combined use of natural
orbitals and the Brillouin-Levy-Berthier theorem Int. J. Quantum Chem. 16 1069
[58] Roos B O, Taylor P R and Siegbahn P E M 1980 A complete active space SCF method (CASSCF) using a
density matrix formulated super-CI approach Chem. Phys. 48 157
[59] Werner H-J et al 2009 MOLPRO, Version 2009.1, a Package of Ab Initio programs
[60] Biskupic S and Klein R 1988 MC SCF study of potential curves of small radicals Theochem--J. Mol. Struct.
47 27-31
[61] Feller D and Sordo J A 2000 A CCSDT study of the effects of higher order correlation on spectroscopic
constants. I. First row diatomic hydrides J. Chem. Phys. 112 5604-10
[62] Petsalakis I D and Theodorakopoulos G 2006 Multireference configuration interaction and quantum defect
calculations on the Rydberg states of the BH molecule Mol. Phys. 104 103-13
[63] Rosmus P and Meyer W 1977 PNO-CI and CEPA studies of electron correlation effects. IV. Ionization
energies of the first and second row diatomic hydrides and the spectroscopic constants of their ions J. Chem. Phys. 66 13-9
[64] Le Roy R J and Kraemer G T 2004 BCONT 2.2: computer program for calculating absorption coefficients, emission intensities or (golden rule) predissociation rates Chemical Physics Research Report CP-650R2 University of Waterloo (the source code and manual for this program may be obtained from
[65] Le Roy R J 2007 LEVEL 8.0: a computer program for solving the radial Schrödinger equation for bound and
quasibound levels Chemical Physics Research Report CP-663 University of Waterloo. (the source code and manual for this program may be obtained from
[66] Janik G, Nagourney W and Dehmelt H 1985 Doppler-free optical spectroscopy on the Ba+ mono-ion oscillator
J. Opt. Soc. Am. B 2 1251-7
[67] Berkeland D J and Boshier M G 2002 Destabilization of dark states and optical spectroscopy in Zeeman-
degenerate atomic systems Phys. Rev. A 65 033413
[68] Le Roy R J, Macdonald R G and Burns G 1976 Diatom potential curves and transition moment functions from continuum absorption coefficients: Br2 J. Chem. Phys. 65 1485-500
[69] Viteau M, Chotia A, Allegrini M, Bouloufa N, Dulieu O, Comparat D and Pillet P 2008 Optical pumping and
vibrational cooling of molecules Science 321 232-4
[70] Clark C R, Goeders J E, Dodia Y K, Viteri C R and Brown K R 2010 Detection of single-ion spectra by
Coulomb-crystal heating Phys. Rev. A 81 043428
[71] Cooper D L, Gerratt J and Raimondi M 1986 Potential-energy surfaces for the reaction of B+(S-1,P-3) with
H2 using spin-coupled VB theory—asymptotic regions of the surfaces Chem. Phys. Lett. 127 600-8
[72] Curtiss L A and Pople J A 1988 A theoretical study of the energies of BHn compounds J. Chem. Phys.
89 614-5
[73] Ishida M, Toyota K, Ehara M and Nakatsuji H 2001 Analytical energy gradient of high-spin multiplet state
calculated by the SAC-CI method Chem. Phys. Lett. 350 351-8
[74] Alvarado-Swaisgood A E and Harrison J F 1985 Electronic and geometric structures of scandium hydride
cations (ScH+ and ScH2+) J. Chem. Phys. 89 5198-202
[75] Pettersson L G M, Bauschlicher C W, Langhoff S R and Partridge H 1987 Positive ions of the first- and
second-row transition-metal hydrides J. Chem. Phys. 87 481-92
[76] Das K K and Balasubramanian K 1991 Potential-energy surfaces of LaH+ and LaH2+ J. Chem. Phys.
94 3722-9
[77] Glenewinkel-Meyer Th, Kowalski A, Müller B, Ottinger C and Breckenridge W H 1988 Emission-spectra
and electronic-structure of group IIIA monohalide cations J. Chem. Phys. 89 7112-25
[78] Glenewinkel-Meyer Th, Müller B, Ottinger C, Rosmus P, Knowles P J and Werner H J 1991 Ab initio
calculations on the four lowest electronic states of AlF+ and AlCl+ J. Chem. Phys. 95 5133-41
[79] Mochizuki Y and Tanaka K 1999 Theoretical investigation of the GaF molecule and its positive ion Theor.
Chem. Acc. 101 257-61
[80] Yoshikawa M and Hirst D M 1995 A theoretical study of the low-lying doublet states of the molecular ions
GaF+ and GaCl+ Chem. Phys. Lett. 244 258-62
[81] Mochizuki Y and Tanaka K 1998 Theoretical investigation on the GaH molecule and its positive ion Theor.
Chem. Acc. 99 88-94
[82] Boldyrev A I, Simons J and Schleyer P V 1993 Ab initio study of the electronic structures of lithium containing
diatomic molecules and ions J. Chem. Phys. 99 8793-804
[83] Cao Z X, Wu W and Zhang Q E 1998 Spectroscopic constants and bonding features of the low-lying states
of LiB and LiB+: comparative study of VBSCF and MO theory Int. J. Quantum Chem. 70 283-90
[84] Korek M, Rida M and Jbara A 2008 Theoretical calculation of the low lying electronic states of the molecular
ion KH+ J. Mol. Struct. 870 100-5
[85] Magnier S 2005 Theoretical description of the electronic structure of the alkali hydride cation NaH+ J. Phys.
Chem. A 109 5411-4
[86] Radzig A A and Smirnov B M 1985 Reference Data on Atoms, Molecules, and Ions (Springer Series in
Chemical Physics) (Berlin: Springer)
[87] Chanda A, Ho W C and Ozier I 1995 Hyperfine-resolved rovibrational spectrum of the X2 n state of HI+
J. Chem. Phys. 102 8725-35
[88] Kardahakis S, Koukounas C and Mavridis A 2005 First principles study of the diatomic charged fluorides MF
+/-, M = Sc, Ti, V, Cr, and Mn J. Chem. Phys. 122 054312